Search results for: edge computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1791

171 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products

Authors: Maciej Jedrzejczyk, Karolina Marzantowicz

Abstract:

The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce a volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insight into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply and a longer grid infrastructure life cycle. The methods used in this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security and adaptability to various market topologies. The intended output of this research is the design of a safer, more efficient and scalable Smart Grid network framework that bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New Smart Grid platforms achieving measurable efficiencies will allow for the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.
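The abstract does not specify the ledger design; as a minimal, hypothetical sketch of the tamper-evident property that a distributed transaction ledger would contribute to such a P2P exchange of short-term products, consider a hash-chained record of energy trades (all node names and fields are invented for illustration):

```python
import hashlib
import json

def make_entry(prev_hash, seller, buyer, kwh, price):
    """Append-only ledger entry: each record commits to its predecessor's hash."""
    body = {"prev": prev_hash, "seller": seller, "buyer": buyer,
            "kwh": kwh, "price": price}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(entries):
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for e in entries:
        body = {k: e[k] for k in ("prev", "seller", "buyer", "kwh", "price")}
        if e["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

# A two-trade chain between prosumer nodes
genesis = make_entry("0" * 64, "node_A", "node_B", kwh=1.5, price=0.12)
second = make_entry(genesis["hash"], "node_B", "node_C", kwh=0.7, price=0.11)
ledger = [genesis, second]
```

A real deployment would add signatures and a consensus mechanism; this sketch only shows why a flat, P2P-verifiable record needs no trusted middleman to detect tampering.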

Keywords: autonomous agents, distributed computing, distributed ledger technologies, large-scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids

Procedia PDF Downloads 303
170 Seismotectonics and Seismology of the North of Algeria

Authors: Djeddi Mabrouk

Abstract:

The slow convergence of the African and Eurasian plates appears to be the main cause of active deformation across the whole of North Africa. In Algeria this convergence is expressed in a fairly wide zone of deformation, bounded to the south by the Saharan Atlas and to the north by the Tell Atlas; the Maghrebin and Atlassian chains along North Africa are its consequence. In the junction zone, a compressive NW-SE regime prevails, with fold-and-fault structures and overthrusts. From a geological point of view, the northern part of Algeria is younger than the Saharan platform; it is unstable and constantly in motion, characterized by open or overturned folds, overthrusts and reverse faults, and it perpetually undergoes complex vertical and horizontal movements. Structurally, the north of Algeria belongs to the peri-Mediterranean Alpine orogen, essentially of Tertiary age, which extends from east to west of Algeria over 1200 km in a band about 100 km wide. The Alpine chain comprises three domains: the Tell Atlas in the north, the High Plateaus in the middle and the Saharan Atlas in the south. In the extreme south lies the Saharan platform, made of Precambrian basement covered by practically undeformed Paleozoic rocks. Northern Algeria and the Saharan platform are separated by a major accident running about 2000 km from Agadir (Morocco) to Gabes (Tunisia). Seismic activity is localized essentially in the coastal band of northern Algeria formed by the Tell Atlas, the High Plateaus and the Saharan Atlas. Earthquakes are limited to the first 20 km of the Earth's crust; they are caused by movements along NE-SW-oriented reverse faults or by strike-slip motion between the plates.
The central region is characterized by strong earthquake activity, located mainly in the Mitidja basin (of Neogene age); its southern periphery (the Blidean Atlas) constitutes the most important seismogenic source for the city of Algiers and, to the east, the Boumerdes region. The north-east region is also part of the Tellian domain, but it shows a deformation style different from other parts of northern Algeria: deformation is slow, with low to moderate seismic activity related to strike-slip tectonics. The most pronounced event there is that of 27 October 1985 (Constantine), of moment magnitude Mw = 5.9. The north-west region is also quite active, with shallow hypocenters that do not exceed 20 km. Seismicity is concentrated mainly in a narrow strip along the edges of the Quaternary and Neogene intramontane basins along the coast. The most violent earthquakes in this region are the 1790 Oran earthquake and the Orléansville (El Asnam) earthquakes of 1954 and 1980.

Keywords: Alpine chain, seismicity of northern Algeria, earthquakes in Algeria, geophysics, Earth

Procedia PDF Downloads 409
169 Delving into Market-Driving Behavior: A Conceptual Roadmap to Delineating Its Key Antecedents and Outcomes

Authors: Konstantinos Kottikas, Vlasis Stathakopoulos, Ioannis G. Theodorakis, Efthymia Kottika

Abstract:

Theorists have argued that Market Orientation comprises two facets, namely the Market Driven and the Market Driving components. The present theoretical paper centers on the latter, which to date has been notably under-investigated. The term Market Driving (MD) pertains to influencing the structure of the market, or the behavior of market players, in a direction that enhances the competitive edge of the firm. The main objectives of the paper are the specification of key antecedents and outcomes of Market Driving behavior. Market Driving firms behave proactively, leading their customers and changing the rules of the game rather than responding passively. Leading scholars were the first to conceive the notion, followed by some qualitative studies and a limited number of quantitative publications; recently, however, academicians have noted that research on the topic remains limited, expressing a strong necessity for further insights. Concerning the key antecedents, top management's Transformational Leadership (i.e., the form of leadership which influences organizational members by aligning their values, goals and aspirations to facilitate value-consistent behaviors) is one of the key drivers of MD behavior. Moreover, scholars have linked the MD concept with Entrepreneurship. Finally, the role that Employee Creativity plays in the development of MD behavior has been theoretically exemplified by a stream of literature. With respect to the key outcomes, it has been demonstrated that MD behavior positively triggers firm Performance, while theorists argue that it empowers the Competitive Advantage of the firm. Likewise, researchers explicate that MD behavior produces Radical Innovation. To test the robustness of the proposed theoretical framework, a combination of qualitative and quantitative methods is proposed.
In particular, in-depth interviews with distinguished executives and academicians, accompanied by a large-scale quantitative survey, will be employed in order to triangulate the empirical findings. Given that it drives overall firm success, the MD concept is of high importance to managers. Managers can become aware that passively reacting to market conditions is no longer sufficient. On the contrary, behaving proactively, leading the market, and shaping its status quo are innovative approaches that lead to a paramount competitive posture and innovation outcomes. This study also exemplifies that managers can foster MD behavior through Transformational Leadership, Entrepreneurship and the recruitment of creative employees. To date, the majority of publications on Market Orientation are unilaterally directed towards the responsive (i.e., the Market Driven) component. The present paper builds further on scholars' exhortations and investigates the Market Driving facet, ultimately aspiring to integrate the somewhat fragmented scientific findings into a holistic framework.

Keywords: entrepreneurial orientation, market driving behavior, market orientation

Procedia PDF Downloads 385
168 Investigating Seasonal Changes of Urban Land Cover with High Spatio-Temporal Resolution Satellite Data via Image Fusion

Authors: Hantian Wu, Bo Huang, Yuan Zeng

Abstract:

Divisions between wealthy and poor, private and public landscapes are propagated by the increasing economic inequality of cities. While these are the spatial reflections of larger social issues, urban design can at least employ spatial techniques that promote inclusive rather than exclusive, overlapping rather than segregated, interlinked rather than disconnected landscapes. Indeed, the type of edge or border between urban landscapes plays a critical role in the way the environment is perceived. China is experiencing rapid urbanization, which poses unpredictable environmental challenges. Urban green cover and water bodies are undergoing changes that are highly relevant to residents' wealth and happiness; however, very limited knowledge and data on these rapid changes are available. In this regard, enhancing the monitoring of the urban landscape with high-frequency methods, evaluating and estimating the impacts of urban landscape changes, and understanding the driving forces behind them can be a significant contribution to urban planning and research. High-resolution remote sensing data have been widely applied to urban management in China; a land-use map of the entire country at 10-meter resolution was published for 2018. However, such work focuses on large-scale, high-resolution land use and not on the seasonal change of urban covers. High-resolution satellites also have long revisit cycles (e.g., Landsat 8 requires 16 days to return to the same location), which cannot satisfy the requirements of monitoring urban landscape changes. On the other hand, aerial or unmanned aerial vehicle (UAV) sensing is limited by aviation regulations and cost, and has hardly been widely applied in mega-cities.
Moreover, these data are limited by climate and weather conditions (e.g., cloud, fog), which make capturing spatial and temporal dynamics a persistent challenge for the remote sensing community; during the rainy season in particular, no usable data may be available even from Sentinel satellites with their 5-day revisit interval. Many natural events and/or human activities drive changes in urban covers. This project aims to use high spatiotemporal fusion of remote sensing data to create short-cycle, high-resolution data sets for exploring high-frequency urban cover changes. The research will enhance the long-term monitoring applicability of spatiotemporal fusion for the urban landscape, optimizing the management of landscape borders and promoting an urban landscape inclusive of all communities.
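The fusion model itself is not named in the abstract. As a much-simplified sketch of the spatiotemporal fusion idea, the temporal change observed by a frequent but coarse sensor can be transferred onto an infrequent but fine image (real methods such as STARFM add spectral and spatial weighting on top of this; all arrays here are toy data):

```python
import numpy as np

def fuse(fine_t1, coarse_t1, coarse_t2):
    """Simplified spatiotemporal fusion.
    fine_t1:  high-resolution image at time t1, shape (H, W)
    coarse_*: low-resolution images resampled to the fine grid, shape (H, W)
    Returns a predicted fine-resolution image at t2."""
    # Temporal change seen by the frequent, coarse sensor
    delta = coarse_t2 - coarse_t1
    # Apply that change to the last available fine image
    return fine_t1 + delta

# Toy example: a 4x4 scene brightening uniformly by 0.2 between dates
fine_t1 = np.linspace(0.0, 1.0, 16).reshape(4, 4)
coarse_t1 = np.full((4, 4), fine_t1.mean())
coarse_t2 = coarse_t1 + 0.2
pred = fuse(fine_t1, coarse_t1, coarse_t2)
```

For a uniform brightening the prediction is exact; the skill of real fusion algorithms lies in handling spatially heterogeneous change.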

Keywords: urban land cover changes, remote sensing, high spatiotemporal fusion, urban management

Procedia PDF Downloads 128
167 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty

Authors: Ben Khayut, Lina Fabri, Maya Avikhana

Abstract:

Models of modern Artificial Narrow Intelligence (ANI) cannot (a) function independently and continuously without human intelligence, which is needed to retrain and reprogram the ANI models, or (b) think, understand, be conscious, cognize or infer under uncertainty and under changes in situations and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a conception, model, and method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System operating Under Uncertainty (CPCSFSCBMSUU). The system uses a neural network as its computational memory and activates its functions through perception, identification of real objects, fuzzy situational control, and the formation of images of these objects, modeling the psychological, linguistic, cognitive, and neural values of their properties and features; the meanings of these values are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images accumulated in the Memory. The functioning of the CPCSFSCBMSUU is carried out by subsystems for fuzzy situational control of all processes; computational perception; identification of reactions and actions; psycholinguistic cognitive fuzzy logical inference; decision making; reasoning; systems thinking; planning; awareness; consciousness; cognition; intuition; wisdom; analysis and processing of psycholinguistic, subject, visual, signal, sound and other objects; accumulation and use of data, information and knowledge in the Memory; and communication and interaction with other computing systems, robots and humans in order to solve joint tasks. To investigate the functional processes of the proposed system, the principles of situational control, fuzzy logic, psycholinguistics, informatics, and the modern possibilities of data science were applied.
The proposed self-controlled Brain and Mind System is intended for use as a plug-in in multilingual subject applications.
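The abstract names fuzzy situational control and fuzzy logical inference without giving details. As a generic, minimal illustration of Mamdani-style fuzzy inference (triangular memberships, max-min aggregation, centroid defuzzification; the two-rule base and the [0, 1] universes are invented for this sketch, not taken from the paper):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(load):
    """Two-rule Mamdani sketch:
       IF load is low  THEN response is small;
       IF load is high THEN response is large.
    Centroid defuzzification over a sampled output universe [0, 1]."""
    mu_low = tri(load, -0.5, 0.0, 1.0)
    mu_high = tri(load, 0.0, 1.0, 1.5)
    num = den = 0.0
    for i in range(101):
        x = i / 100.0
        # clip each consequent by its rule strength, aggregate by max
        mu = max(min(mu_low, tri(x, -0.5, 0.0, 1.0)),
                 min(mu_high, tri(x, 0.0, 1.0, 1.5)))
        num += x * mu
        den += mu
    return num / den if den else 0.0
```

A low input drives the defuzzified response toward small values, a high input toward large ones, with a smooth blend in between; this is the basic machinery that "fuzzy situational control" builds on.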

Keywords: computational brain, mind, psycholinguistic, system, under uncertainty

Procedia PDF Downloads 180
166 Transforming Data Science Curriculum Through Design Thinking

Authors: Samar Swaid

Abstract:

Today, corporations are moving toward the adoption of Design Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design Thinking, IDEO (Innovation, Design, Engineering Organization), defines Design Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has proved to be the road to developing innovative applications, interactive systems, scientific software and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure creating the "wow" effect on consumers. The Association for Computing Machinery task force on Data Science programs states that "Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine who works on data preparation, algorithm selection and model interpretation.
Thus, the proposed Data Science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design Thinking and teaching modules for data science programs.

Keywords: data science, design thinking, AI, curriculum, transformation

Procedia PDF Downloads 83
165 Role of Calcination Treatment on the Structural Properties and Photocatalytic Activity of Nanorice N-Doped TiO₂ Catalyst

Authors: Totsaporn Suwannaruang, Kitirote Wantala

Abstract:

The purposes of this research were to synthesize a titanium dioxide photocatalyst doped with nitrogen (N-doped TiO₂) by the hydrothermal method and to test the photocatalytic degradation of paraquat under UV and visible light illumination. The effect of the calcination treatment temperature on the physical and chemical properties and on photocatalytic efficiency was also investigated. The characteristics of the calcined N-doped TiO₂ photocatalysts, such as specific surface area, textural properties, bandgap energy, surface morphology, crystallinity, phase structure, elemental composition and charge states, were investigated by the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) methods, UV-visible diffuse reflectance spectroscopy (UV-Vis-DRS) using the Kubelka-Munk theory, wide-angle X-ray scattering (WAXS), focused ion beam scanning electron microscopy (FIB-SEM), X-ray photoelectron spectroscopy (XPS) and X-ray absorption spectroscopy (XAS), respectively. The results showed that the calcination temperature had a significant effect on surface morphology, crystallinity, specific surface area, pore size diameter, bandgap energy and nitrogen content, but an insignificant effect on phase structure and on the oxidation state of the titanium (Ti) atom. The N-doped TiO₂ samples exhibited only the anatase crystalline phase, because the nitrogen dopant in TiO₂ restrained the phase transformation from anatase to rutile. The samples presented a nanorice-like morphology. Particle expansion was found at calcination temperatures of 650 and 700°C, resulting in increased pore size diameter. The bandgap energy, determined by the Kubelka-Munk theory, was in the range 3.07-3.18 eV, slightly lower than the anatase standard (3.20 eV), indicating that the nitrogen dopant can shift the optical absorption edge of TiO₂ from the UV to the visible light region. Nitrogen content was observed at 100, 300 and 400°C only; the nitrogen element disappeared from 500°C onwards.
The nitrogen (N) atom is incorporated in the TiO₂ structure at interstitial sites. The uncalcined (100°C) sample displayed the highest percent paraquat degradation under both UV and visible light irradiation, because this sample exhibited both the highest specific surface area and the highest nitrogen content. Moreover, percent paraquat removal significantly decreased with increasing calcination treatment temperature. The nitrogen content in TiO₂, combined with the effect of the specific surface area, accelerated the rate of reaction by generating electrons and holes under illumination. Therefore, specific surface area and nitrogen content play important roles in the photocatalytic degradation of paraquat under UV and visible light illumination.
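For reference, the bandgap extraction named in the abstract relies on the standard Kubelka-Munk and Tauc relations; the transition exponent shown below assumes an indirect-allowed transition, as is usual for anatase TiO₂ (the abstract does not state which exponent was used):

```latex
% Kubelka-Munk function of the diffuse reflectance R_\infty,
% proportional to the ratio of absorption (alpha) to scattering (s)
F(R_\infty) = \frac{(1 - R_\infty)^2}{2\,R_\infty} \propto \frac{\alpha}{s}

% Tauc relation for an indirect-allowed transition: extrapolating the
% linear portion of (F(R) h\nu)^{1/2} versus h\nu to zero yields E_g
\left( F(R_\infty)\, h\nu \right)^{1/2} = C \left( h\nu - E_g \right)
```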

Keywords: restraining phase transformation, interstitial site, chemical charge state, photocatalysis, paraquat degradation

Procedia PDF Downloads 158
164 Challenging Role of Talent Management, Career Development and Compensation Management toward Employee Retention and Organizational Performance with Mediating Effect of Employee Motivation in Service Sector of Pakistan

Authors: Muhammad Younas, Sidra Sawati, M. Razzaq Athar

Abstract:

Organizational development history reveals that it has always been a challenge to identify and fathom the role of talent management, career development and compensation management in employee retention and organizational performance. Organizations strive hard to measure the impact of all the factors that affect employee retention and organizational performance, and researchers have worked extensively to establish the relationship of the independent variables (talent management, career development and compensation management) with the dependent variables (employee retention and organizational performance). Employees equipped with up-to-date skills and long-lasting loyalty play a significant role in the successful achievement of organizations' short-term and long-term goals, and retaining valuable and resourceful employees for a longer time is equally essential for meeting those goals. Organizations that spend a reasonable share of their resources on measures that retain employees through talent management and satisfactory career development enjoy a competitive edge over their competitors. Human resources are regarded as among the most precious and difficult resources to manage: they have their own needs and requirements, easily fall prey to monotony when career development is lacking, and their wants and aspirations are seldom met completely, but they can be managed through career development and compensation management. In this era of competition, organizations have to take viable steps to manage their resources, especially human resources; top management and managers keep working toward workable solutions to the challenges of career development and compensation management, as their ultimate goal is to ensure organizational performance at an optimal level.
The current study was conducted to examine the impact of talent management, career development and compensation management on employee retention and organizational performance, with the mediating effect of employee motivation, in the service sector of Pakistan. The study is based on the Resource-Based View (RBV) and Ability-Motivation-Opportunity (AMO) theories, which suggest that by increasing internal resources an organization can manage employee talent, career development through compensation management, and employee motivation more effectively, resulting in effective execution of HRM practices for employee retention and enabling the organization to achieve and sustain competitive advantage through optimal performance. Data were collected through a structured questionnaire based on adopted instruments after testing reliability and validity. A total of 300 employees from 30 firms in the service sector of Pakistan were sampled through a non-probability sampling technique. Regression analysis revealed that talent management, career development and compensation management have a significant positive impact on employee retention and perceived organizational performance. The results further showed that employee motivation has a significant mediating effect on employee retention and organizational performance. The interpretation of the findings, limitations, and theoretical and managerial implications are also discussed.
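The abstract reports a regression-based mediation analysis without further detail. A minimal sketch of how such a mediation pattern is typically checked with ordinary least squares, run on synthetic data standing in for the (unavailable) survey responses (variable names and effect sizes are hypothetical): the total effect of the predictor should shrink once the mediator is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical standardized scores: X = talent management, M = motivation, Y = retention
X = rng.normal(size=n)
M = 0.8 * X + rng.normal(scale=0.5, size=n)          # X drives the mediator
Y = 0.5 * M + 0.1 * X + rng.normal(scale=0.5, size=n)  # M carries most of the effect

def ols(y, *cols):
    """OLS coefficients (intercept first) via least squares."""
    A = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0]

c_total = ols(Y, X)[1]          # total effect of X on Y
coefs = ols(Y, X, M)            # regression controlling for the mediator
c_direct, m_effect = coefs[1], coefs[2]
# Mediation is indicated when c_direct is clearly smaller than c_total
# while the mediator's coefficient m_effect remains substantial.
```

Formal practice would add significance tests (e.g., a Sobel or bootstrap test of the indirect effect); this sketch only shows the coefficient comparison at the core of the argument.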

Keywords: career development, compensation management, employee retention, organizational performance, talent management

Procedia PDF Downloads 321
163 Colonizing the Colonizers: Layers of Subjectification in the Russian Caucasus

Authors: Aaron Derner

Abstract:

Unlike the histories of France, the UK, or even Spain, the Russian colonial past often dissolves behind the seemingly more salient Cold War figurations or the Soviet dissolution. The obvious explanation of Caucasian states' roles (that of Russian-propped governments obeying the whims of their patron) is but the latest instance of such oversight. Just as the results of colonial social and cultural interactions are indelibly stamped across France, Algeria, and every other former (and current) French holding, so too are Muscovite and Russian colonial ambitions embedded within the modern politics and cultures of both Russia and the Caucasus. Russian colonial artefacts are enhanced, and perhaps granted an additional social explanatory edge over those of the 'typical' colonizers, by the cyclical adoration for and noisy rejection of European cultural markers over the centuries, along with the somewhat unusual composition of the Cossacks, Russia's main agents of colonization on the Caucasian frontier. The story of Russia and Chechnya, and of all the Caucasus, is one of the manufacture of social and individual identity through "modes of subjectification" inherent in the region's colonial history and driven by the triangular interactions between three main groups: the Cossacks, the Caucasian mountain tribes, and the Russian metropole. Together, interactions between these social groups worked to shape and transform the lifestyles and institutional pathologies that constitute the Russian and Chechen states and the politics between them. At the core of this (Western) state-building is the simultaneous and seemingly contradictory desire to be more Western and emulate Western cultural and political practices while also desperately grasping for a uniquely Russian identity.
This sits somewhat ironically against the backdrop that Russia hosted a frontier-based settler society and had established that distinctly European feature, settler colonialism, early in its history, arguably giving it a claim to being the most "colonial" of the colonial powers. There is no doubt that these forces worked to shape contemporary Russian political and social identity, apparent in the mythic popularity of the Cossack in Russian literature, politics, and academic discourse. What needs to be added to the current narrative, however, is that beyond the Cossack identity's attractiveness on the grounds of its tones of freedom and resistance to unjust authority, the identity is rooted in the imperial ambitions and colonial experiences of the Russian state and is therefore a direct marker of domination and subjectification. Adding an unusual dimension to this not-uncommon cultural progression, the Russian state needed to colonize both the Caucasus and the Russian Cossacks, appropriating the latter in much the same way it appropriated the Circassian mountain tribes. The focus of this paper is not to tell yet another story of how one culture entered an area to overpower another, but how a 'powerful,' 'modern,' 'Western(ish)' culture was profoundly and continually changed through its contact with a group of tribal 'savages' and 'braves.'

Keywords: Russia, Chechnya, subjectification, Caucasus, Cossacks, Ukraine

Procedia PDF Downloads 79
162 The Use of Image Analysis Techniques to Describe a Cluster Cracks in the Cement Paste with the Addition of Metakaolinite

Authors: Maciej Szeląg, Stanisław Fic

Abstract:

The impact of elevated temperatures on construction materials manifests in changes to their physical and mechanical characteristics. Stresses and thermal deformations that occur inside the volume of the material cause its progressive degradation as the temperature increases. Finally, reactions and transformations of the multiphase structure of the cementitious composite cause its complete destruction. A particularly dangerous phenomenon is thermal shock, a sudden high-temperature load, which produces a high temperature gradient between the outer surface and the interior of the element in a relatively short time. The result of this process is the formation of cracks and scratches on the material's surface and inside the material. The article describes the use of computer image analysis techniques to identify and assess the structure of cluster cracks, caused by thermal shock, on the surfaces of modified cement pastes. Four series of specimens were tested. Two Portland cements were used (CEM I 42.5R and CEM I 52.5R); in addition, two of the series contained metakaolinite as a replacement for 10% of the cement content. Samples in each series were made with three w/b (water/binder) ratios: 0.4, 0.5 and 0.6. Surface cracks were created by a sudden temperature load at 200°C for 4 hours. Images of the cracked surfaces were obtained by scanning at 1200 DPI; digital processing and measurements were performed using ImageJ v. 1.46r software. In order to examine the cracked surface of the cement paste as a system of closed clusters, the theory of dispersed systems was used to describe the structure of the cement paste, with water as the dispersing phase and the binder as the dispersed phase, which is the initial stage of cement paste structure creation.
A cluster itself is considered to be the area on the specimen surface that is limited by cracks (created by sudden temperature loading) or by the edge of the sample. To describe the structure of the cracks, two stereological parameters were proposed: Ā, the average cluster area, and L̄, the average cluster perimeter. The goal of this study was to compare the investigated stereological parameters with the mechanical properties of the tested specimens; compressive and tensile strength tests were carried out according to EN standards. The method used in the study allowed the quantitative determination of defects occurring in the surfaces of the examined modified cement pastes. Based on the results, it was found that the nature of the cracks depends mainly on the physical parameters of the cement and the intermolecular interactions in the dispersion environment. Additionally, it was noted that the Ā/L̄ relation of the created clusters can be described by a single function for all tested samples, which indicates that the geometry of the thermal cracks is constant regardless of the presence of metakaolinite, the type of cement and the w/b ratio.
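The paper's measurements were made in ImageJ; as an illustrative re-implementation (not the authors' code) of the two stereological parameters on a toy binary map, where 0 marks a crack pixel and 1 a cluster pixel, the average cluster area and perimeter can be computed by 4-connected component labeling:

```python
from collections import deque

def cluster_stats(grid):
    """Average area and perimeter of 4-connected clusters of 1s;
    0s play the role of cracks separating clusters."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    areas, perims = [], []
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not seen[i][j]:
                area = perim = 0
                q = deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx]:
                            if not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                        else:
                            perim += 1  # edge borders a crack or the sample edge
                areas.append(area)
                perims.append(perim)
    n = len(areas)
    return sum(areas) / n, sum(perims) / n

# Two clusters separated by a vertical crack (the column of 0s)
grid = [[1, 1, 0, 1],
        [1, 1, 0, 1],
        [1, 1, 0, 1]]
```

On this map the 2x3 and 1x3 clusters give an average area of 4.5 pixels and an average perimeter of 9 pixel edges; on scanned specimens the same counts would be taken per calibrated pixel.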

Keywords: cement paste, cluster cracks, elevated temperature, image analysis, metakaolinite, stereological parameters

Procedia PDF Downloads 390
161 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has been recently proved in the field. Numerical flow modeling in a vertical variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, and the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (a software based on a finite element discretization). As van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is subject to considerable experimental and numerical studies. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α, n and the saturated conductivity of the filter on the piezometric heads, during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. A particular attention should also be brought to boundary condition modeling (surface ponding or evaporation) to be able to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of the storm events, we thus propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. 
To that end, a variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the water edge of the urban stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
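The identification loop sketched above (minimize a least-squares misfit between measured and computed quantities, with derivatives supplied by AD) can be illustrated on a toy problem. The sketch below fits the van Genuchten shape parameters alpha and n to synthetic retention data with a damped Gauss-Newton iteration; the closed-form retention curve stands in for the MHFEM Richards solver, and the finite-difference Jacobian stands in for the Tapenade-differentiated code. All numerical values are illustrative assumptions, not the Ostwaldergraben data.

```python
import numpy as np

def vg_saturation(params, h):
    # van Genuchten retention curve: effective saturation as a function
    # of suction head h, with shape parameters alpha and n (m = 1 - 1/n).
    alpha, n = params
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def residuals(params, h, s_obs):
    # Misfit between "measured" and computed saturations; the study
    # minimizes the analogous least-squares misfit on piezometric heads.
    return vg_saturation(params, h) - s_obs

def jacobian_fd(params, h, s_obs, eps=1e-7):
    # Finite-difference Jacobian; in the study these derivatives come
    # from automatic differentiation of the flow code (Tapenade).
    J = np.zeros((h.size, params.size))
    r0 = residuals(params, h, s_obs)
    for i in range(params.size):
        p = params.copy()
        p[i] += eps
        J[:, i] = (residuals(p, h, s_obs) - r0) / eps
    return J

# Synthetic observations from known parameters, then recover them
# with damped Gauss-Newton iterations.
h = np.linspace(0.1, 10.0, 50)
true = np.array([0.4, 1.5])
s_obs = vg_saturation(true, h)

theta = np.array([0.3, 1.3])   # initial guess
for _ in range(50):
    J = jacobian_fd(theta, h, s_obs)
    r = residuals(theta, h, s_obs)
    theta -= np.linalg.solve(J.T @ J + 1e-10 * np.eye(2), J.T @ r)

print(theta)
```

Because this is a zero-residual problem with a reasonable initial guess, the Gauss-Newton iterations converge rapidly to the true (alpha, n); in the real study the same loop runs against field piezometric heads.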

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 165
160 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) could be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, fitting of the resulting moments of the LSF to a probability density function (PDF) is needed. In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available, without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained by the Monte Carlo simulation (MCS) technique are included. To overcome the problem of approximating the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method, the first-order reliability method (FORM). 
The results in the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
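A minimal sketch of the moment-based approach described above: Rosenblueth's two-point estimate method evaluates the (possibly implicit) LSF at the 2^n combinations of mean plus/minus one standard deviation, fits the resulting moments to a normal distribution, and reads off a reliability index. The limit state g = R - S and all statistics below are illustrative assumptions, not the bridge models analyzed in the study, where g would be evaluated through an FEM run.

```python
import math
from itertools import product

import numpy as np

def bridge_limit_state(R, S):
    # Stand-in for an implicit limit state function g = R - S evaluated
    # through a numerical model (e.g., an FEM analysis of the bridge);
    # a closed form is used here only for illustration.
    return R - S

def point_estimate_moments(g, means, stds):
    # Rosenblueth's two-point estimate method: evaluate g at the 2^n
    # combinations of mu_i +/- sigma_i, each with weight 1 / 2^n.
    n = len(means)
    vals = []
    for signs in product([-1.0, 1.0], repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        vals.append(g(*x))
    vals = np.array(vals)
    return vals.mean(), vals.std()

# Assumed resistance R and load effect S statistics (illustrative).
means = [300.0, 200.0]
stds = [30.0, 40.0]

mu_g, sigma_g = point_estimate_moments(bridge_limit_state, means, stds)
beta = mu_g / sigma_g                       # reliability index (normal fit)
pf = 0.5 * math.erfc(beta / math.sqrt(2))   # Pf = Phi(-beta)
print(beta, pf)
```

For this linear g the point estimates reproduce the exact moments (mu_g = 100, sigma_g = 50, beta = 2), which is what a Monte Carlo run would also converge to; for a nonlinear, FEM-evaluated LSF the moments are approximate, and the fitted distribution plays the role described in the abstract.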

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, Monte Carlo simulation

Procedia PDF Downloads 347
159 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition

Authors: Tamir Michal

Abstract:

The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment. From a legal point of view, this subject has offered a launchpad for creative judicial decisions. Thus, for example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jew appealing the refusal of the State to recognize the Petitioner’s entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouins to establish permanent urban settlements, an interest which justified giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the purpose of constructing a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouins to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to deploy Protective Structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, arguendo, no selective discrimination was involved in the State’s decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the Theory of Recognition. The article analyses the issue with the tools of the Theory of Recognition, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights. 
By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitude of others. Consequently, even if the Court’s decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation, whereas Bedouin villages were not, is a setback in the struggle to make the Bedouins citizens with equal rights in Israeli society. As the Court held, ‘Beyond their protective function, the Migunit [Protective Structures] may make a moral and psychological contribution that should not be undervalued’. This contribution is one that the Bedouins did not receive in the Abu Afash verdict. The basic thesis is that the Court’s verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence can in fact prejudice them. Therefore, elements of recognition theory should be added in order to find a channel for cognitive respect, thereby advancing the Bedouins’ ability to perceive themselves as equal human beings in Israeli society.

Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development

Procedia PDF Downloads 353
158 Harnessing Sunlight for Clean Water: Scalable Approach for Silver-Loaded Titanium Dioxide Nanoparticles

Authors: Satam Alotibi, Muhammad J. Al-Zahrani, Fahd K. Al-Naqidan, Turki S. Hussein, Moteb Alotaibi, Mohammed Alyami, Mahdy M. Elmahdy, Abdellah Kaiba, Fatehia S. Alhakami, Talal F. Qahtan

Abstract:

Water pollution is a critical global challenge that demands scalable and effective solutions for water decontamination. In this research, we unveil a strategy for harnessing solar energy to synthesize silver (Ag) clusters on stable titanium dioxide (TiO₂) nanoparticles dispersed in water, without the need for traditional stabilization agents. These Ag-loaded TiO₂ nanoparticles exhibit exceptional photocatalytic activity, surpassing that of pristine TiO₂ nanoparticles, offering a promising solution for highly efficient water decontamination under sunlight irradiation. To the best of our knowledge, we have developed a unique method to stabilize TiO₂ P25 nanoparticles in water without the use of stabilization agents. This breakthrough allows us to create an ideal platform for the solar-driven synthesis of Ag clusters. Under sunlight irradiation, the stable dispersion of TiO₂ P25 nanoparticles acts as a highly efficient photocatalyst, generating electron-hole pairs. The photogenerated electrons effectively reduce silver ions derived from a silver precursor, resulting in the formation of Ag clusters. The Ag clusters loaded on TiO₂ P25 nanoparticles exhibit remarkable photocatalytic activity for water decontamination under sunlight irradiation. Acting as active sites, these Ag clusters facilitate the generation of reactive oxygen species (ROS) upon exposure to sunlight. These ROS play a pivotal role in rapidly degrading organic pollutants, enabling efficient water decontamination. To confirm the success of our approach, we characterized the synthesized Ag-loaded TiO₂ P25 nanoparticles using cutting-edge analytical techniques, such as transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray diffraction (XRD), and spectroscopic methods. These characterizations unequivocally confirm the successful synthesis of Ag clusters on stable TiO₂ P25 nanoparticles without traditional stabilization agents. 
Comparative studies were conducted to evaluate the superior photocatalytic performance of Ag-loaded TiO₂ P25 nanoparticles compared to pristine TiO₂ P25 nanoparticles. The Ag clusters loaded on TiO₂ P25 nanoparticles exhibit significantly enhanced photocatalytic activity, benefiting from the synergistic effect between the Ag clusters and TiO₂ nanoparticles, which promotes ROS generation for efficient water decontamination. Our scalable strategy for synthesizing Ag clusters on stable TiO₂ P25 nanoparticles without stabilization agents presents a game-changing solution for highly efficient water decontamination under sunlight irradiation. The use of commercially available TiO₂ P25 nanoparticles streamlines the synthesis process and enables practical scalability. The outstanding photocatalytic performance of Ag-loaded TiO₂ P25 nanoparticles opens up new avenues for their application in large-scale water treatment and remediation processes, addressing the urgent need for sustainable water decontamination solutions.

Keywords: water pollution, solar energy, silver clusters, TiO₂ nanoparticles, photocatalytic activity

Procedia PDF Downloads 70
157 Impact of Preksha Meditation on Academic Anxiety of Female Teenagers

Authors: Neelam Vats, Madhvi Pathak Pillai, Rajender Lal, Indu Dabas

Abstract:

The pressure of scoring higher marks to gain admission into a higher-ranked institution has become a social stigma for school students. It leads to various social and academic pressures on them, causing psychological anxiety. This undue stress on students may sometimes even steer them toward aggressive behavior or suicidal tendencies. The human mind is always surrounded by desires, emotions and passions, which usually disturb our mental peace. In such a scenario, we look for a solution that helps in removing all the obstacles of the mind and makes us mentally peaceful and strong enough to deal with all kinds of pressure. Preksha meditation is one such technique, which aims at bringing about positive changes for an overall transformation of personality. Hence, the present study was undertaken to assess the impact of Preksha Meditation on the academic anxiety of female teenagers. The study was conducted on 120 high school students from the capital city of India. All students were in the age group of 13-15 years and belonged to similar social and economic backgrounds. The sample was equally divided into two groups, i.e., an experimental group (N = 60) and a control group (N = 60). Subjects of the experimental group were given the intervention of Preksha Meditation practice by a trained instructor for one hour per day, six days a week, for three months in the first experimental stage and another three months in the second experimental stage. Subjects of the control group were not assigned any specific activity and continued their normal activities as usual. The Academic Anxiety Scale was used to collect data at multiple stages, i.e., the pre-experimental stage, post-experimental stage phase I, and post-experimental stage phase II. The data were statistically analyzed by computing the two-tailed t-test for inter-group comparisons and Sandler’s A test, with alpha = 0.05 (p < 0.05), for intra-group comparisons. 
The study concluded that longer-duration practice of Preksha Meditation brings about significant and beneficial changes in the pattern of academic anxiety.
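The two statistical comparisons named above can be sketched as follows. The anxiety scores are simulated (hypothetical values, not the study's data); the formulas are the standard pooled-variance two-sample t statistic for the inter-group comparison and Sandler's A statistic for the paired intra-group comparison.

```python
import numpy as np

def two_tailed_t(x, y):
    # Pooled-variance two-sample t statistic for the inter-group
    # (experimental vs. control) comparison.
    n1, n2 = len(x), len(y)
    sp2 = (((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1))
           / (n1 + n2 - 2))
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

def sandlers_A(pre, post):
    # Sandler's A for the intra-group (pre vs. post) comparison:
    # A = sum(d^2) / (sum(d))^2 over the paired differences d.
    d = np.asarray(pre) - np.asarray(post)
    return np.sum(d ** 2) / np.sum(d) ** 2

# Simulated anxiety scores (hypothetical, N = 60 per group).
rng = np.random.default_rng(7)
experimental = rng.normal(45, 8, 60)   # post-intervention scores
control = rng.normal(55, 8, 60)
t = two_tailed_t(experimental, control)

pre = rng.normal(55, 8, 60)            # one group, before practice
post = pre - rng.normal(5, 2, 60)      # anxiety drops after practice
A = sandlers_A(pre, post)
print(t, A)
```

The computed t is then compared against the two-tailed critical value at alpha = 0.05, and A against Sandler's tabulated critical value, to decide significance.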

Keywords: academic anxiety, academic pressure, Preksha, meditation

Procedia PDF Downloads 133
156 Developing Creative and Critically Reflective Digital Learning Communities

Authors: W. S. Barber, S. L. King

Abstract:

This paper is a qualitative case study analysis of the development of a fully online learning community of graduate students through arts-based community building activities. With increasing numbers and types of online learning spaces, it is incumbent upon educators to continue to push the edge of what best practices look like in digital learning environments. In digital learning spaces, instructors can no longer be seen as purveyors of content knowledge to be examined at the end of a set course by a final test or exam. The rapid and fluid dissemination of information via Web 3.0 demands that we reshape our approach to teaching and learning, from one that is content-focused to one that is process-driven. Rather than having instructors as formal leaders, today’s digital learning environments require us to share expertise, as it is the collective experiences and knowledge of all students together with the instructors that help to create a very different kind of learning community. This paper focuses on innovations pursued in a 36-hour, 12-week graduate course in higher education entitled “Critical and Reflective Practice”. The authors chronicle their journey to developing a fully online learning community (FOLC) by emphasizing the elements of social, cognitive, emotional and digital spaces that form a moving interplay through the community. In this way, students embrace anywhere, anytime learning and often take the learning, as well as the relationships they build and skills they acquire, beyond the digital class into real-world situations. We argue that in order to increase student online engagement, pedagogical approaches need to stem from two primary elements, creativity and critical reflection, which are essential pillars upon which instructors can co-design learning environments with students. The theoretical framework for the paper is based on the interaction and interdependence of Creativity, Intuition, Critical Reflection, Social Constructivism and FOLCs. 
By leveraging students’ embedded familiarity with a wide variety of technologies, this case study of a graduate level course on critical reflection in education, examines how relationships, quality of work produced, and student engagement can improve by using creative and imaginative pedagogical strategies. The authors examine their professional pedagogical strategies through the lens that the teacher acts as facilitator, guide and co-designer. In a world where students can easily search for and organize information as self-directed processes, creativity and connection can at times be lost in the digitized course environment. The paper concludes by posing further questions as to how institutions of higher education may be challenged to restructure their credit granting courses into more flexible modules, and how students need to be considered an important part of assessment and evaluation strategies. By introducing creativity and critical reflection as central features of the digital learning spaces, notions of best practices in digital teaching and learning emerge.

Keywords: online, pedagogy, learning, communities

Procedia PDF Downloads 407
155 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the fields of sensors, electronics and computing results in increasingly frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion applied for the evacuation of people from heights during a fire. For both systems, it is possible to ensure adaptive performance, but the realization of the system’s adaptation is completely different. The reason for this lies in the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with minimal changes of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of square centimeters. Because of these facts, the two shock-absorbing systems are controlled using completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided using a prediction of the entire impact mitigation process. Similarities of both approaches, including the applied models, algorithms and equipment, are discussed. 
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 92
154 Preparation, Characterization and Photocatalytic Activity of New Noble Metal-Modified TiO2@SrTiO3 and SrTiO3 Photocatalysts

Authors: Ewelina Grabowska, Martyna Marchelek

Abstract:

Among the various semiconductors, nanosized TiO2 has been widely studied due to its high photosensitivity, low cost, low toxicity, and good chemical and thermal stability. However, there are two main drawbacks to the practical application of pure TiO2 films. One is that TiO2 can be excited only by ultraviolet (UV) light due to its intrinsic wide bandgap (3.2 eV for anatase and 3.0 eV for rutile), which limits its practical efficiency for solar energy utilization since UV light makes up only 4-5% of the solar spectrum. The other is that a high electron-hole recombination rate will reduce the photoelectric conversion efficiency of TiO2. In order to overcome the above drawbacks and modify the electronic structure of TiO2, some semiconductors (e.g., CdS, ZnO, PbS, Cu2O, Bi2S3, and CdSe) have been used to prepare coupled TiO2 composites, improving their charge separation efficiency and extending the photoresponse into the visible region. It has been proved that the fabrication of p-n heterostructures by combining n-type TiO2 with p-type semiconductors is an effective way to improve the photoelectric conversion efficiency of TiO2. SrTiO3 is a good candidate for coupling with TiO2 and improving the photocatalytic performance of the photocatalyst because its conduction band edge is more negative than that of TiO2. Due to the potential differences between the band edges of these two semiconductors, the photogenerated electrons transfer from the conduction band of SrTiO3 to that of TiO2. Conversely, the photogenerated holes transfer from the valence band of TiO2 to that of SrTiO3. The photogenerated charge carriers can thus be efficiently separated by these processes, resulting in the enhancement of the photocatalytic properties of the photocatalyst. Additionally, one of the methods for improving photocatalyst performance is the addition of nanoparticles containing one or two noble metals (Pt, Au, Ag and Pd) deposited on the semiconductor surface. 
The proposed mechanisms are: (1) the surface plasmon resonance of noble metal particles is excited by visible light, facilitating the excitation of surface electrons and interfacial electron transfer; (2) some energy levels can be produced in the band gap of TiO2 by the dispersion of noble metal nanoparticles in the TiO2 matrix; (3) noble metal nanoparticles deposited on TiO2 act as electron traps, enhancing electron-hole separation. In view of this, we recently obtained a series of TiO2@SrTiO3 and SrTiO3 photocatalysts loaded with noble metal NPs using the photodeposition method. The M-TiO2@SrTiO3 and M-SrTiO3 photocatalysts (M = Rh, Rt, Pt) were studied for the photodegradation of phenol in the aqueous phase under UV-Vis and visible irradiation. Moreover, in the second part of our research, hydroxyl radical formation was investigated. Fluorescence of an irradiated coumarin solution was used as a method of ˙OH radical detection. Coumarin readily reacts with the generated hydroxyl radicals, forming hydroxycoumarins. Although the major hydroxylation product is 5-hydroxycoumarin, only the 7-hydroxy product of coumarin hydroxylation emits fluorescent light. Thus, this method was used only for hydroxyl radical detection, not for determining the concentration of hydroxyl radicals.

Keywords: composites TiO2, SrTiO3, photocatalysis, phenol degradation

Procedia PDF Downloads 224
153 Augmented and Virtual Reality Experiences in Plant and Agriculture Science Education

Authors: Sandra Arango-Caro, Kristine Callis-Duehl

Abstract:

The Education Research and Outreach Lab at the Donald Danforth Plant Science Center established the Plant and Agriculture Augmented and Virtual Reality Learning Laboratory (PAVRLL) to promote science education through professional development, school programs, internships, and outreach events. Professional development is offered to high school and college science and agriculture educators on the use and applications of the zSpace and Oculus platforms. Educators learn to use, edit, or create lesson plans in the zSpace platform that are aligned with the Next Generation Science Standards. They also learn to use virtual reality experiences created by the PAVRLL available in Oculus (e.g., The Soybean Saga). Using a cost-free loan rotation system, educators can bring the AVR units to the classroom and offer AVR activities to their students. Each activity has user guides and activity protocols for both teachers and students. The PAVRLL also offers activities for 3D plant modeling. High school students work in teams of art-, science-, and technology-oriented students to design and create 3D models of plant species that are under research at the Danforth Center and present their projects at scientific events. Those 3D models are open access through the zSpace platform and are used by the PAVRLL for professional development and the creation of VR activities. Both teachers and students acquire knowledge of plant and agriculture content and real-world problems, gain skills in AVR technology, 3D modeling, and science communication, and become more aware of and interested in plant science. Students who participate in the PAVRLL activities complete pre- and post-surveys and reflection questions that evaluate interests in STEM and STEM careers, students’ perceptions of three design features of biology lab courses (collaboration, discovery/relevance, and iteration/productive failure), plant awareness, and engagement and learning in AVR environments. 
The PAVRLL was established in the fall of 2019, and since then, it has trained 15 educators, three of whom will implement the AVR programs in the fall of 2021. Seven students have worked in the 3D plant modeling activity through a virtual internship. Due to the COVID-19 pandemic, the number of teachers trained and of classroom implementations has been very limited. It is expected that in the fall of 2021, students will come back to the schools in person, and by the spring of 2022, the PAVRLL activities will be fully implemented. This will allow the collection of enough data on student assessments to provide insights into benefits and best practices for the use of AVR technologies in the classroom. The PAVRLL uses cutting-edge educational technologies to promote science education and assess their benefits, and it will continue its expansion. Currently, the PAVRLL is applying for grants to create its own virtual labs where students can engage in authentic research experiences using real Danforth research data, based on programs the Education Lab has already used in classrooms.

Keywords: assessment, augmented reality, education, plant science, virtual reality

Procedia PDF Downloads 174
152 The Relationship between Central Bank Independence and Inflation: Evidence from Africa

Authors: R. Bhattu Babajee, Marie Sandrine Estelle Benoit

Abstract:

The past decades have witnessed a considerable institutional shift towards central bank independence across the economies of the world. The motivation behind such a change is the acceptance that increased central bank autonomy has the power of alleviating inflation bias. Hence, it is a pertinent issue whether Central Bank Independence (CBI) acts as a significant factor behind price stability in African economies, or whether this macroeconomic aim in these countries results from other economic, political or social factors. The main research objective of this paper is to assess the relationship between central bank autonomy and inflation in African economies, where inflation has proved to be a serious problem. In this optic, we measure the degree of CBI in Africa by computing the turnover rates of central bank governors, thereby studying whether decisions made by African central banks are affected by external forces. The purpose of this study is to investigate empirically the association between CBI and inflation for 10 African economies over a period of 17 years, from 1995 to 2012. The sample includes Botswana, Egypt, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Nigeria, South Africa, and Uganda. In contrast to much empirical research, we do not use the usual static panel model, as it is associated with potential misspecification arising from the absence of dynamics. To address this issue, a dynamic panel data model which integrates several control variables is used. Firstly, the analysis includes dynamic terms to capture the tenacity of inflation. Given inflation inertia, which is very likely in African countries, there exists the need to include lagged inflation in the empirical model. Secondly, due to the known reverse causality between Central Bank Independence and inflation, the system generalized method of moments (GMM) is employed. 
With GMM estimators, the presence of unknown forms of heteroskedasticity as well as autocorrelation in the error term is admissible. Thirdly, control variables have been used to enhance the efficiency of the model. The main finding of this paper is that central bank independence is negatively associated with inflation, even after including control variables.
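The turnover-rate proxy for CBI mentioned above reduces to a simple computation: the number of governor changes within the sample period divided by the period length in years, with higher turnover read as lower de facto independence. A minimal sketch, using hypothetical governor change years for one central bank (illustrative only, not any country in the sample):

```python
def turnover_rate(change_years, period_start, period_end):
    # De facto CBI proxy: number of central bank governor changes
    # within the period, divided by the period length in years.
    changes = [y for y in change_years if period_start < y <= period_end]
    return len(changes) / (period_end - period_start)

# Hypothetical governor change years for one central bank.
changes = [1995, 1999, 2003, 2004, 2008]
tor = turnover_rate(changes, 1995, 2012)
print(tor)  # 4 changes over 17 years
```

The resulting per-country rates then enter the dynamic panel as the CBI measure alongside lagged inflation and the control variables.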

Keywords: central bank independence, inflation, macroeconomic variables, price stability

Procedia PDF Downloads 365
151 Attitudes of Gratitude: An Analysis of 30 Cancer Patient Narratives Published by Leading U.S. Cancer Care Centers

Authors: Maria L. McLeod

Abstract:

This study examines the ways in which cancer patient narratives are portrayed and framed on the websites of three leading U.S. cancer care centers: The University of Texas MD Anderson Cancer Center in Houston, Memorial Sloan Kettering Cancer Center in New York, and Seattle Cancer Care Alliance. Thirty patient stories, ten from each cancer center website blog, were analyzed using qualitative and quantitative textual analysis of unstructured data, documenting repeated use of specific metaphors and tropes while charting common themes and other elements of story structure and content. Patient narratives were coded using grounded theory as the basis for conducting emergent qualitative research. As part of a systematic, inductive approach to collecting and analyzing data, recurrent and unique themes were examined and compared in terms of positive and negative framing, patient agency, and institutional praise. All three of these cancer care centers are teaching hospitals with university affiliations that emphasize an evidence-based scientific approach to treatment utilizing the latest research and cutting-edge techniques and technology. Thus, the use of anecdotal evidence presented in patient narratives could be perceived as being in conflict with this evidence-based model, as the patient stories are not an accurate representation of scientific outcomes related to developing cancer, cancer recurrence, or cancer outcomes. The representative patient narratives tend to exclude or downplay adverse responses to treatment, survival rates, integrative and/or complementary cancer treatments, cancer prevention and causes, and barriers to treatment, such as the limitations of insurance plans, the costs of treatment, and/or other issues related to access, potentially contributing to false narratives and inaccurate notions of cancer prevention, cancer care treatment and the potential for a cure. 
Both quantitative and qualitative findings demonstrate that cancer patient stories featured on the blogs of the nation’s top cancer care centers deemphasize patient agency and, instead, emphasize deference and gratitude toward the institutions where the featured patients received treatment. Along these lines, language choices reflect positive framing of the cancer experience. Accompanying portrait photos of healthy-appearing subjects, as well as positively framed headlines, subheads, and pull quotes, function similarly, reflecting hopeful, transformative experiences and outcomes over hardship and suffering. Although patient narratives include real, factual scientific details and descriptions of actual events, the stories lack references to the more negative realities of cancer diagnosis and treatment. Instead, they emphasize the triumph of survival, by which the cancer care center, in the savior/hero role, enables the patient’s success, represented as a cathartic medical journey.

Keywords: cancer framing, cancer stories, medical gaze, patient narratives

Procedia PDF Downloads 163
150 The Study of Cost Accounting in S Company Based on TDABC

Authors: Heng Ma

Abstract:

Third-party warehousing logistics plays an important role in the development of external logistics. At present, third-party logistics in our country is still a new industry, and its accounting system has not yet been established. The current financial accounting of third-party warehousing logistics mainly follows the traditional way of thinking: it is only able to provide the total cost information of the entire enterprise during the accounting period and cannot reflect indirect operating cost information. In order to solve the problem of cost information distortion in the third-party logistics industry and improve the level of logistics cost management, this paper combines theoretical research and case analysis to reflect cost allocation by building a third-party logistics costing model using Time-Driven Activity-Based Costing (TDABC), and takes S company as an example to account for and control warehousing logistics costs. Based on the idea that “products consume activities and activities consume resources”, TDABC takes time as the main cost driver and uses time equations to assign resource costs to cost objects. In S company, the cost objects are three warehouses engaged in warehousing and transportation services (the second warehouse serving as a transport point). Each of these three warehouses includes five departments, the Business Unit, Production Unit, Settlement Center, Security Department and Equipment Division, and the activities in these departments are classified as in/out-of-storage forecasting, in/out-of-storage or transit handling, and safekeeping work. By computing the capacity cost rate and building the time equations, the paper calculates the final operation cost so as to reveal the real cost. The numerical analysis results show that TDABC can accurately reflect the cost allocation across service customers and reveal the spare capacity cost of each resource center, verifying the feasibility and validity of TDABC for cost accounting in the third-party logistics industry. 
It inspires enterprises focus on customer relationship management and reduces idle cost to strengthen the cost management of third-party logistics enterprises.
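The core TDABC arithmetic described above (capacity cost rate, time equations, idle capacity cost) can be sketched in a few lines. The department figures below are invented for illustration and are not the S company numbers.

```python
# Minimal TDABC sketch; all figures are hypothetical.

def capacity_cost_rate(total_resource_cost, practical_capacity_minutes):
    """Cost of supplying one minute of capacity in a resource center."""
    return total_resource_cost / practical_capacity_minutes

def time_equation(base_minutes, drivers):
    """Time-driven estimate: base time plus an increment per time driver,
    e.g. extra minutes per pallet handled. `drivers` is a list of
    (quantity, minutes_per_unit) pairs."""
    return base_minutes + sum(qty * minutes for qty, minutes in drivers)

# Hypothetical warehousing department: 120,000 cost units per quarter,
# 48,000 minutes of practical capacity.
rate = capacity_cost_rate(120_000, 48_000)      # 2.5 per minute

# One in-storage order: 8 base minutes plus 0.5 min per pallet, 40 pallets.
minutes = time_equation(8, [(40, 0.5)])         # 28 minutes
order_cost = rate * minutes                     # 70.0

# Spare capacity cost = unused minutes x capacity cost rate.
used_minutes = 40_000
idle_cost = (48_000 - used_minutes) * rate      # 20,000.0
```

The idle-capacity line is the point the abstract stresses: by costing only the time actually consumed, TDABC exposes unused capacity instead of burying it in overhead allocations.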

Keywords: third-party logistics enterprises, TDABC, cost management, S company

Procedia PDF Downloads 360
149 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created from the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate an explanation for the DNN model, we defined a new concept called the impact score, which quantifies the impact of a clinical condition's presence or value on the predicted outcome. Like the log odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation.
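The validation step above — rank-correlating aggregated impact scores against LR log odds ratios — can be reproduced in miniature. The data here are synthetic stand-ins, and the implementation is a minimal tie-free Spearman sketch, not the study's code.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no tie handling): Pearson correlation
    of the rank vectors."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Synthetic stand-ins: "log odds ratios" from an LR model and noisy
# "aggregated impact scores" that track them imperfectly, mimicking the
# strong-but-not-perfect agreement reported in the abstract.
rng = np.random.default_rng(42)
log_odds = rng.normal(size=30)
impact_scores = log_odds + rng.normal(scale=0.5, size=30)

rho = spearman_rho(impact_scores, log_odds)   # strong but not perfect
```

Because Spearman's rho compares ranks rather than raw values, it only asks that the two explanation methods order the clinical conditions similarly, which is the appropriate standard when the scales of impact scores and log odds differ.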

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 154
148 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in cone angle and foam rheology. The CFD simulations were made with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) method was used (interFoam) to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
These simulations shed new light on the cone’s behavior within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, enabling a better evaluation of the efficiency of cones as foam breakers. This study thus contributes to clarifying, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
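The shear-thinning Herschel-Bulkley rheology used for the foam can be illustrated directly; the parameter values below are arbitrary, not the ones fitted in this study.

```python
def hb_stress(gamma_dot, tau0, k, n):
    """Herschel-Bulkley shear stress tau = tau0 + k * gamma_dot**n
    (valid for gamma_dot > 0, i.e. once the yield stress is exceeded)."""
    return tau0 + k * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau0, k, n):
    """mu_app = tau / gamma_dot; decreases with shear rate when n < 1,
    which is the shear-thinning behavior ascribed to foam."""
    return hb_stress(gamma_dot, tau0, k, n) / gamma_dot

# Arbitrary illustrative parameters: yield stress tau0 [Pa],
# consistency k [Pa.s^n], flow index n < 1 (shear thinning).
tau0, k, n = 10.0, 5.0, 0.4
mu_slow = apparent_viscosity(1.0, tau0, k, n)    # 15.0 Pa.s
mu_fast = apparent_viscosity(100.0, tau0, k, n)  # ~0.42 Pa.s
```

The two evaluations show why a Newtonian model misrepresents foam near a rotating cone: the apparent viscosity in the high-shear region close to the cone wall is orders of magnitude lower than in the quiescent foam far from it.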

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 207
147 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs quantum dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room-temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators such as LYSO or BaF2. The high refractive index of the GaAs matrix (n = 3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, which separates the scintillator film so that it can be transferred to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at room temperature, with an activation energy for electron escape from the QDs to the barrier of ~60 meV.
Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after the light had traveled over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4 x 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had an 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. These data confirm the unique properties of this scintillation detector, which can potentially be much faster than any inorganic scintillator currently in use.
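Two of the numbers above can be checked with back-of-the-envelope arithmetic: Beer-Lambert attenuation at the quoted ~3 cm⁻¹, and a rough collection efficiency from the quoted light yield. This is a consistency sketch, not the authors' analysis; it assumes unity photodiode quantum efficiency, which is likely why it lands slightly below the quoted ~7%.

```python
import math

def transmitted_fraction(alpha_per_cm, distance_cm):
    """Beer-Lambert law: fraction of guided light surviving the distance."""
    return math.exp(-alpha_per_cm * distance_cm)

# At the quoted ~3 cm^-1, light keeps ~74% of its intensity per mm:
f_1mm = transmitted_fraction(3.0, 0.1)

# Rough collection efficiency: collected electrons divided by photons
# generated by a 5.5 MeV alpha at ~240,000 photons/MeV.
photons = 240_000 * 5.5        # ~1.32e6 photons per alpha
efficiency = 8.4e4 / photons   # ~6.4%, consistent with the quoted ~7%
```

The first number also shows why attenuation limits lateral detector size: at 3 cm⁻¹, light from a scintillation event 1 cm from the photodiode would arrive attenuated by a factor of ~20.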

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 257
146 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with the huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data volume of each partition balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and to deal with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This leads to improved QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
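A minimal sketch of the Hamming-distance machinery behind attribute sequencing (the keywords list both "matching algorithm" and "hamming distance") might look as follows. The usage matrix and the greedy placement are illustrative assumptions, not the paper's exact algorithm.

```python
def hamming(u, v):
    """Hamming distance between two equal-length 0/1 usage vectors."""
    return sum(a != b for a, b in zip(u, v))

# Hypothetical attribute usage matrix: rows are queries, columns are
# attributes (1 = the query touches that attribute).
usage = [
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]
columns = list(zip(*usage))   # one usage vector per attribute

def order_attributes(cols):
    """Greedy sketch of attribute sequencing: start from attribute 0 and
    repeatedly append the unplaced attribute closest (in Hamming
    distance) to the last placed one, so attributes used by similar
    query sets end up adjacent and can share a vertical fragment."""
    order, remaining = [0], set(range(1, len(cols)))
    while remaining:
        last = order[-1]
        nxt = min(sorted(remaining), key=lambda j: hamming(cols[last], cols[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

sequence = order_attributes(columns)
```

Once such a sequence is fixed, cutting it into contiguous runs yields vertical fragments in which co-accessed attributes sit together, which is what enables the balanced, parallel-friendly partitions the cost model then evaluates.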

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 158
145 Investigating the Role of Community in Heritage Conservation through the Ladder of Citizen Participation Approach: Case Study, Port Said, Egypt

Authors: Sara S. Fouad, Omneya Messallam

Abstract:

Egypt has countless prestigious buildings and a diversity of cultural heritage located in many of its cities. Most researchers, archaeologists, stakeholders and governmental bodies pay more attention to the big cities such as Cairo and Alexandria, due to the country’s centralized nature. However, there are other historic cities that are grossly neglected and in need of emergency conservation. For instance, Port Said is a former colonial city, established in the nineteenth century on the edge of the northeast Egyptian coast between the Mediterranean Sea and the Suez Canal. This city was chosen because it presents one of the important Egyptian archaeological sites that archive Egyptian architecture of the 19th and 20th centuries. The historic urban fabric is divided into three main districts: the Arab, the European (Al-Afrang), and Port Fouad. The European district was selected as the research case study as it has cultural diversity and significant buildings, and includes the largest number of listed heritage buildings in Port Said. Based on questionnaires and interviews, several initiatives have been undertaken since 2003 by the Alliance Francaise, the National Organization for Urban Harmony (NOUH), some Non-Governmental Organizations (NGOs), and a small number of community residents to highlight the city’s important legacy and protect it from demolition. Unfortunately, the limitation of their participation in decision-making policies is a crucial threat facing sustainable heritage conservation. Therefore, encouraging the local community to participate in conserving its architectural heritage would create a self-confident community, capable of making decisions for the city’s future development.
This paper aims to investigate the role of local inhabitants in protecting their built heritage by assessing the community’s level of participation at two points in time (2012 and 2018), based on the ladder of citizen participation approach. It also seeks to encourage community participation in order to promote architectural conservation, heritage management, and sustainable development in the city. The methodology followed in this empirical research involves several data-collection methods: structured observations, questionnaires, interviews, and mental mapping. The questionnaire was distributed among 92 local inhabitants aged 18-60 years. At the outset, the research found mostly negative attitudes among the local inhabitants, with little motivation or confidence in their role in safeguarding their architectural heritage. Over time, these negative attitudes changed. Raising public awareness and encouraging community participation by providing residents with a real opportunity to take part in decision-making may therefore foster a positive relationship between community residents and the built heritage, which is essential for promoting its preservation and sustainable development.

Keywords: buildings preservation, community participation, heritage conservation, local inhabitant, ladder of citizen participation

Procedia PDF Downloads 168
144 Attention Treatment for People with Aphasia: Language-Specific vs. Domain-General Neurofeedback

Authors: Yael Neumann

Abstract:

Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that for language tests, Participant 1 (moderate aphasia) aligned with existing literature, showing L-SAT was more effective than Play Attention. However, Participants 2 (very severe) and 3 and 4 (mild) did not conform to this pattern; both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participant performed well initially, leaving limited room for improvement. In attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. 
Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, measuring switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant’s struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration, with only five weeks allocated for each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, the performance of participants appeared influenced by their level of aphasia severity. The moderate PWA’s results were most aligned with existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT).
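The theta-to-beta ratio that the Play Attention feedback loop rewards participants for lowering can be computed from EEG band powers. The FFT-based estimator and synthetic traces below are an illustrative sketch, not the commercial system's algorithm.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean FFT power of signal x within the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs < hi)
    return power[band].mean()

def theta_beta_ratio(x, fs):
    """Theta (4-8 Hz) power over beta (13-30 Hz) power: the quantity a
    neurofeedback loop of this kind drives downward by rewarding
    suppressed theta and increased beta activity."""
    return band_power(x, fs, 4, 8) / band_power(x, fs, 13, 30)

# Synthetic 4-second traces sampled at 256 Hz: one theta-dominated
# (inattentive-like), one beta-dominated (attentive-like).
fs = 256
t = np.arange(0, 4, 1 / fs)
theta_heavy = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
beta_heavy = 0.5 * np.sin(2 * np.pi * 6 * t) + 2.0 * np.sin(2 * np.pi * 20 * t)
```

In a feedback session, a ratio computed over a short sliding window like this would be thresholded to award or withhold points during the game.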

Keywords: attention, language, cognitive rehabilitation, neurofeedback

Procedia PDF Downloads 21
143 LES Simulation of a Thermal Plasma Jet with Modeled Anode Arc Attachment Effects

Authors: N. Agon, T. Kavka, J. Vierendeels, M. Hrabovský, G. Van Oost

Abstract:

A plasma jet model was developed with a rigorous method for calculating the thermophysical properties of the gas mixture without mixing rules. A simplified modeling approach accounting for the anode effects was incorporated so that the simulations could be validated against experimental results. The radial heat transfer was under-predicted by the model because of the limitations of the radiation model, but the calculated evolution of centerline temperature, velocity and gas composition downstream of the torch exit corresponded well with the measured values. CFD modeling of thermal plasmas focuses either on the development of the plasma arc or on the flow of the plasma jet outside of the plasma torch. In the former case, the Maxwell equations are coupled with the Navier-Stokes equations to account for the electromagnetic effects that control the movements of the anode arc attachment. In plasma jet simulations, however, the computational domain starts from the exit nozzle of the plasma torch, and the influence of the arc attachment fluctuations on the plasma jet flow field is not included in the calculations. In that case, the thermal plasma flow is described by temperature, velocity and concentration profiles at the torch exit nozzle, and no electromagnetic effects are taken into account. This simplified approach is widely used in the literature and is generally acceptable for plasma torches with a circular anode inside the torch chamber. The unique DC hybrid water/gas-stabilized plasma torch developed at the Institute of Plasma Physics of the Czech Academy of Sciences, on the other hand, has a rotating anode disk located outside of the torch chamber. Neglecting the effects of the anode arc attachment downstream of the torch exit nozzle therefore leads to erroneous predictions of the flow field.
With the simplified approach introduced in this model, the Joule heating between the exit nozzle and the anode attachment position of the plasma arc is modeled by a volume heat source, and the jet deflection caused by the anode processes is modeled by a momentum source at the anode surface. Furthermore, radiation effects are included through the net emission coefficient (NEC) method, and diffusion is modeled with the combined diffusion coefficient method. The time-averaged simulation results are compared with numerous experimental measurements. The radial temperature profiles were obtained by spectroscopic measurements at different axial positions downstream of the exit nozzle. The velocity profiles were evaluated from the time-dependent evolution of flow structures recorded by photodiode arrays. The shape of the plasma jet was compared with charge-coupled device (CCD) camera pictures. In the cooler regions, the temperature was measured by an enthalpy probe downstream of the exit nozzle and by thermocouples in the radial direction around the torch nozzle. The model results correspond well with the experimental measurements. The decrease in centerline temperature and velocity is predicted within an acceptable range, and the shape of the jet closely resembles the jet structure in the recorded images. The temperatures at the edge of the jet are underestimated due to the absence of radial radiative heat transfer in the model.
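The NEC radiation treatment mentioned above amounts to a volumetric energy sink of the form 4πε_N(T), interpolated from tabulated net emission coefficients. A sketch of such a lookup is shown below; the table values are placeholders, since real ε_N data depend on gas composition, pressure, and the assumed plasma radius.

```python
import numpy as np

# Placeholder net emission coefficients eps_N(T) in W m^-3 sr^-1.
# Illustrative values only; real tables depend on gas composition,
# pressure, and plasma radius.
T_tab = np.array([5e3, 10e3, 15e3, 20e3, 25e3])    # temperature [K]
eps_tab = np.array([1e4, 5e6, 1e8, 6e8, 2e9])

def radiative_sink(T):
    """Volumetric radiative loss 4*pi*eps_N(T) for the energy equation.
    Interpolates log(eps_N) because eps_N spans many decades over the
    tabulated temperature range."""
    eps = np.exp(np.interp(T, T_tab, np.log(eps_tab)))
    return 4.0 * np.pi * eps
```

The steep growth of ε_N with temperature is what makes the radiation model matter mostly on the jet centerline, and its neglect of radial radiative transfer is consistent with the under-predicted edge temperatures noted above.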

Keywords: anode arc attachment, CFD modeling, experimental comparison, thermal plasma jet

Procedia PDF Downloads 367
142 Quantum Information Scrambling and Quantum Chaos in Silicon-Based Fermi-Hubbard Quantum Dot Arrays

Authors: Nikolaos Petropoulos, Elena Blokhina, Andrii Sokolov, Andrii Semenov, Panagiotis Giounanlis, Xutong Wu, Dmytro Mishagli, Eugene Koskin, Robert Bogdan Staszewski, Dirk Leipold

Abstract:

We investigate entanglement and quantum information scrambling (QIS) using the example of a many-body extended and spinless effective Fermi-Hubbard model (EFHM and e-FHM, respectively) that describes a special type of quantum dot array provided by Equal1 labs' silicon-based quantum computer. The concept of QIS is used in the framework of quantum information processing by quantum circuits and quantum channels. In general, QIS manifests as the delocalization of quantum information over the entire quantum system; more compactly, information about the input cannot be obtained by local measurements of the output of the quantum system. In this work, we first introduce the concept of quantum information scrambling and its connection with the 4-point out-of-time-order (OTO) correlators. To obtain a quantitative measure of QIS, we use the tripartite mutual information, along similar lines to previous works, which measures the mutual information between four different spacetime partitions of the system; we apply it to the Transverse Field Ising (TFI) model to quantify the dynamical spreading of quantum entanglement and information in the system. Then, we investigate scrambling in the quantum many-body extended Hubbard model with external magnetic field Bz and spin-spin coupling J for both uniform and thermal quantum channel inputs, and show that it scrambles for specific external tuning parameters (e.g., tunneling amplitudes, on-site potentials, magnetic field). In addition, we compare different Hilbert space sizes (different numbers of qubits) and show the qualitative and quantitative differences in quantum scrambling as we increase the number of quantum degrees of freedom in the system. Moreover, we find a "scrambling phase transition" at a threshold temperature in the thermal case, that is, the temperature at which the channel starts to scramble quantum information.
Finally, we make comparisons to the TFI model and highlight the key physical differences between the two systems and mention some future directions of research.
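For a small system, the tripartite mutual information used above can be computed directly from reduced-state entropies of a pure state. The sketch below uses 4-qubit toy states rather than the EFHM, and only illustrates the quantity and its sign convention (negative I3 signals scrambling; the GHZ example gives a positive value, i.e., no scrambling).

```python
import numpy as np

def entropy(state, keep):
    """Von Neumann entropy (in bits) of the reduced state on qubits
    `keep`, for a pure n-qubit state vector, via Schmidt decomposition:
    the squared singular values of the (kept | rest) bipartition matrix
    are the eigenvalues of the reduced density matrix."""
    n = int(np.log2(state.size))
    psi = state.reshape([2] * n)
    rest = [q for q in range(n) if q not in keep]
    m = np.transpose(psi, list(keep) + rest).reshape(2 ** len(keep), -1)
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def tripartite_mi(state, A, C, D):
    """I3 = I(A:C) + I(A:D) - I(A:CD); negative values signal that
    information about A is hidden in correlations spread over C and D."""
    I = lambda X, Y: (entropy(state, list(X)) + entropy(state, list(Y))
                      - entropy(state, list(X) + list(Y)))
    return I(A, C) + I(A, D) - I(A, list(C) + list(D))

# 4-qubit toy states: a product state (I3 = 0) and a GHZ state
# (I3 = +1 bit, i.e., correlations are redundant, not scrambled).
product = np.zeros(16)
product[0] = 1.0
ghz = np.zeros(16)
ghz[0] = ghz[15] = 1.0 / np.sqrt(2)

i3_product = tripartite_mi(product, [0], [2], [3])
i3_ghz = tripartite_mi(ghz, [0], [2], [3])
```

In a scrambling study, `state` would be the output of the channel (e.g., time evolution under the Hubbard Hamiltonian applied to a chosen input), and I3 would be tracked as the tuning parameters or temperature are varied.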

Keywords: condensed matter physics, quantum computing, quantum information theory, quantum physics

Procedia PDF Downloads 101