Search results for: disaster scenarios
271 Beam Deflection with Unidirectionality Due to Zeroth Order and Evanescent Wave Coupling in a Photonic Crystal with a Defect Layer without Corrugations under Oblique Incidence
Authors: Evrim Colak, Andriy E. Serebryannikov, Thore Magath, Ekmel Ozbay
Abstract:
Single-beam deflection and unidirectional transmission are examined for oblique incidence in a Photonic Crystal (PC) structure that employs a defect layer instead of surface corrugations at the interfaces. In all of the studied cases, the defect layer is placed such that the symmetry is broken. Two types of deflection are observed depending on whether the zeroth order is coupled or not. These two scenarios can be distinguished from each other by considering the simulated field distribution in the PC. In the first deflection type, a Floquet-Bloch mode enables zeroth-order coupling. The energy of the zeroth order is redistributed between the diffraction orders at the defect layer, providing deflection. In the second type, when the zeroth order is not coupled, strong diffractions cause blazing and the evanescent waves deliver energy to higher-order diffraction modes. Simulated isofrequency contours can be utilized to estimate the coupling behavior. The defect layer is placed at varying rows, preserving the asymmetry of the PC while evanescent waves can still couple to higher-order modes. Even for a deeply buried defect layer, asymmetric transmission and beam deflection are still encountered when the zeroth order is not coupled. We assume ε=11.4 (a refractive index close to that of GaAs and Si) for the PC rods. A possible operation wavelength can be within the microwave and infrared range. Since the suggested material is low loss, the structure can be scaled down to operate at higher frequencies. Thus, a sample operation wavelength is selected as 1.5 μm. Although the structure employs no surface corrugations, a transmission value of T≈0.97 can be achieved by means of the diffraction order m=-1. Moreover, utilizing an extra line defect, the T value can be increased up to 0.99 under oblique incidence, even if the line defect layer is deeply embedded in the photonic crystal. The latter configuration can be used to obtain deflection in one frequency range and can also be utilized for the realization of another functionality, such as defect-mode waveguiding, in another frequency range while still using the same structure.
Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality
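As a rough illustration of how propagating and evanescent diffraction orders can be told apart at the sample wavelength quoted above, the sketch below evaluates the classical grating equation; the lattice period and incidence angle are assumed values chosen for illustration, not parameters taken from the study.

```python
import numpy as np

# Grating equation: sin(theta_m) = sin(theta_i) + m * wavelength / period
wavelength = 1.5e-6   # sample operation wavelength quoted in the abstract (m)
period = 1.2e-6       # assumed lattice period, illustrative only (m)
theta_i = np.deg2rad(30.0)   # assumed oblique incidence angle

for m in (0, -1, -2):
    s = np.sin(theta_i) + m * wavelength / period
    if abs(s) <= 1.0:
        print(f"order m={m:+d}: propagating, deflection angle = {np.degrees(np.arcsin(s)):.1f} deg")
    else:
        print(f"order m={m:+d}: evanescent (|sin| = {abs(s):.2f} > 1)")
```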
Procedia PDF Downloads 262
270 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life
Authors: Desplanches Maxime
Abstract:
Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression
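A minimal sketch of the analysis pipeline described above (dimensionality reduction followed by a regression function for remaining life) is given below; the synthetic database, its feature columns, and the error it prints are placeholders, not the Newman-model outputs or results of the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Placeholder database: rows = simulated aging states, columns = non-destructive
# indicators (e.g., an assumed SEI-thickness proxy, capacity, resistance, ...).
X = rng.normal(size=(2000, 12))
remaining_life = 1000 - 40 * X[:, 0] + 15 * X[:, 1] + rng.normal(0, 20, 2000)  # cycles

X_tr, X_te, y_tr, y_te = train_test_split(X, remaining_life, random_state=0)

pca = PCA(n_components=5).fit(X_tr)                      # extract pivotal components
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(pca.transform(X_tr), y_tr)

pred = reg.predict(pca.transform(X_te))
print(f"relative error: {100 * mean_absolute_percentage_error(y_te, pred):.1f} %")
```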
Procedia PDF Downloads 69
269 Pathway to Sustainable Shipping: Electric Ships
Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang
Abstract:
Maritime transport plays an important role in global economic development but also inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is the electric ship. Electric ships are still at an early stage of development. Considering their special characteristics, i.e., a limited travel range, the service network needs to be re-designed carefully to guarantee the efficient operation of electric ships. This research designs a cost-efficient and environmentally friendly service network for electric ships, including the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing the total cost, comprised of the charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario, where all the transportation tasks are fulfilled by electric ships, and a conventional ship scenario, where all the transportation tasks are fulfilled by fuel oil ships. Results unveil that the total cost of using electric ships is only 42.8% of that of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but will consume 2.78% more time to fulfill all the transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. Implications from the results are as follows: 1) it is necessary to equip the ship with a large-capacity battery when the number of charging stations is low; 2) battery capacity will influence the number of ships deployed on each route; 3) increasing battery capacity will make the electric ship more cost-effective; 4) charging speed does not affect the charging amount and location of charging stations, but will influence the schedule of ships on each route; 5) there exists an optimal volume capacity at which all costs and the total delivery time are lowest; 6) the service time limit will influence the ship schedule and ship cost.
Keywords: cost reduction, electric ship, environmental protection, sustainable shipping
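A toy version of the mixed-integer formulation described above, covering only charging-station siting and ship deployment, might look as follows; all port names, routes, costs, and energy figures are invented for illustration, and the full charging/scheduling constraints of the study are omitted.

```python
import pulp

# Invented toy data: 3 candidate charging-station ports, 2 routes.
ports = ["P1", "P2", "P3"]
routes = ["R1", "R2"]
station_cost = {"P1": 500, "P2": 400, "P3": 450}       # construction cost per station
ship_fixed_cost = 120                                   # fixed cost per deployed ship
energy_need = {"R1": 300, "R2": 220}                    # kWh needed per round trip
ports_on_route = {"R1": ["P1", "P2"], "R2": ["P2", "P3"]}
charge_per_stop = 250                                   # kWh a ship can take per station visit

m = pulp.LpProblem("electric_ship_toy", pulp.LpMinimize)
build = pulp.LpVariable.dicts("build", ports, cat="Binary")
ships = pulp.LpVariable.dicts("ships", routes, lowBound=1, cat="Integer")

# Objective: station construction + fixed ship cost (charging cost omitted here).
m += pulp.lpSum(station_cost[p] * build[p] for p in ports) + \
     pulp.lpSum(ship_fixed_cost * ships[r] for r in routes)

# Each route must be able to recharge at least what a round trip consumes.
# (A full model would also link ship counts to demand and schedules.)
for r in routes:
    m += pulp.lpSum(charge_per_stop * build[p] for p in ports_on_route[r]) >= energy_need[r]

m.solve(pulp.PULP_CBC_CMD(msg=0))
print({p: int(build[p].value()) for p in ports}, {r: int(ships[r].value()) for r in routes})
```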
Procedia PDF Downloads 77
268 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology in complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to more efficiently extract features within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed in this paper to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road condition. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, dual-branch structure, and Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
Procedia PDF Downloads 23
267 Establishing Forecasts Pointing Towards the Hungarian Energy Change Based on the Results of Local Municipal Renewable Energy Production and Energy Export
Authors: Balazs Kulcsar
Abstract:
Professional energy organizations perform analyses mainly on the global and national levels about the expected development of the share of renewables in electric power generation, heating, and cooling, as well as the transport sectors. There are just a few publications, research institutions, non-profit organizations, and national initiatives with a focus on studies in the individual towns, settlements. Issues concerning the self-supply of energy on the settlement level have not become too wide-spread. The goal of our energy geographic studies is to determine the share of local renewable energy sources in the settlement-based electricity supply across Hungary. The Hungarian energy supply system defines four categories based on the installed capacities of electric power generating units. From these categories, the theoretical annual electricity production of small-sized household power plants (SSHPP) featuring installed capacities under 50 kW and small power plants with under 0.5 MW capacities have been taken into consideration. In the above-mentioned power plant categories, the Hungarian Electricity Act has allowed the establishment of power plants primarily for the utilization of renewable energy sources since 2008. Though with certain restrictions, these small power plants utilizing renewable energies have the closest links to individual settlements and can be regarded as the achievements of the host settlements in the shift of energy use. Based on the 2017 data, we have ranked settlements to reflect the level of self-sufficiency in electricity production from renewable energy sources. The results show that the supply of all the energy demanded by settlements from local renewables is within reach now in small settlements, e.g., in the form of the small power plant categories discussed in the study, and is not at all impossible even in small towns and cities. In Hungary, 30 settlements produce more renewable electricity than their own annual electricity consumption. If these overproductive settlements export their excess electricity towards neighboring settlements, then full electricity supply can be realized on further 29 settlements from renewable sources by local small power plants. These results provide an opportunity for governmental planning of the realization of energy shift (legislative background, support system, environmental education), as well as framing developmental forecasts and scenarios until 2030.Keywords: energy geography, Hungary, local small power plants, renewable energy sources, self-sufficiency settlements
Procedia PDF Downloads 147
266 Understanding the Interplay between Consumer Knowledge, Trust and Relationship Satisfaction in Financial Services
Authors: Torben Hansen, Lars Gronholdt, Alexander Josiassen, Anne Martensen
Abstract:
Consumers often exhibit a bias in their knowledge; they often think that they know more or less than they do. The concept of 'knowledge over/underconfidence' (O/U) has in previous studies been used to investigate such knowledge bias. O/U appears as a combination of subjective and objective knowledge. Subjective knowledge relates to consumers’ perception of their knowledge, while objective knowledge relates to consumers’ absolute knowledge measured by objective standards. This separation leads to three scenarios: the consumer can either be knowledge calibrated (subjective and objective knowledge are similar), overconfident (subjective knowledge exceeds objective knowledge) or underconfident (objective knowledge exceeds subjective knowledge). Knowledge O/U is a highly useful concept in understanding consumer choice behavior. For example, knowledge-overconfident individuals are likely to exaggerate their ability to make right choices, are more likely to opt out of necessary information search, spend less time carrying out a specific task than less knowledge-confident consumers, and are more likely to show high financial trading volumes. Through the use of financial services as a case study, this study contributes to previous research by examining how consumer knowledge O/U affects two types of trust (broad-scope trust and narrow-scope trust) and consumer relationship satisfaction. Trust does not only concern consumer trust in individual companies (i.e., narrow-scope trust, NST), but also concerns consumer confidence in the broader business context in which consumers plan and implement their behavior (i.e., broad-scope trust, BST). NST is defined as 'the expectation that the service provider can be relied on to deliver on its promises', while BST is defined as 'the expectation that companies within a particular business type can generally be relied on to deliver on their promises.' This study expands our understanding of the interplay between consumer knowledge bias, consumer trust, and relationship marketing in two main ways: First, it is demonstrated that the more knowledge over-/underconfident a consumer becomes, the higher/lower NST and levels of relationship satisfaction will be. Second, it is demonstrated that BST has a negative moderating effect on the relationship between knowledge O/U and satisfaction, such that knowledge O/U has a higher positive/negative effect on relationship satisfaction when BST is low vs. high. The data for this study comprise 756 mutual fund investors. Trust is particularly important in consumers’ mutual fund behavior because mutual funds have important responsibilities in providing financial advice and in managing consumers’ funds.
Keywords: knowledge, cognitive bias, trust, customer-seller relationships, financial services
Procedia PDF Downloads 301
265 Women, Culture and Ambiguity: Postcolonial Feminist Critique of Lobola in African Culture and Society
Authors: Goodness Thandi Ntuli
Abstract:
Some cultural aspects in the African context have a tendency of uplifting women, while some thrust them into the worst denigration scenarios; hence African women theologians refer to culture as a 'double-edged sword'. Through socialization and the internalization of social norms, some women become custodians of life-denying aspects of the culture that work against them and hand them down to the next generation. This indirectly contributes to the perpetuation of patriarchal tendencies, wherein women themselves uphold and endorse such tendencies to their own detriment. One of the findings of the empirical research study conducted among Zulu young women in the South African context was that, on the one hand, lobola (the bride-price) is one of the cultural practices that contribute a great deal to the vilification of women. On the other hand, a woman whose lobola has been paid is highly esteemed in the cultural context, not only by society at large but also by the implicated woman, who takes pride in it. Consequently, lobola becomes an ambiguous cultural practice. Thus, from the postcolonial feminist perspective, this paper examines and critiques the lobola practice while also disclosing and exposing its deep-seated cultural reinforcement that is life-denying to women. The paper elucidates the original lobola as a cultural practice before colonization and how it became commercialized during colonial times. With commercialization in the modern world, lobola has completely lost its preliminary meaning and ceased to be a life-giving cultural practice, particularly for women. It has turned out to be one of the worst cultural practices that demean women, to the extent that it becomes destructive to women's dignity because, in marriage, they become objects or property of the men who purchased them. The objectification of women in marriage does not only leave them culturally trapped in what was perceived to be a good practice, but it also leads to women abuse and gender-based or domestic violence. The research has indicated that this kind of violence is escalating and has become so pervasive in the South African context that the country is rated as one of the world's capitals of violence against women. Therefore, this paper demonstrates how cultural practices at times indirectly contribute to this national scourge that needs to be condemned, disparaged and rejected. Women in the African context, where such cultural activities are still viewed as a norm, are in desperate need of true liberation from such ambiguous cultural practices that leave them in the margins in spite of the earned social status they might have achieved.
Keywords: African, ambiguity, critique, culture, feminist, lobola, postcolonial, society
Procedia PDF Downloads 200
264 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems
Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos
Abstract:
As the real estate industry continually grows in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for efficient delivery of information and services. The real estate sector is not only providing qualitative data about real estate properties but also utilizes various spatial aspects of these properties for different applications such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, road network, digital terrain model (DTM) derived from Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imageries, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, including existing and future property buildings. Actual building dimensions, building facade, and building floorplans are incorporated in these 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different scenarios of hazards are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. Model evaluation is conducted through client surveys requiring scores in terms of the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score equivalent to 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users of the partner real estate company. The methodologies presented in this study were found useful and have remarkable advantages in the real estate industry. This work may be extended to automated mapping and creation of online spatial databases for better storage, access of real property listings and interactive platform using web-based GIS.Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model
Procedia PDF Downloads 158
263 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed
Authors: Marion G. Ben-Jacob, David Wang
Abstract:
There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering and Mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote the success of our students. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two pedagogical methods and report on the study conducted on this comparison, together with its results. Math is the foundation for science, technology, and engineering. It is generally used in STEM to find patterns in data. These patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. As we analyze data, we are using math to find the statistical correlation between a cause and its effect. Scientists who use math include data scientists, biologists and geologists. Without math, most technology would not be possible. Math is the basis of binary code, and without programming, you just have the hardware. Addition, subtraction, multiplication, and division are also used in almost every program written. Mathematical algorithms are inherent in software as well. Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment. They also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab. Advanced computer software is used to aid in their research and production processes to model theoretical synthesis techniques and properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines. Pedagogical research on formative strategies and on the topics that need to be covered is essential.
Keywords: emporium model, mathematics, pedagogy, STEM
Procedia PDF Downloads 75
262 Using Lean-Six Sigma Philosophy to Enhance Revenues and Improve Customer Satisfaction: Case Studies from Leading Telecommunications Service Providers in India
Authors: Senthil Kumar Anantharaman
Abstract:
Providing telecommunications-based network services in developing countries like India, which has a population of 1.5 billion people, so that these services reach every individual, is one of the greatest challenges the country has been facing in its journey towards economic growth and development. With a growing number of telecommunications service providers in the country, a constant challenge faced by these providers is providing not only quality but also a delightful customer experience while simultaneously generating enhanced revenues and profits. Thus, the role played by process improvement methodologies like Six Sigma cannot be overstated, and specifically in telecom-service-provider operations it has provided substantial benefits. Its advantages are therefore quite comparable to its applications and advantages in other sectors like manufacturing, financial services, information technology-based services and healthcare services. One of the key reasons that this methodology has been able to reap great benefits in the telecommunications sector is that it has been combined with many competing process improvement techniques like Theory of Constraints, Lean and Kaizen to give the maximum benefit to the service providers, thereby creating a winning combination of organized process improvement methods for operational excellence that leads to business excellence. This paper discusses some of the key projects and areas in the end-to-end 'Quote to Cash' process at the big three Indian telecommunication companies that have been greatly assisted by applying Six Sigma along with other process improvement techniques. While the telecommunication companies we have considered are primarily in India and are run by both private operators and government-based setups, the methodology can be applied equally well in any other developing country around the world with a similar context. This study also compares the enhanced revenues that can arise out of appropriate opportunities in emerging-market scenarios, which Six Sigma as a philosophy and methodology can provide if applied with vigour and robustness. Finally, the paper also presents a winning framework combining the Six Sigma methodology with Kaizen, Lean and Theory of Constraints that will enhance both the top line and the bottom line while providing customers a delightful experience.
Keywords: emerging markets, lean, process improvement, six sigma, telecommunications, theory of constraints
Procedia PDF Downloads 164
261 Low Carbon Tourism Management: Strategies for Climate-Friendly Tourism of Koh Mak, Thailand
Authors: Panwad Wongthong, Thanan Apivantanaporn, Sutthiwan Amattayakul
Abstract:
Nature-based tourism is one of the fastest growing industries that can bring in economic benefits, improve quality of life and promote conservation of biodiversity and habitats. As tourism develops, substantial socio-economic and environmental costs become more explicit. Particularly in island destinations, the dynamic system and geographical limitations makes the intensity of tourism development and severity of the negative environmental impacts greater. The current contribution of the tourism sector to global climate change is established at approximately 5% of global anthropogenic CO2 emissions. In all scenarios, tourism is anticipated to grow substantially and to account for an increasingly large share of global greenhouse gas emissions. This has prompted an urgent call for more sustainable alternatives. This study selected a small island of Koh Mak in Thailand as a case study because of its reputation of being laid back, family oriented and rich in biodiversity. Importantly, it is a test platform for low carbon tourism development project supported by the Designated Areas for Sustainable Tourism Administration (DASTA) in collaboration with the Institute for Small and Medium Enterprises Development (ISMED). The study explores strategies for low carbon tourism management and assesses challenges and opportunities for Koh Mak to become a low carbon tourism destination. The goal is to identify suitable management approaches applicable for Koh Mak which may then be adapted to other small islands in Thailand and the region. Interventions/initiatives to increase energy efficiency in hotels and resorts; cut carbon emissions; reduce impacts on the environment; and promote conservation will be analyzed. Ways toward long-term sustainability of climate-friendly tourism will be recommended. Recognizing the importance of multi-stakeholder involvement in the tourism sector, findings from this study can reward Koh Mak tourism industry with a triple-win: cost savings and compliance with higher standards/markets; less waste, air emissions and effluents; and better capabilities of change, motivation of business owners, staff, tourists as well as residents. The consideration of climate change issues in the planning and implementation of tourism development is of great significance to protect the tourism sector from negative impacts.Keywords: climate change, CO2 emissions, low carbon tourism, sustainable tourism management
Procedia PDF Downloads 281
260 Neural Networks Underlying the Generation of Neural Sequences in the HVC
Authors: Zeina Bou Diab, Arij Daou
Abstract:
The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC, which sits at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each of which has its own cellular, electrophysiological and functional properties. During singing, a large subset of motor-cortex-analog-projecting HVCRA neurons emits a single 6-10 ms burst of spikes at the same time during each rendition of song, a large subset of basal-ganglia-projecting HVCX neurons fires 1 to 4 bursts that are similarly time-locked to vocalizations, while HVCINT neurons fire tonically at a high average frequency throughout song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and the specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing patterning behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might be contributing to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, among others. Replication is only a preliminary step that must be followed by model prediction and testing.
Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird
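For readers unfamiliar with conductance-based modeling, the sketch below integrates a single-compartment Hodgkin-Huxley-style neuron with the classic Na/K/leak currents only; the HVC-specific ionic currents and synaptic architectures described above are not included, and all parameters are textbook values rather than those fitted to the in vitro recordings.

```python
import numpy as np

# Standard Hodgkin-Huxley rate functions (membrane potential v in mV).
def alpha_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * np.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * np.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + np.exp(-(v + 35) / 10))

dt, T = 0.01, 100.0                      # time step and duration (ms)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials (mV)
v, m, h, n = -65.0, 0.05, 0.6, 0.32      # initial state near rest
I_ext = 10.0                             # constant injected current (uA/cm^2)

spikes, prev_v = 0, v
for _ in range(int(T / dt)):             # forward-Euler integration
    INa = gNa * m**3 * h * (v - ENa)
    IK = gK * n**4 * (v - EK)
    IL = gL * (v - EL)
    dv = (I_ext - INa - IK - IL) / C
    m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    prev_v, v = v, v + dt * dv
    if prev_v < 0.0 <= v:                # crude spike detection: upward crossing of 0 mV
        spikes += 1

print(f"{spikes} spikes in {T:.0f} ms of tonic firing")
```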
Procedia PDF Downloads 70
259 Development of Three-Dimensional Groundwater Model for Al-Corridor Well Field, Amman–Zarqa Basin
Authors: Moayyad Shawaqfah, Ibtehal Alqdah, Amjad Adaileh
Abstract:
The Corridor area (400 km²) lies to the northeast of Amman (60 km). It lies between 285-305 E longitude and 165-185 N latitude (according to the Palestine Grid). It has been subjected to the exploitation of groundwater from eleven new wells since 1999, with a total discharge of 11 MCM, in addition to the previous discharge rate from the well field of 14.7 MCM. Consequently, the aquifer balance is disturbed and a major decline in water level has occurred. Therefore, suitable groundwater resources management is required to overcome the problems of over-pumping and its effect on groundwater quality. The three-dimensional groundwater flow model Processing Modflow for Windows Pro (PMWIN PRO, 2003) has been used in order to calculate the groundwater budget and aquifer characteristics, and to predict the aquifer response under different stresses for the next 20 years (until 2035). The model was calibrated for steady-state conditions by trial-and-error calibration, performed by matching observed and calculated initial heads for the year 2001. Drawdown data for the period 2001-2010 were used to calibrate the transient model by matching calculated with observed values; after that, the transient model was validated using the drawdown data for the period 2011-2014. The hydraulic conductivities of the Basalt-A7/B2 aquifer system range between 1.0 and 8.0 m/day. Low conductivity values were found in the north-western and south-western parts of the study area, while the high conductivity value was found in the north-western corner of the study area, and the average storage coefficient is about 0.025. The water balance for the Basalt and B2/A7 formations was established at steady-state conditions with a discrepancy of 0.003%. The major inflows come from Jebal Al Arab through the basalt and through the limestone (B2/A7) aquifer, at about 12.28 MCM/a, and from excess rainfall, at about 0.68 MCM/a. The major outflows from the Basalt-B2/A7 aquifer system are toward the Azraq basin, at about 5.03 MCM/a, and leakage to the A1/6 aquitard, at 7.89 MCM/a. Four scenarios have been run to predict aquifer system responses under different conditions. Scenario no. 2 was found to be the best one; it reduces the abstraction rates by 50%, from the current withdrawal rate (25.08 MCM/a) to 12.54 MCM/a. The maximum drawdowns then decrease to about 7.67 and 8.38 m in the years 2025 and 2035, respectively.
Keywords: Amman/Zarqa Basin, Jordan, groundwater management, groundwater modeling, modflow
Procedia PDF Downloads 216
258 Strategic Innovation of Nanotechnology: Novel Applications of Biomimetics and Microfluidics in Food Safety
Authors: Boce Zhang
Abstract:
Strategic innovation in nanotechnology to promote food safety has drawn tremendous attention among research groups, which includes the need for research support during the implementation of the Food Safety Modernization Act (FSMA) in the United States. There are urgent demands and knowledge gaps in the understanding of a) the food-water-bacteria interface, namely how pathogens persist and transmit during food processing and storage, and b) the minimum processing requirement needed to prevent pathogen cross-contamination in the food system. These knowledge gaps are of critical importance to the food industry. However, closing them has largely been hindered by the limitations of research tools. Our group recently developed two novel engineering systems, based on biomimetics and microfluidics, as a holistic approach to hazard analysis and risk mitigation, which provide unprecedented research opportunities to study pathogen behavior, in particular contamination and cross-contamination, at the critical food-water-pathogen interface. First, biomimetically patterned surfaces (BPS) were developed to replicate the identical surface topography and chemistry of a natural food surface. We demonstrated that BPS is a superior research tool that empowers the study of a) how pathogens persist through sanitizer treatment, and b) how to apply fluidic shear force and surface tension to increase the vulnerability of the bacterial cells by detaching them from a protected area, among other questions. Secondly, microfluidic devices were designed and fabricated to study the bactericidal kinetics in the sub-second time frame (0.1~1 second). The sub-second kinetics is critical because the cross-contamination process, which includes detachment, migration, and reattachment, can occur in a very short timeframe. With this microfluidic device, we were able to simulate and study these sub-second cross-contamination scenarios, and to further investigate the minimum sanitizer concentration needed to sufficiently prevent pathogen cross-contamination during food processing. We anticipate that the findings from these studies will provide critical insight into bacterial behavior at the food-water-cell interface, and into the kinetics of bacterial inactivation across a broad range of sanitizers and processing conditions, thus facilitating the development and implementation of science-based food safety regulations and practices to mitigate food safety risks.
Keywords: biomimetic materials, microbial food safety, microfluidic device, nanotechnology
Procedia PDF Downloads 359
257 Cross-Tier Collaboration between Preservice and Inservice Language Teachers in Designing Online Video-Based Pragmatic Assessment
Authors: Mei-Hui Liu
Abstract:
This paper reports the progression of language teachers’ learning to assess students’ speech act performance via online videos in a cross-tier professional growth community. This yearlong research project collected multiple data sources from several stakeholders, including 12 preservice and 4 inservice English as a foreign language (EFL) teachers, 4 English professionals, and 82 high school students. Data sources included surveys, (focus group) interviews, online reflection journals, online video-based assessment items/scores, and artifacts related to teacher professional learning. The major findings depicted the effectiveness of this proposed learning module on language teacher development in pragmatic assessment as well as its impact on student learning experience. All these teachers appreciated this professional learning experience which enhanced their knowledge in assessing students’ pragmalinguistic and sociopragmatic performance in an English speech act (i.e., making refusals). They learned how to design online video-based assessment items by attending to specific linguistic structures, semantic formula, and sociocultural issues. They further became aware of how to sharpen pragmatic instructional skills in the near future after putting theories into online assessment and related classroom practices. Additionally, data analysis revealed students’ achievement in and satisfaction with the designed online assessment. Yet, during the professional learning process most participating teachers encountered challenges in reaching a consensus on selecting appropriate video clips from available sources to present the sociocultural values in English-speaking refusal contexts. Also included was to construct test items which could testify the influence of interlanguage transfer on students’ pragmatic performance in various conversational scenarios. With pedagogical implications and research suggestions, this study adds to the increasing amount of research into integrating preservice and inservice EFL teacher education in pragmatic assessment and relevant instruction. Acknowledgment: This research project is sponsored by the Ministry of Science and Technology in the Republic of China under the grant number of MOST 106-2410-H-029-038.Keywords: cross-tier professional development, inservice EFL teachers, pragmatic assessment, preservice EFL teachers, student learning experience
Procedia PDF Downloads 259
256 Controlling Deforestation in the Densely Populated Region of Central Java Province, Banjarnegara District, Indonesia
Authors: Guntur Bagus Pamungkas
Abstract:
As part of a tropical country that is naturally rich in forested land, Indonesia has always been in the world's spotlight due to its significantly increasing rate of deforestation. On the one hand, this is related to the forests' role as a mainstay for maintaining the sustainability of the earth's ecosystem functions. On the other hand, they also cover various potential sources of the global economy. Therefore, they can always be the target of investors of different scales seeking to exploit them excessively. Not surprisingly, disasters of various characteristics keep emerging. In fact, the deforestation phenomenon does not only occur in forest areas on the main islands of Indonesia but also on Java Island, one of the most densely populated areas in the world. Due to its long history of deforestation, this island retains only about 9.8% of the total forest land in Indonesia, especially in Central Java Province, the most densely populated area in Java. Again, not surprisingly, this province belongs to the areas with the highest frequency of disasters caused by deforestation, landslides in particular. One of the areas that often experience them is Banjarnegara District, especially in mountainous areas lying between 1000 and 3000 meters above sea level, where remnants of forest land can still easily be found. Some of these remnants are barely touched tropical rain forest whose area also covers part of a neighboring district, Pekalongan, and which is considered a remaining little paradise on Earth. The district's landscape is indeed beautiful, especially in the Dieng area, a major tourist destination in Central Java Province after Borobudur Temple. However, landslide disasters threaten this district every year; there was even a tragic event a few decades ago in which inhabitants were buried. This research aims to contribute to the concept of effective forest management through monitoring the presence of the remaining forest areas in this district. The research monitored deforestation rates using the Stochastic Cellular Automata-Markov Chain (SCA-MC) method, which provides a spatial simulation of land use and land cover changes (LULCC). This geospatial process uses the Landsat-8 OLI image product with Thermal Infra-Red Sensors (TIRS) Band 10 for 2020 and Landsat 5 TM with TIRS Band 6 for 2010. It is also integrated with physical and social geography issues using the QGIS 2.18.11 application with the Mollusce plugin, which serves to classify and calculate the area of land use and land cover, especially in forest areas, using the LULCC method, which calculates the rate of forest area reduction in Banjarnegara District in 2010-2020. Since the dependence of this area on the use of forest land is quite high, concepts and preventive actions are needed, such as the rehabilitation and reforestation of critical lands, through proper monitoring and targeted forest management to restore its ecosystem in the future.
Keywords: deforestation, populous area, LULCC method, proper control and effective forest management
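The Markov-chain step behind the SCA-MC approach can be illustrated in a few lines; the land-cover classes, their 2020 shares, and the transition matrix below are invented placeholders rather than the values derived from the Landsat classification of Banjarnegara.

```python
import numpy as np

# Illustrative land-cover classes and assumed 2020 class shares.
classes = ["forest", "agriculture", "built-up"]
shares_2020 = np.array([0.45, 0.40, 0.15])

# Assumed 2010->2020 transition probabilities: P[i, j] = P(class i -> class j).
P = np.array([[0.90, 0.08, 0.02],
              [0.02, 0.93, 0.05],
              [0.00, 0.00, 1.00]])

shares = shares_2020.copy()
for year in (2030, 2040, 2050):
    shares = shares @ P                 # one Markov step per decade
    print(year, dict(zip(classes, np.round(shares, 3))))
```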
Procedia PDF Downloads 135
255 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field
Authors: Buruk Kitachew Wossenyeleh
Abstract:
Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on the groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling they are considered as one layer due to their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment is determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This groundwater recharge is used as the top boundary condition for the groundwater modeling of the study area. In the groundwater modeling with Processing MODFLOW, constant-head boundary conditions are used at the north and south boundaries of the study area, while head-dependent flow boundary conditions are used at the east and west boundaries. The groundwater model is calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation showed that the root mean square error is 1.89 m and the NSE is 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe catchment to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as its land-use type. This calibrated model was run for a climate change scenario and a well operation scenario. Climate change may cause the groundwater recharge to increase by 43% or decrease by 30% in 2100 relative to current conditions, for the high and low climate change scenarios, respectively. The groundwater head varies from 13 m to 82 m for the high climate change scenario, whereas for the low climate change scenario it varies from 13 m to 76 m. If a doubling of the pumping discharge is assumed, the groundwater head varies from 13 m to 76.5 m. However, if a shutdown of the pumps is assumed, the head varies in the range of 13 m to 79 m. It is concluded that the groundwater model performs satisfactorily, with some limitations, and that the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation
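A minimal sketch of the performance metrics quoted above (RMSE and Nash-Sutcliffe efficiency) is given below; the observed and simulated heads are illustrative numbers, not the values from the 12 observation wells of the study.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated heads."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

# Illustrative heads (m) at 12 wells -- not the study's data.
observed  = [13.2, 18.5, 22.1, 30.4, 35.9, 41.7, 47.0, 52.3, 58.8, 64.1, 70.6, 78.0]
simulated = [14.0, 17.9, 23.5, 29.1, 37.2, 40.8, 48.6, 50.9, 60.1, 63.0, 72.4, 76.5]
print(f"RMSE = {rmse(observed, simulated):.2f} m, NSE = {nse(observed, simulated):.2f}")
```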
Procedia PDF Downloads 152
254 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution
Authors: Dayane de Almeida
Abstract:
This work aims at presenting a study that demonstrates the usability of categories of analysis from Discourse Semiotics – also known as Greimassian Semiotics – in authorship cases in forensic contexts. It is necessary to know if the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of 'not on demand' written samples (those texts differ in formality degree, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows: - The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes. - Semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and that presents different levels of abstraction. This 'grammar' would be a style marker. - Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. Then, how to determine if someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when it is known that intra-speaker variation depends on so many factors? - The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently choose the same options, each differently from the other, it means each one’s options have discriminatory power. - Size is another issue for various attribution methods. Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. - The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results showed the hypothesis was confirmed and, hence, that the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style
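A minimal sketch of the Jaccard comparison described above is given below; the tag names are hypothetical stand-ins for the content-plane categories annotated with the Corpus Tool, not the actual inventory used in the study.

```python
def jaccard(tags_a, tags_b):
    """Jaccard coefficient between two sets of semiotic tags: |A ∩ B| / |A ∪ B|."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical content-plane tags for three text groups (illustrative only).
author1_text1 = {"euphoria", "conjunction", "pragmatic-doing", "actorialization"}
author1_text2 = {"euphoria", "conjunction", "cognitive-doing", "actorialization"}
author2_text1 = {"dysphoria", "disjunction", "pragmatic-doing", "temporalization"}

print("same author :", jaccard(author1_text1, author1_text2))   # expected to be higher
print("diff authors:", jaccard(author1_text1, author2_text1))   # expected to be lower
```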
Procedia PDF Downloads 242
253 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that the connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have a higher production tolerance for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core belongs among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, leads to a decrease in insertion losses (IL) and achieves effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, we focused on the estimation of particular defects and errors which can realistically occur, like eccentricity, connector shifting or dust; these were simulated and measured, and their dependence on EF statistics and the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
Keywords: optical fiber, multi-mode, data centers, encircled flux
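As a rough illustration of the encircled flux metric discussed above, the sketch below computes EF(r) from a radial near-field intensity scan; the Gaussian-like profile and the evaluation radii are assumed for illustration and do not reproduce the measured launch conditions of the study.

```python
import numpy as np

def encircled_flux(radius_um, intensity, r_eval_um):
    """Fraction of total near-field power inside r_eval_um, from a radial intensity scan."""
    power = np.cumsum(intensity * radius_um)   # ~ integral of I(r) * r dr (uniform spacing)
    power /= power[-1]                         # normalize to total power
    return float(np.interp(r_eval_um, radius_um, power))

# Assumed near-field profile of a 50 um core MM fiber (Gaussian-like fill),
# purely illustrative -- a real EF measurement uses the scanned camera image.
r = np.linspace(0.0, 25.0, 500)                # radius (um)
I = np.exp(-(r / 14.0) ** 2)

for r_eval in (4.5, 11.0, 19.0, 22.0):         # example evaluation radii (illustrative)
    print(f"EF({r_eval:4.1f} um) = {encircled_flux(r, I, r_eval):.3f}")
```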
Procedia PDF Downloads 375
252 Architectural Identity in Manifestation of Tall-buildings' Design
Authors: Huda Arshadlamphon
Abstract:
The advancing frontiers of technology and industry are moving rapidly, influenced by economic and political phenomena. One vital phenomenon, which has consolidated the world into a single village, is globalization. In response, architecture and the built environment have faced numerous changes, adjustments, and developments. Tall buildings, as a product of globalization, represent prestigious icons, symbols, and landmarks for economically advanced countries. Despite this, the trend has been encountering several design challenges in incorporating the architectural identity, traditions, and characteristics that enhance the built environment's sociocultural values and traditions. The necessity of these values and traditions forms self-solitarily, leading to visual and spatial creativity, independency, and individuality. In other words, they maintain the inherited identity and avoid replication in all means and aspects. This paper, firstly, defines the globalization phenomenon, architectural identity, and the concerns of sociocultural values in relation to the traditional characteristics of the built environment. Secondly, through three case studies of tall buildings located in Jeddah city, Saudi Arabia – the Queen's Building, the National Commercial Bank Building (NCB), and the Islamic Development Bank Building – design strategies and methodologies for acclimating architectural identity and characteristics in tall buildings are discussed. The case studies highlight the buildings' sites and surroundings, concepts and inspirations, design elements, architectural forms and compositions, characteristics, issues, barriers, and trammels facing the design decisions, the representation of facades, and the selection of materials and colors. Furthermore, the research briefly elucidates the dominant factors that shape the architectural identity of Jeddah city. In conclusion, the study presents a guideline of four design standards for preserving and developing architectural identity in tall buildings in Jeddah city: the scale of the urban and natural environment, the scale of architectural design elements, the integration of visual images, and the creation of spatial scenes and scenarios. The proposed guideline will encourage the development of an architectural identity aligned with zeitgeist demands and requirements, support the contemporary architectural movement toward tall buildings, and shore up self-solitarity in representing the sociocultural values and traditions of the built environment.
Keywords: architectural identity, built-environment, globalization, sociocultural values and traditions, tall-buildings
Procedia PDF Downloads 163
251 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy
Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright
Abstract:
The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that will rapidly respond to the dynamic and volatile changes driven by the market. There is a gap in technology that allows for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context to the state of each cell. The amount of information needed to describe cellular manufacturing systems is investigated through two measures: the structural entropy and the operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information that describes the scheduled states of a manufacturing system that occur during the actual manufacturing operation. Using the AnyLogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration. The cellular make-up of the configuration included a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell and a quality control cell. The factory shop provides manufactured parts to a number of clients, there are substantial variations in the part configurations, and new part designs are continually being introduced to the system. Based on the normal expected production schedule, the schedule adherence was calculated from the structural entropy and the operational entropy by varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control; the necessary real-time information is readily available to the decision maker at any point in time. For contrastive analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. From the results obtained in the empirical study, it was seen that increasing the efficiency of a factory communication system increases the degree of adherence of a job to the expected schedule. The performance of the downstream production flow, fed by the parallel upstream flow of information on the factory state, was also increased.
Keywords: information entropy, communication in manufacturing, mass customisation, scheduling
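A minimal sketch of the Shannon entropy measure underlying both quantities described above is given below; the state probabilities are invented, and the comparison only illustrates the idea that poorer communication raises the information needed to describe the system, not the study's exact adherence calculation.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2 p), in bits, of a distribution over system states."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Invented probability distributions over the states of a manufacturing cell.
scheduled_states = [0.70, 0.15, 0.10, 0.05]   # tight schedule, information flows well
observed_states  = [0.35, 0.30, 0.20, 0.15]   # poor communication, states harder to predict

print(f"structural entropy  = {entropy_bits(scheduled_states):.2f} bits")
print(f"operational entropy = {entropy_bits(observed_states):.2f} bits")
```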
Procedia PDF Downloads 245250 Beyond the Flipped Classroom: A Tool to Promote Autonomy, Cooperation, Differentiation and the Pleasure of Learning
Authors: Gabriel Michel
Abstract:
The aim of our research is to find solutions for adapting university teaching to today's students and companies. To achieve this, we have tried to change the posture and behavior of those involved in the learning situation by promoting other skills. There is a gap between students' expectations and ways of working and university teaching. At the same time, the business world needs employees who are obviously competent and proficient in technology, but who are also imaginative, flexible, able to communicate, learn on their own, and work in groups. These skills are rarely developed as a goal at university. The flipped classroom has been one solution, thanks to digital tools such as Moodle, but the model behind it is still centered on teachers and classic learning scenarios: it makes course materials available without really involving students and encouraging them to cooperate. Against this backdrop, we conducted action research to explore the possibility of changing the way we learn (rather than teach) by changing the posture of both the classic student and the teacher. We hypothesized that a tool we developed would encourage autonomy, the possibility of progressing at one's own pace, collaboration, and learning using all available resources (other students, course materials, resources on the web, and the teacher/facilitator). Experimentation with this tool was carried out with around thirty German and French first-year students at the Université de Lorraine in Metz (France). The projected changes in the groups' learning situations were as follows: - use the flipped classroom approach, but with a few traditional presentations by the teacher (materials having been put on a server) and many collective case-solving sessions, - engage students in their learning by inviting them to set themselves a primary objective from the outset, e.g. “Assimilate 90% of the course”, and secondary objectives (like a to-do list) such as “create a new case study for Tuesday”, - encourage students to take control of their learning (knowing at all times where they stand and how far they still have to go), - develop cooperation: the tool should encourage group work, the search for common solutions, and the exchange of the best solutions with other groups. Those who have advanced much faster than the others, or who already have expertise in a subject, can become tutors for the others. A student can also present a case study he or she has developed, for example, or share materials found on the web or produced by the group, as well as evaluate the productions of others, - etc. A questionnaire and an analysis of assessment results showed that the test group made considerable progress compared with a similar control group. These results confirmed our hypotheses. Obviously, this tool is only effective if the organization of teaching is adapted and if teachers are willing to change the way they work.Keywords: pedagogy, cooperation, university, learning environment
Procedia PDF Downloads 22249 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria
Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene
Abstract:
As the drive towards clean, renewable, and sustainable energy generation is gradually being reshaped by renewable penetration over time, energy storage has become an optimal solution for utilities looking to reduce transmission and capacity costs. Capacity resources therefore need to be adjusted accordingly, so that renewable energy storage may have the opportunity to substitute for retiring conventional energy systems with higher capacity factors. In the Nigerian scenario, where over 80% of current primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century, relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydro power, solar, and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing, and more importantly its sustainability, is the subject matter of the current research. Here, we forecast the future energy storage capacity rating and evaluate the load and resource scenario in Nigeria. In doing so, we use scenario-based International Energy Agency models, and the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity. The general takeaway from the current study is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources prior to a substantial reduction in capacity ratings. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analytical task of the current survey will help to determine whether and when long-duration storage becomes an integral component of the capacity mix that is expected in Nigeria by 2030.Keywords: capacity, energy, power system, storage
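As a rough illustration of how a capacity rating for short-duration storage can be screened, the hedged Python sketch below dispatches a hypothetical 4-hour, 100 MW battery against a synthetic daily load profile and reports the share of its power rating it could sustain over the peak window. The load profile, unit sizes, and the peak-shaving rule are illustrative assumptions, not the IEA-based scenario models used in the study.

```python
# Minimal sketch, assuming a single representative daily load profile (MW) and
# a storage unit that must serve the top load hours to earn capacity credit.
hourly_load = [620, 600, 590, 585, 600, 650, 720, 800, 860, 900, 930, 950,
               960, 970, 990, 1010, 1040, 1080, 1100, 1060, 980, 880, 760, 680]

power_mw = 100      # hypothetical storage power rating
duration_h = 4      # 4-hour storage, as in the 4-6 h range discussed above
energy_mwh = power_mw * duration_h

# Hours whose load would need shaving if peak capacity were reduced by power_mw
peak_threshold = max(hourly_load) - power_mw
shave = [max(0.0, load - peak_threshold) for load in hourly_load]
energy_needed = sum(shave)

# Capacity rating: share of the power rating the unit can actually back up
capacity_rating = min(1.0, energy_mwh / energy_needed) if energy_needed else 1.0
print(f"energy needed to shave the peak: {energy_needed:.0f} MWh")
print(f"approximate capacity rating:     {capacity_rating:.0%}")
```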
Procedia PDF Downloads 34248 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, only the bearing of the target can be determined, with no information about its range. Target Motion Analysis (TMA) is the process of estimating the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now there has been no highly effective method that can always track an unknown target and extract its moving trace. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy, and for a multi-beam sonar the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Several indicators of performance and confidence level are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the correct solution in all cases. To test the performance of the proposed TMA algorithm, a simulation is carried out with a MATLAB program. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as smooth transitions in speed, circular turns of the ship, noisy measurements, and quantized bearing measurements coming from a multi-beam sonar. The tests are run for many given test scenarios. For all the tests, full tracking is achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator is mostly aligned with the real performance; the range estimation confidence level reaches 90% when the range error is less than 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error; however, the convergence time of the algorithm still needs to be improved.Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking
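The abstract does not give the filter equations, so the Python sketch below shows only a generic extended Kalman filter for bearing-only tracking with a constant-velocity target model: a prediction step and a measurement update with the bearing Jacobian and angle wrapping. The modified-gain variant, validation gate, quantization handling, and maneuver detection of the actual algorithm are not reproduced, and all numerical values are hypothetical.

```python
import numpy as np

def ekf_bearing_update(x, P, bearing_meas, obs_pos, R_bear):
    """One EKF measurement update for a bearing-only observation.
    State x = [px, py, vx, vy] (target position/velocity), P its covariance."""
    dx, dy = x[0] - obs_pos[0], x[1] - obs_pos[1]
    r2 = dx**2 + dy**2
    h = np.arctan2(dy, dx)                          # predicted bearing
    H = np.array([[-dy / r2, dx / r2, 0.0, 0.0]])   # Jacobian of h w.r.t. state
    innov = np.arctan2(np.sin(bearing_meas - h), np.cos(bearing_meas - h))  # wrap
    S = H @ P @ H.T + R_bear
    K = P @ H.T / S                                 # Kalman gain (S is 1x1)
    x_new = x + (K * innov).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

def cv_predict(x, P, dt, q):
    """Constant-velocity prediction step."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
    G = np.array([[dt**2 / 2, 0], [0, dt**2 / 2], [dt, 0], [0, dt]])
    Q = G @ G.T * q
    return F @ x, F @ P @ F.T + Q

# Hypothetical run: stationary target at (5 km, 3 km), observer moving along x
rng = np.random.default_rng(0)
x = np.array([4000.0, 4000.0, 0.0, 0.0])            # initial guess
P = np.diag([1e6, 1e6, 25.0, 25.0])
truth = np.array([5000.0, 3000.0])
for k in range(60):
    obs = np.array([100.0 * k, 0.0])
    z = np.arctan2(truth[1] - obs[1], truth[0] - obs[0]) + rng.normal(0, 0.01)
    x, P = cv_predict(x, P, dt=10.0, q=1e-4)
    x, P = ekf_bearing_update(x, P, z, obs, R_bear=0.01**2)
print("estimated target position:", x[:2])
```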
Procedia PDF Downloads 402247 Preparedness is Overrated: Community Responses to Floods in a Context of (Perceived) Low Probability
Authors: Kim Anema, Matthias Max, Chris Zevenbergen
Abstract:
For any flood risk manager, the 'safety paradox' is a familiar concept: low probability leads to a sense of safety, which leads to more investments in the area, which leads to higher potential consequences, keeping the aggregated risk (probability*consequences) at the same level. It is therefore important to mitigate potential consequences apart from probability. However, when the (perceived) probability is so low that there is no recognizable trend for society to adapt to, addressing the potential consequences will always be the lagging point on the agenda. Preparedness programs fail because of lack of interest and urgency, policy makers are distracted by their day-to-day business, and there is always a more urgent issue to spend the taxpayer's money on. The leading question in this study was how to address the social consequences of flooding in a context of (perceived) low probability. Disruptions of everyday urban life, large or small, can be caused by a variety of (un)expected things, of which flooding is only one possibility. Variability like this is typically addressed with resilience, and we used the concept of community resilience as the framework for this study. Drawing on face-to-face interviews, an extensive questionnaire, and publicly available statistical data, we explored the 'whole society response' to two recent urban flood events: the Brisbane floods (Australia) in 2011 and the Dresden floods (Germany) in 2013. In Brisbane, we studied how the societal impacts of the floods were counteracted by both authorities and the public, and in Dresden we were able to validate our findings. A large part of the reactions, both public and institutional, to these two urban flood events was not fuelled by preparedness or proper planning. Instead, the more important success factors in counteracting social impacts, such as demographic changes in neighborhoods and (non-)economic losses, were dynamics like community action, flexibility and creativity from authorities, leadership, informal connections, and a shared narrative. These proved to be the determining factors for the quality and speed of recovery in both cities. The resilience of the community in Brisbane was good, due to (i) the approachability of (local) authorities, (ii) a big group of 'secondary victims', and (iii) clear leadership. All three of these elements were amplified by the use of social media and/or web 2.0 by both the communities and the authorities involved. The numerous contacts and social connections made through the web were fast, need-driven, and, in their own way, orderly. Similarly, in Dresden, large groups of 'unprepared', ad hoc organized citizens managed to work together with authorities in a way that was effective and sped up recovery. The concept of community resilience is better suited than 'social adaptation' to deal with the potential consequences of an (im)probable flood. Community resilience is built on capacities and dynamics that are part of everyday life and which can be invested in pre-event to minimize the social impact of urban flooding. Investing in these might even have beneficial trade-offs in other policy fields.Keywords: community resilience, disaster response, social consequences, preparedness
Procedia PDF Downloads 352246 Maintenance Optimization for a Multi-Component System Using Factored Partially Observable Markov Decision Processes
Authors: Ipek Kivanc, Demet Ozgur-Unluakin
Abstract:
Over the past years, technological innovations and advancements have played an important role in the industrial world. Due to these technological improvements, the degree of complexity of systems has increased; hence, systems are becoming more uncertain as a result of this increased complexity, which in turn leads to higher costs. Coping with this situation is challenging, so implementing efficient planning of maintenance activities in such systems is becoming more essential. Partially Observable Markov Decision Processes (POMDPs) are powerful tools for stochastic sequential decision problems under uncertainty. Although maintenance optimization in a dynamic environment can be modeled as such a sequential decision problem, POMDPs are not widely used for tackling maintenance problems; however, they can be well-suited frameworks for obtaining optimal maintenance policies. In the classical representation of the POMDP framework, the system is denoted by a single node with multiple states. The main drawback of this classical approach is that the state space grows exponentially with the number of state variables. In contrast, the factored representation of POMDPs simplifies the state description by taking advantage of the factored structure already available in the nature of the problem. The main idea of factored POMDPs is that they can be compactly modeled through dynamic Bayesian networks (DBNs), which are graphical representations of stochastic processes, by exploiting the structure of this representation. This study aims to demonstrate how maintenance planning of dynamic systems can be modeled with factored POMDPs. An empirical maintenance planning problem of a dynamic system consisting of four partially observable components deteriorating in time is designed. To solve the empirical model, we resort to the Symbolic Perseus solver, one of the state-of-the-art factored POMDP solvers enabling approximate solutions. We also generate several predefined policies based on corrective or proactive maintenance strategies, execute the policies on the empirical problem for many replications, and compare their performances under various scenarios. The results show that the policies computed from the POMDP model are superior to the others. Acknowledgment: This work is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under grant no: 117M587.Keywords: factored representation, maintenance, multi-component system, partially observable Markov decision processes
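As a hedged illustration of the factored idea, the Python sketch below keeps an independent belief over the hidden health of each of four components and updates it with a DBN-style deterioration (transition) step followed by a Bayes observation step. The two-state components, probabilities, and full independence assumption are deliberate simplifications, not the Symbolic Perseus model solved in the study.

```python
import numpy as np

# Hypothetical two-state ('ok', 'failed') model per component; the factored
# belief is a product of per-component marginals instead of one joint vector.
N_COMPONENTS = 4
P_DETERIORATE = 0.1            # P(failed at t+1 | ok at t), no maintenance
OBS_MODEL = {                  # P(observation | true state)
    "ok":     {"good_signal": 0.9, "bad_signal": 0.1},
    "failed": {"good_signal": 0.3, "bad_signal": 0.7},
}

def predict(belief_ok):
    """Transition step of the DBN: each component may deteriorate."""
    return belief_ok * (1.0 - P_DETERIORATE)

def update(belief_ok, observation):
    """Observation step: Bayes rule on the per-component marginal."""
    like_ok = OBS_MODEL["ok"][observation]
    like_failed = OBS_MODEL["failed"][observation]
    numer = like_ok * belief_ok
    denom = numer + like_failed * (1.0 - belief_ok)
    return numer / denom

belief = np.ones(N_COMPONENTS)             # start fully confident all are ok
observations = ["good_signal", "bad_signal", "bad_signal", "good_signal"]
for comp, obs in enumerate(observations):
    belief[comp] = update(predict(belief[comp]), obs)
print("P(component ok):", np.round(belief, 3))
# A simple policy could trigger proactive maintenance when P(ok) drops below
# a threshold, mimicking the predefined strategies compared in the study.
```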
Procedia PDF Downloads 134245 Coordinative Remote Sensing Observation Technology for a High Altitude Barrier Lake
Authors: Zhang Xin
Abstract:
Barrier lakes are lakes formed when water is impounded in valleys, river valleys, or riverbeds after being blocked by landslides, earthquakes, debris flows, or other factors. They pose great potential safety hazards: when enough water has accumulated, the dam may burst in the event of a strong earthquake or rainstorm, the lake water overflows, and large-scale flood disasters result. In order to ensure the safety of people's lives and property downstream, it is very necessary to monitor barrier lakes. However, it is very difficult and time-consuming to monitor a barrier lake manually in high-altitude areas due to the harsh climate and steep terrain. With the development of Earth observation technology, remote sensing monitoring has become one of the main ways to obtain observation data. Compared with a single satellite, multi-satellite cooperative remote sensing observation has more advantages: its spatial coverage is extensive, observation time is continuous, imaging types and bands are abundant, and it can monitor and respond quickly to emergencies and complete complex monitoring tasks. Monitoring with multi-temporal and multi-platform remote sensing satellites can provide a variety of observation data in time, acquire key information such as the water level and water storage capacity of the barrier lake, scientifically assess the state of the barrier lake, and reasonably predict its future development trend. In this study, Lake Sarez, which formed on February 18, 1911, in the central part of the Pamir as a result of blockage of the Murgab River valley by a landslide triggered by a strong earthquake with a magnitude of 7.4 and an intensity of 9, is selected as the research area. Since its formation, Lake Sarez has aroused widespread international concern about its safety. At present, mechanical methods are more commonly used in international analyses of the safety of Lake Sarez, and remote sensing methods are seldom used. This study combines remote sensing data with field observation data and uses 'space-air-ground' joint observation technology to study the changes in the water level and water storage capacity of Lake Sarez over recent decades and to evaluate its safety. A dam-break scenario is simulated, and the future development trend of Lake Sarez is predicted. The results show that: 1) in recent decades, the water level of Lake Sarez has not changed much and has remained stable; 2) unless there is a strong earthquake or heavy rain, a breach of Lake Sarez is unlikely under normal conditions; 3) Lake Sarez will remain stable in the future, but it is necessary to establish an early warning system for remote sensing of the Lake Sarez area; 4) the coordinative remote sensing observation technology is feasible for the high-altitude barrier lake of Sarez.Keywords: coordinative observation, disaster, remote sensing, geographic information system, GIS
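The abstract does not name the image-processing steps, so the following Python sketch is only an assumed illustration of one common way to extract a lake's water surface area from optical imagery: thresholding a Normalized Difference Water Index (NDWI) computed from green and near-infrared bands. The synthetic scene, band values, and threshold are hypothetical.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index from green and near-infrared bands."""
    return (green - nir) / (green + nir + 1e-9)

def water_surface_area(green_band, nir_band, pixel_area_m2, threshold=0.2):
    """Estimate water surface area (km^2) from two co-registered band arrays."""
    water_mask = ndwi(green_band, nir_band) > threshold
    return water_mask.sum() * pixel_area_m2 / 1e6

# Hypothetical 1000x1000 scene at 10 m resolution: bright-NIR land with a
# darker-NIR (water-like) patch in the middle.
rng = np.random.default_rng(1)
green = rng.uniform(0.10, 0.20, (1000, 1000))
nir = rng.uniform(0.25, 0.35, (1000, 1000))
nir[300:700, 300:700] = rng.uniform(0.01, 0.05, (400, 400))   # water absorbs NIR
area_km2 = water_surface_area(green, nir, pixel_area_m2=10 * 10)
print(f"estimated water surface area: {area_km2:.1f} km^2")
# Repeating the estimate over multi-temporal scenes gives an area time series,
# which can be combined with bathymetry to track water level and storage.
```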
Procedia PDF Downloads 127244 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments
Authors: Rohit Dey, Sailendra Karra
Abstract:
This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (vis-à-vis, GPS) that is shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This can be attributed to the fact that robots in a swarm need to share information regarding their locations and the signals received from targets to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame: the relative coordinate frames of individual robots are unified whenever they are within communication range, therefore making the system more robust in real scenarios. Our algorithm assumes that all the robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows a Levy-walk random exploration until it comes within range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their own coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas already explored, information about the surroundings, and the target signal at their location to make decisions about their future movement based on the search algorithm. During the process of exploration, there can be several small groups of robots having their own coordinate systems, but eventually all the robots are expected to come under one global coordinate frame in which they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), running without global information, can still achieve results comparable to basic PSO working with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms to work in GPS-denied environments.Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems
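The unification step itself is not spelled out in the abstract, so the Python sketch below shows one standard way two robots could align their frames from a mutual range-and-bearing exchange: A's range and bearing to B, together with B's bearing back to A, give B's pose in A's frame, after which anything B has mapped can be re-expressed in A's coordinates. The measurement model and numbers are assumptions for illustration.

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def unify_frames(range_ab, bearing_ab, bearing_ba):
    """Return (t, phi): robot B's position and frame orientation expressed in
    robot A's frame, from A's range/bearing to B and B's bearing back to A."""
    t = range_ab * np.array([np.cos(bearing_ab), np.sin(bearing_ab)])
    phi = bearing_ab + np.pi - bearing_ba        # relative frame rotation
    return t, phi

def to_frame_a(points_b, t, phi):
    """Map points expressed in B's frame into A's frame."""
    return (rot(phi) @ np.asarray(points_b).T).T + t

# Hypothetical check: B sits 5 m away at 30 deg in A's frame, rotated by 40 deg.
true_t = 5.0 * np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
true_phi = np.radians(40)
# Ideal measurements each robot would report:
bearing_ab = np.radians(30)                      # A's bearing to B
a_in_b = rot(-true_phi) @ (-true_t)              # A's position seen from B
bearing_ba = np.arctan2(a_in_b[1], a_in_b[0])    # B's bearing to A
t, phi = unify_frames(5.0, bearing_ab, bearing_ba)
explored_cell_in_b = np.array([[1.0, 2.0]])      # a cell B has already explored
print("recovered rotation (deg):", np.degrees(phi))
print("cell re-expressed in A's frame:", to_frame_a(explored_cell_in_b, t, phi))
```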
Procedia PDF Downloads 137243 Sertraline Chronic Exposure: Impact on Reproduction and Behavior on the Key Benthic Invertebrate Capitella teleta
Authors: Martina Santobuono, Wing Sze Chan, Elettra D'Amico, Henriette Selck
Abstract:
Chemicals are fundamental to many aspects of daily life in modern society. We use a wide range of substances, including polychlorinated compounds, pesticides, plasticizers, and pharmaceuticals, to name a few. These compounds are produced in large quantities, which has led to their introduction into the environment and food resources. Municipal and industrial effluents, landfills, and agricultural runoff are a few examples of sources of chemical pollution. Many of these compounds, such as pharmaceuticals, have been proven to mimic or alter the performance of the hormone system, thus disrupting its normal function and altering the behavior and reproductive capability of non-target organisms. Antidepressants are pharmaceuticals commonly detected in the environment, usually in the range of ng L⁻¹ to µg L⁻¹. Since they are designed to have a biological effect at low concentrations, they might pose a risk to native species, especially if exposure lasts for long periods. Hydrophobic antidepressants, like the selective serotonin reuptake inhibitor (SSRI) Sertraline, can sorb to particles in the water column and eventually accumulate in the sediment compartment. Thus, deposit-feeding organisms may be at particular risk of exposure. The polychaete Capitella teleta is widespread in organically enriched estuarine sediments and is a key deposit feeder involved in sediment geochemical processes. Since antidepressants are neurotoxic chemicals and endocrine disruptors, the aim of this work was to test whether sediment-associated Sertraline impacts burrowing and feeding behavior as well as reproductive capability in Capitella teleta in a chronic exposure set-up, which better mimics what happens in the environment. Seven-day-old juveniles were selected and exposed to different concentrations of Sertraline for an entire generation, until the mature stage was reached. This work showed that some concentrations of Sertraline altered growth and the time of first reproduction in Capitella teleta juveniles, potentially compromising the population's capability of survival. Acknowledgments: This Ph.D. position is part of the CHRONIC project “Chronic exposure scenarios driving environmental risks of Chemicals”, which is an Innovative Training Network (ITN) funded by the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Actions (MSCA).Keywords: antidepressants, Capitella teleta, chronic exposure, endocrine disruption, sublethal endpoints, neurotoxicity
Procedia PDF Downloads 95242 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework
Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security, and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) is proposed, which enhances the resilience of CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model is constructed, which incorporates an information layer depicting the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel paths for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of CSoS-STRE are demonstrated through a case study. In addition, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, which are based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles
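The R-INSGA method cannot be reconstructed from the abstract, but the Python sketch below illustrates the kind of centrality-based baseline it is compared against: ranking damaged entities for recovery by degree or betweenness centrality on a small hypothetical combat network (using networkx). The topology and the damaged set are illustrative assumptions.

```python
import networkx as nx

# Hypothetical combat network: sensors (S), deciders (D), influencers (I)
# linked along simplified OODA-style information flows.
edges = [("S1", "D1"), ("S2", "D1"), ("S3", "D2"), ("D1", "I1"),
         ("D1", "I2"), ("D2", "I2"), ("S2", "D2"), ("D2", "I3")]
G = nx.Graph(edges)

damaged = ["D1", "S2", "I2"]                    # entities lost to an attack

# Baseline recovery strategies: repair the most central damaged entities first.
degree_rank = sorted(damaged, key=lambda n: nx.degree_centrality(G)[n],
                     reverse=True)
betweenness_rank = sorted(damaged, key=lambda n: nx.betweenness_centrality(G)[n],
                          reverse=True)
print("degree-based recovery order:     ", degree_rank)
print("betweenness-based recovery order:", betweenness_rank)
# A resilience-oriented optimizer such as R-INSGA would instead search over
# recovery sequences (and team travel paths) to maximize recovered performance.
```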
Procedia PDF Downloads 15