Search results for: disaster scenarios
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1981

301 Reliability Modeling of Repairable Subsystems in Semiconductor Fabrication: A Virtual Age and General Repair Framework

Authors: Keshav Dubey, Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta

Abstract:

In the semiconductor capital equipment industry, effective modeling of repairable system reliability is crucial for optimizing maintenance strategies and ensuring operational efficiency. However, repairable system reliability modeling using a renewal process is not as popular in the semiconductor equipment industry as it is in the locomotive and automotive industries, and utilization of this approach will help optimize maintenance practices. This paper presents a structured framework that leverages both parametric and non-parametric approaches to model the reliability of repairable subsystems based on operational data, maintenance schedules, and system-specific conditions. Data is organized at the equipment ID level, facilitating trend testing to uncover failure patterns and system degradation over time. For non-parametric modeling, the Mean Cumulative Function (MCF) approach is applied, offering a flexible method to estimate the cumulative number of failures over time without assuming an underlying statistical distribution. This allows for empirical insights into subsystem failure behavior based on historical data. On the parametric side, virtual age modeling, along with Homogeneous and Non-Homogeneous Poisson Process (HPP and NHPP) models, is employed to quantify the effect of repairs and the aging process on subsystem reliability. These models allow for a more structured analysis by characterizing repair effectiveness and system wear-out trends over time. A comparison of various Generalized Renewal Process (GRP) approaches highlights their utility in modeling different repair effectiveness scenarios. These approaches provide a robust framework for assessing the impact of maintenance actions on system performance and reliability.
By integrating both parametric and non-parametric methods, this framework offers a comprehensive toolset for reliability engineers to better understand equipment behavior, assess the effectiveness of maintenance activities, and make data-driven decisions that enhance system availability and operational performance in semiconductor fabrication facilities.
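The non-parametric MCF estimate described in this abstract can be sketched in a few lines. The following is a minimal, censoring-aware (Nelson-Aalen style) version, assuming each equipment ID contributes a list of failure ages and an observation-end age; it is an illustration of the general technique, not the authors' implementation.

```python
def mean_cumulative_function(histories):
    """Non-parametric MCF estimate for a fleet of repairable units.

    histories: list of (failure_ages, censor_age) pairs, one per
    equipment ID; failure_ages is a sorted list of ages at failure/repair
    and censor_age is the unit's age at end of observation.
    Returns (event_ages, mcf_values).
    """
    # All distinct failure ages across the fleet
    event_ages = sorted({t for ages, _ in histories for t in ages})
    mcf, cum = [], 0.0
    for age in event_ages:
        # Units still under observation ("at risk") at this age
        at_risk = sum(1 for _, c in histories if c >= age)
        # Failures occurring exactly at this age, fleet-wide
        failures = sum(ages.count(age) for ages, _ in histories)
        cum += failures / at_risk  # Nelson-Aalen style increment
        mcf.append(cum)
    return event_ages, mcf
```

An increasing slope of the MCF over age suggests wear-out (an NHPP-like trend), while a roughly constant slope is consistent with an HPP; this is the trend test the abstract refers to.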

Keywords: reliability, maintainability, homogeneous Poisson process, repairable system

Procedia PDF Downloads 19
300 Healthcare Fire Disasters: Readiness, Response and Resilience Strategies: A Real-Time Experience of a Healthcare Organization of North India

Authors: Raman Sharma, Ashok Kumar, Vipin Koushal

Abstract:

Healthcare facilities are usually seen as havens that manage the consequences of external incidents, but the situation becomes far more challenging when such facilities are themselves affected by internal hazards. Internal hazards are arguably more disruptive than external incidents because patients depend on supportive measures and are neither in a position to respond to a crisis nor aware of how to respond. The situation is especially arduous when critical care areas such as Intensive Care Units (ICUs) and Operating Rooms (ORs) are involved, since the condition of the patients housed there makes it difficult to move them at short notice. Healthcare organisations use many types of electrical equipment, inflammable liquids, and medical gases, often at a single point of use, so any error can spark a fire. Although healthcare facilities face many fire hazards, damage caused by smoke is often more severe than that caused by flames. Besides burns, smoke inhalation is the primary cause of fatality in fire-related incidents; the inhalation of fire smoke, a complex mixture of gases in addition to carbon monoxide, appears to be the greatest cause of illness and mortality in fire victims, particularly in enclosed places. Healthcare organisations therefore require a well-planned disaster mitigation strategy and proactive, well-prepared manpower to handle all exigencies resulting from internal as well as external hazards. This case report delineates a real OR fire incident in the Emergency Operation Theatre (OT) of a tertiary care multispecialty hospital and details the real-life challenges encountered by OR staff in preserving both life and property.
No adverse event was reported during or after the fire, and this case report aims to collate the lessons identified from the incident in a sequential and logical manner. Timely smoke evacuation, and prevention of smoke spread to adjoining patient care areas through appropriate measures, viz. compartmentation, pressurisation, dilution, ventilation, buoyancy, and airflow, helped to reduce smoke-related casualties. Precautionary measures may be implemented to mitigate such incidents: careful coordination, continuous training, and fire drill exercises can improve overall outcomes and minimise the possibility of these potentially fatal problems, thereby making the healthcare environment safer for every worker and patient.

Keywords: healthcare, fires, smoke, management, strategies

Procedia PDF Downloads 68
299 Numerical Modelling of the Influence of Meteorological Forcing on Water-Level in the Head Bay of Bengal

Authors: Linta Rose, Prasad K. Bhaskaran

Abstract:

Water-level information along the coast is very important for disaster management, navigation, shoreline management planning, coastal engineering and protection works, port and harbour activities, and for a better understanding of near-shore ocean dynamics. The water-level variation along a coast arises from various factors such as astronomical tides and meteorological and hydrological forcing. The study area is the Head Bay of Bengal, which is highly vulnerable to flooding events caused by monsoons, cyclones and sea-level rise. The study aims to explore the extent to which wind and surface pressure can influence water-level elevation, in view of the low-lying topography of the coastal zones in the region. The ADCIRC hydrodynamic model has been customized for the Head Bay of Bengal, discretized using flexible finite elements and validated against tide gauge observations. Monthly mean climatological wind and mean sea level pressure fields from the ERA-Interim reanalysis were used as input forcing to simulate water-level variation in the Head Bay of Bengal, in addition to tidal forcing. The output water-level was compared against that produced using tidal forcing alone, so as to quantify the contribution of meteorological forcing to water-level. The average contribution of meteorological fields to water-level in January is 5.5% at a deep-water location and 13.3% at a coastal location. During the month of July, when the monsoon winds are strongest in this region, this increases to 10.7% and 43.1%, respectively, at the deep-water and coastal locations. The model output was tested by varying the input conditions of the meteorological fields in an attempt to quantify the relative significance of wind speed and wind direction on water-level. Under uniform wind conditions, the results showed a higher contribution of meteorological fields for south-west winds than for north-east winds at higher wind speeds.
A comparison of the spectral characteristics of the output water-level with that generated by tidal forcing alone showed additional modes with seasonal and annual signatures. Moreover, the non-linear monthly mode was found to be weaker than in the tide-only simulation. These results indicate that meteorological fields do not have much effect on water-level at periods of less than a day, and that they instead induce non-linear interactions between existing modes of oscillation. The study signifies the role of meteorological forcing under fair weather conditions and points out that a combination of multiple forcing fields, including tides, wind, atmospheric pressure, waves, precipitation and river discharge, is essential for efficient and effective forecast modelling, especially during extreme weather events.

Keywords: ADCIRC, Head Bay of Bengal, mean sea level pressure, meteorological forcing, water-level, wind

Procedia PDF Downloads 221
298 Artificial Intelligence and Governance in Relevance to Satellites in Space

Authors: Anwesha Pathak

Abstract:

With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid in STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process vast amounts of data received from satellites. AI algorithms can analyze satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can be leveraged to enhance the security of satellite systems. AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices.
Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data sharing initiatives. AI and governance are critical aspects of satellite activities in space. They enable efficient and secure operations, ensure responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.

Keywords: satellite, space debris, traffic, threats, cyber security

Procedia PDF Downloads 76
297 Assessing the Impact of High Fidelity Human Patient Simulation on Teamwork among Nursing, Medicine and Pharmacy Undergraduate Students

Authors: S. MacDonald, A. Manuel, R. Law, N. Bandruak, A. Dubrowski, V. Curran, J. Smith-Young, K. Simmons, A. Warren

Abstract:

High fidelity human patient simulation has been used for many years by health sciences education programs to foster critical thinking, engage learners, improve confidence, improve communication, and enhance psychomotor skills. Unfortunately, there is a paucity of research on the use of high fidelity human patient simulation to foster teamwork among nursing, medicine and pharmacy undergraduate students. This study compared the impact of high fidelity and low fidelity simulation education on teamwork among nursing, medicine and pharmacy students. For the purpose of this study, two innovative teaching scenarios were developed based on the care of an adult patient experiencing acute anaphylaxis: one high fidelity using a human patient simulator and one low fidelity using case based discussions. A within subjects, pretest-posttest, repeated measures design was used with two treatment levels and random assignment of individual subjects to teams of two or more professions. A convenience sample of twenty-four (n=24) undergraduate students participated, including: nursing (n=11), medicine (n=9), and pharmacy (n=4). The Interprofessional Teamwork Questionnaire was used to assess changes in students’ perception of their functionality within the team, the importance of interprofessional collaboration, comprehension of roles, and confidence in communication and collaboration. Student satisfaction was also assessed. Students reported significant improvements in their understanding of the importance of interprofessional teamwork and of the roles of nursing and medicine on the team after participation in both the high fidelity and the low fidelity simulation. However, only participants in the high fidelity simulation reported a significant improvement in their ability to function effectively as a member of the team. All students reported that both simulations were a meaningful learning experience and all students would recommend both experiences to other students.
These findings suggest there is merit in both high fidelity and low fidelity simulation as a teaching and learning approach to foster teamwork among undergraduate nursing, medicine and pharmacy students. However, participation in high fidelity simulation may provide a more realistic opportunity to practice and function as an effective member of the interprofessional health care team.

Keywords: acute anaphylaxis, high fidelity human patient simulation, low fidelity simulation, interprofessional education

Procedia PDF Downloads 231
296 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
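Training the neural network described above requires a fast generator of fractional process samples. The following is a small illustrative simulator of a fractional Ornstein-Uhlenbeck path, using an exact Cholesky construction of fractional Gaussian noise; it is O(n²) in memory and not the fast large-sample algorithm the authors developed, only a sketch of the underlying process. All parameter values are invented for illustration.

```python
import numpy as np

def fbm_increments(n, hurst, dt, rng):
    """Exact fractional Gaussian noise via Cholesky of its covariance.
    Fine for small n; a large training set needs a faster generator
    (e.g. circulant embedding)."""
    k = np.arange(n)
    # Autocovariance of fractional Gaussian noise at lag k, step dt
    gamma = 0.5 * (np.abs(k - 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k + 1) ** (2 * hurst)) * dt ** (2 * hurst)
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return L @ rng.standard_normal(n)

def fou_path(n=256, hurst=0.1, theta=1.0, mu=0.0, sigma=0.5, dt=0.01, seed=0):
    """Euler discretisation of dX = theta*(mu - X) dt + sigma dB^H."""
    rng = np.random.default_rng(seed)
    dB = fbm_increments(n, hurst, dt, rng)
    x = np.zeros(n + 1)
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dB[i]
    return x

path = fou_path()  # one rough (H = 0.1) sample path
```

A network estimating (hurst, theta, sigma) would then be trained on many such paths labelled with the parameters used to generate them.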

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 118
295 Women’s Colours in Digital Innovation

Authors: Daniel J. Patricio Jiménez

Abstract:

Digital reality demands new ways of thinking, flexibility in learning, the acquisition of new competencies, visualizing reality under new approaches, generating open spaces, understanding dimensions in continuous change, and so on. We need inclusive growth, where colours are not lacking, where lights do not give a distorted reality, where science is not half-truth. This study draws on a documentary and bibliographic review, providing a reflective and analytical account of the current reality; in this context, deductive and inductive methods have been applied to different multidisciplinary information sources. Women today and tomorrow are a strategic element in science and the arts, which, under the umbrella of sustainability, implies ‘meeting current needs without detriment to future generations’. We must build new scenarios that treat ‘the feminine and the masculine’ as an inseparable whole, encouraging cooperative behavior; nothing is exclusive or excluding, and that is where true respect for diversity must be based. We are all part of an ecosystem, which we will make better as long as there is a real balance in terms of gender. It is the time of ‘the lifting of the veil’; in other words, it is the time to discover the pseudonyms, the women who painted, wrote, investigated, and recorded advances. However, the current reality demands much more; we must remove doors where they are not needed. Mass data processing, big data, needs to incorporate algorithms under the perspective of ‘the feminine’. However, most STEM (science, technology, engineering, and math) students are men. Our way of doing science is biased, focused on honors and short-term results to the detriment of sustainability. Historically, the canons of beauty, and the ways of looking, perceiving, and feeling, depended on the circumstances and interests of each moment, and women had no voice in this.
Parallel to science, there is an under-representation of women in the arts: not so much in universities, but in galleries, museums, and art dealerships, where colours impoverish the gaze and once again highlight the gender gap and the silence of the feminine. Art registers sensations by divining the future; science will turn them into reality. The uniqueness of the so-called new normality requires women to be protagonists both in new forms of emotion and thought, and in the experimentation and development of new models. This will result in women playing a decisive role in the so-called ‘5.0 society’ or, in other words, in a more sustainable, more humane world.

Keywords: art, digitalization, gender, science

Procedia PDF Downloads 165
294 Synthesis of High-Pressure Performance Adsorbent from Coconut Shells Polyetheretherketone for Methane Adsorption

Authors: Umar Hayatu Sidik

Abstract:

Application of liquid petroleum-based fuels (petrol and diesel) as transportation fuels causes emissions of greenhouse gases (GHGs), while natural gas (NG) reduces these emissions. At present, compression and liquefaction are the most mature technologies used for the transportation system. For transportation use, compression requires high pressure (200–300 bar), while liquefaction is impractical. A relatively low pressure of 30–40 bar is achievable with adsorbed natural gas (ANG), which stores nearly as much gas as compressed natural gas (CNG). In this study, adsorbents for high-pressure adsorption of methane (CH₄) were prepared from coconut shells and polyetheretherketone (PEEK) using potassium hydroxide (KOH) and microwave-assisted activation. Design-Expert software (version 7.1.6) was used for optimization and prediction of the preparation conditions of the adsorbents for CH₄ adsorption. The effects of microwave power, activation time and quantity of PEEK on the adsorbents' performance toward CH₄ adsorption were investigated. The adsorbents were characterized by Fourier transform infrared spectroscopy (FTIR), thermogravimetric and derivative thermogravimetric (TG/DTG) analysis, and scanning electron microscopy (SEM). The CH₄ adsorption capacities of the adsorbents were determined using the volumetric method at pressures of 5, 17, and 35 bar, at both ambient temperature and 5 °C. Isotherm and kinetics models were used to validate the experimental results. The optimum preparation conditions were found to be 15 wt% PEEK, 3 minutes activation time and 300 W microwave power. The highest CH₄ uptake of 9.7045 mmol CH₄ adsorbed/g adsorbent was recorded by M33P15 (300 W microwave power, 3 min activation time and 15 wt% PEEK) among the sorbents at ambient temperature and 35 bar. The CH₄ equilibrium data are well correlated with the Sips, Toth, Freundlich and Langmuir isotherms.
The isotherm analysis revealed that the Sips model gives the best fit, while the kinetics studies revealed that the pseudo-second-order kinetic model best describes the adsorption process. In all scenarios studied, a decrease in temperature led to an increase in adsorption. The adsorbent (M33P15) maintained its stability even after seven adsorption/desorption cycles. The findings reveal the potential of coconut shell-PEEK as a CH₄ adsorbent.
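Fitting the Sips isotherm mentioned in this abstract is a standard non-linear least-squares problem. The sketch below uses SciPy's `curve_fit`; the equilibrium data points are hypothetical (only the ~9.7 mmol/g uptake at 35 bar is taken from the abstract), so this illustrates the fitting procedure rather than reproducing the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

def sips(p, q_max, b, n):
    """Sips isotherm: q = q_max * (b*p)**n / (1 + (b*p)**n).
    Reduces to Langmuir at n = 1 and behaves like Freundlich
    at low coverage."""
    return q_max * (b * p) ** n / (1.0 + (b * p) ** n)

# Hypothetical equilibrium points (pressure in bar, uptake in mmol/g)
p_bar = np.array([5.0, 17.0, 35.0])
q_obs = np.array([4.1, 7.8, 9.7])

# Positivity bounds keep (b*p)**n well defined during the fit
popt, _ = curve_fit(sips, p_bar, q_obs, p0=[10.0, 0.1, 1.0],
                    bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, 5.0]))
q_max, b, n = popt
```

Comparing the residual sum of squares (or AIC) of the fitted Sips, Toth, Freundlich and Langmuir models is how the "best fit" claim in the abstract would typically be assessed.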

Keywords: adsorption, desorption, activated carbon, coconut shells, polyetheretherketone

Procedia PDF Downloads 67
293 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
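The latent-variable construction that PLS uses to handle "more predictors than observations" can be sketched compactly. The following is a minimal NumPy-only NIPALS implementation of PLS1 (single response) on synthetic correlated data; it illustrates the component-extraction idea only, not the authors' sparse, penalised estimator of predictor weights.

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """PLS1 via the NIPALS algorithm. X and y are assumed centred;
    no scaling or cross-validation. Returns regression coefficients
    mapping the original (centred) predictors to the fitted response."""
    X, y = X.astype(float).copy(), y.astype(float).copy()
    n, p = X.shape
    W = np.zeros((p, n_components))  # predictor weights
    P = np.zeros((p, n_components))  # X loadings
    q = np.zeros(n_components)       # y loadings
    for a in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)
        t = X @ w                    # component scores
        tt = t @ t
        P[:, a] = X.T @ t / tt
        q[a] = (y @ t) / tt
        X -= np.outer(t, P[:, a])    # deflate X
        y -= q[a] * t                # deflate y
        W[:, a] = w
    return W @ np.linalg.solve(P.T @ W, q)

# Demo: 200 correlated predictors driven by 3 latent factors, 40 samples
rng = np.random.default_rng(0)
T = rng.standard_normal((40, 3))
X = T @ rng.standard_normal((3, 200)) + 0.01 * rng.standard_normal((40, 200))
y = T @ np.array([1.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(40)
Xc, yc = X - X.mean(axis=0), y - y.mean()
beta = pls1_nipals(Xc, yc, n_components=3)
r2 = 1 - np.sum((yc - Xc @ beta) ** 2) / np.sum(yc ** 2)
```

With the number of components matched to the latent dimension, three PLS components recover the signal even though ordinary least squares is not even defined here (p = 200 > n = 40).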

Keywords: partial least squares regression, genomic data, negative shrinkage factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 49
292 Multi-Omics Integrative Analysis Coupled to Control Theory and Computational Simulation of a Genome-Scale Metabolic Model Reveal Controlling Biological Switches in Human Astrocytes under Palmitic Acid-Induced Lipotoxicity

Authors: Janneth Gonzalez, Andrés Pinzon Velasco, Maria Angarita

Abstract:

Astrocytes play an important role in various processes in the brain, including pathological conditions such as neurodegenerative diseases. Recent studies have shown that an increase in saturated fatty acids such as palmitic acid (PA) triggers pro-inflammatory pathways in the brain. The use of synthetic neurosteroids such as tibolone has demonstrated neuroprotective mechanisms. However, broad studies with a systemic point of view on the neurodegenerative role of PA and the neuroprotective mechanisms of tibolone are lacking. In this study, we performed the integration of multi-omic data (transcriptome and proteome) into a human astrocyte genome-scale metabolic model to study the astrocytic response during palmitate treatment. We evaluated metabolic fluxes in three scenarios (healthy, induced inflammation by PA, and tibolone treatment under PA inflammation). We also applied a control theory approach to identify those reactions that exert the most control in the astrocytic system. Our results suggest that PA generates a modulation of central and secondary metabolism, showing a switch in energy source use through inhibition of the folate cycle and fatty acid β-oxidation and upregulation of ketone body formation. We found 25 metabolic switches under PA-mediated cellular regulation, 9 of which were critical only in the inflammatory scenario and not in the protective tibolone one. Within these reactions, inhibitory, total, and directional coupling profiles were key findings, playing a fundamental role in the (de)regulation of metabolic pathways that may increase neurotoxicity and represent potential treatment targets. Finally, the overall framework of our approach facilitates the understanding of complex metabolic regulation, and it can be used for in silico exploration of the mechanisms of astrocytic cell regulation, directing more complex future experimental work in neurodegenerative diseases.
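Evaluating fluxes in a genome-scale metabolic model reduces to a linear program: maximise an objective flux subject to steady state (S v = 0) and capacity bounds. The toy three-reaction network below is invented purely for illustration (the paper's astrocyte model has thousands of reactions and would normally be handled with a tool such as COBRApy); it only shows the mathematical form of the problem, using SciPy's `linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A -> B -> "biomass". Stoichiometric matrix S
# (metabolites x reactions); steady state requires S @ v = 0.
S = np.array([
    [1.0, -1.0,  0.0],  # metabolite A: made by uptake, consumed by R2
    [0.0,  1.0, -1.0],  # metabolite B: made by R2, consumed by biomass
])
c = np.array([0.0, 0.0, -1.0])          # linprog minimises: negate objective
bounds = [(0, 10), (0, 100), (0, 100)]  # uptake capacity capped at 10
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
# At the optimum all flux funnels through the chain: v = [10, 10, 10]
```

Scenario comparisons like the paper's (healthy vs. PA vs. PA + tibolone) then amount to re-solving this program with condition-specific bounds derived from the omics data.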

Keywords: astrocytes, data integration, palmitic acid, computational model, multi-omics

Procedia PDF Downloads 97
291 Driver of Migration and Appropriate Policy Concern Considering the Southwest Coastal Part of Bangladesh

Authors: Aminul Haque, Quazi Zahangir Hossain, Dilshad Sharmin Chowdhury

Abstract:

Human migration is a growing concern around the world, and recurrent disasters and climate-change impacts have a great influence on it. Bangladesh is one of the most disaster-prone countries and is highly susceptible to stress migration driven by recurrent disasters and climate change. The study was conducted to investigate the factors that strongly influence current migration and the changing patterns of life and livelihood in the southwest coastal part of Bangladesh. Moreover, the study also examined the relationship between disasters and migration and the appropriate policy response. To explore this relationship, both qualitative and quantitative methods were applied in a questionnaire survey at the household level, with a simple random sampling technique used in the sampling process, along with different secondary data sources for understanding policy concerns and practices. The study explores the most influential drivers of migration and their relationship with social, economic and environmental factors. The study finds that the environmental driver has the greatest effect on the intention of permanent migration (t=1.481, p-value=0.000) at the 1 percent significance level. A significant number of respondents indicated that the abrupt pattern of cyclones, floods, salinity intrusion and rainfall is the most significant environmental driver in the decision on permanent migration. The study also found that temporary migration has increased two-fold compared with ten years ago. It also appears from the study that environmental factors have great implications for the changing pattern of occupations in the study area: about 76% of respondents have now changed their mode of livelihood compared with their traditional practices. The study shows that migration has its foremost impact on children and women, increasing hardship and undermining social security.
The route of permanent migration is not smooth; indeed, these migrations are creating urban pressure and conflict in the Chittagong Hill Tracts of Bangladesh. The study notes that existing policy provides no safeguard for stress migrants and no measures for safe migration and resettlement, beyond emergency response and shelter. The majority of people (98%) believe that migration should not be an adaptation strategy, but, contrary to this, a younger group of respondents believes that safe migration could be an adaptation strategy that would bring better results than other resilience strategies. On the other hand, a significant number of respondents stated that appropriate policy measures could serve as an adaptation strategy by helping to form a resilient community and reducing migration through meaningful livelihood options with appropriate protection measures.

Keywords: environmental driver, livelihood, migration, resilience

Procedia PDF Downloads 264
290 Differentiated Instruction for All Learners: Strategies for Full Inclusion

Authors: Susan Dodd

Abstract:

This presentation details a methodology for teachers to identify and support a population of students who have historically been overlooked with regard to their educational needs. The twice-exceptional (2e) student is a learner who is considered gifted and also has a learning disability, as defined by the Individuals with Disabilities Education Act (IDEA). Many of these students remain underserved throughout their educational careers because their exceptionalities may mask each other, resulting in a special population of students who are not achieving their fullest potential. There are three common scenarios that may make the identification of a 2e student challenging. First, the student may have been identified as gifted, and her disability may go unnoticed; she could also be considered an under-achiever, or she may be able to compensate for her disability until the schoolwork becomes more challenging. In the second scenario, the student may be identified as having a learning disability and receive only remedial services, where his giftedness will not be highlighted; his overall IQ scores may be misleading because they were affected by his learning disability. In the third scenario, the student is able to compensate for her disability well enough to maintain average scores, and she goes undetected as both gifted and learning disabled. Research in the area identifies the complexity involved in identifying 2e students and shows that multiple forms of assessment are required. It is important for teachers to be aware of the common characteristics exhibited by many 2e students so these learners can be identified and appropriately served. Once 2e students have been identified, teachers are then challenged to meet the varying needs of these exceptional learners. Strength-based teaching entails simultaneously providing gifted instruction as well as individualized accommodations for those students.
Research in this field has yielded strategies that have proven helpful for teaching 2e students, as well as other students who may be struggling academically. Differentiated instruction, while necessary in all classrooms, is especially important for 2e students, as is encouragement for academic success. Teachers who take the time to really know their students will have a better understanding of each student’s strengths and areas for growth, and can therefore tailor instruction to extend the intellectual capacities for optimal achievement. Teachers should also understand that some learning activities can prove very frustrating to students, and these activities can be modified based on individual student needs. Because 2e students can often become discouraged by their learning challenges, it is especially important for teachers to assist students in recognizing their own strengths and maintaining motivation for learning. Although research on the needs of 2e students has spanned two decades, this population remains underserved in many educational institutions. Teacher awareness of the identification of and the support strategies for 2e students is critical for their success.

Keywords: gifted, learning disability, special needs, twice exceptional

Procedia PDF Downloads 179
289 Analysing the Applicability of a Participatory Approach to Life Cycle Sustainability Assessment: Case Study of a Housing Estate Regeneration in London

Authors: Sahar Navabakhsh, Rokia Raslan, Yair Schwartz

Abstract:

Decision-making on regeneration of housing estates, whether to refurbish or re-build, has been mostly triggered by economic factors. To enable sustainable growth, it is vital that environmental and social impacts of different scenarios are also taken into account. The methodology that includes all three sustainable development pillars is called Life Cycle Sustainability Assessment (LCSA), which comprises Life Cycle Assessment (LCA) for the assessment of environmental impacts of buildings. In current practice, LCA is typically conducted after the design stage and by sustainability experts. Not only is undertaking an LCA at this stage less effective, but issues such as the limited scope for the definition and assessment of environmental impacts, the implications of changes in the system boundary, the alteration of each of the variable metrics, the employment of different Life Cycle Impact Assessment methods, and the use of various inventory data for Life Cycle Inventory Analysis can result in considerably contrasting results. Given the niche nature and scarce specialist domain of LCA of buildings, the majority of stakeholders do not contribute to the generation or interpretation of the impact assessment, and the results can be generated and interpreted subjectively due to the mentioned uncertainties. For an effective and democratic assessment of environmental impacts, different stakeholders, and in particular the community and the design team, should collaborate in the process of data collection, assessment and analysis. This paper examines and evaluates a participatory approach to LCSA through the analysis of a case study of a housing estate in South West London. The study has been conducted through tier-based collaborative methods to collect and share data via surveys and co-design workshops with the community members and the design team as the main stakeholders. 
The assessment of lifecycle impacts is conducted throughout the process and has influenced the decision-making on the design of the Community Plan. The evaluation concludes that this approach improves assessment transparency and outcomes, alongside other socio-economic benefits of identifying and engaging the most contributive stakeholders in the process of conducting LCSA.

Keywords: life cycle assessment, participatory LCA, life cycle sustainability assessment, participatory processes, decision-making, housing estate regeneration

Procedia PDF Downloads 147
288 Material Supply Mechanisms for Contemporary Assembly Systems

Authors: Rajiv Kumar Srivastava

Abstract:

Manufacturing of complex products such as automobiles and computers requires a very large number of parts and sub-assemblies. The design of mechanisms for delivery of these materials to the point of assembly is an important manufacturing system and supply chain challenge. Different approaches to this problem have evolved for assembly lines designed to make large volumes of standardized products. However, contemporary assembly systems are required to concurrently produce a variety of products using approaches such as mixed model production, and at times even mass customization. In this paper we examine the material supply approaches for variety production in moderate to large volumes. The conventional approach for material delivery to high volume assembly lines is to supply and stock materials line-side. However, for certain materials, especially when the same or similar items are used along the line, it is more convenient to supply materials in kits. Kitting becomes more preferable when lines concurrently produce multiple products in mixed model mode, since space requirements could increase as product/part variety increases. At times such kits may travel along with the product, while in some situations it may be better to have delivery and station-specific kits rather than product-based kits. Further, in some mass customization situations it may even be better to have a single delivery and assembly station, to which an entire kit is delivered for fitment, rather than a normal assembly line. Finally, in low-moderate volume assembly such as in engineered machinery, it may be logistically more economical to gather materials in an order-specific kit prior to launching final assembly. We have studied material supply mechanisms to support assembly systems as observed in case studies of firms with different combinations of volume and variety/customization. 
It is found that the appropriate approach tends to be a hybrid between direct line supply and different kitting modes, with the best mix being a function of the manufacturing and supply chain environment, as well as space and handling considerations. In our continuing work we are studying these scenarios further, through the use of descriptive models and progressing towards prescriptive models to help achieve the optimal approach, capturing the trade-offs between inventory, material handling, space, and efficient line supply.

Keywords: assembly systems, kitting, material supply, variety production

Procedia PDF Downloads 226
287 Impacts of Urban Morphologies on Air Pollutants Dispersion in Porto's Urban Area

Authors: Sandra Rafael, Bruno Vicente, Vera Rodrigues, Carlos Borrego, Myriam Lopes

Abstract:

Air pollution is an environmental and social issue at different spatial scales, especially in a climate change context, in which air quality is expected to decrease. Air pollution results from a combination of high emissions and unfavourable weather conditions, where wind speed and wind direction play a key role. The urban design (location and structure of buildings and trees) can either promote the dispersion of air pollutants or promote their retention within the urban area. Today, most urban areas are applying measures to adapt to future extreme climatic events. Most of these measures are grounded on nature-based solutions, namely green roofs and green areas. In this sense, studies are required to evaluate how the implementation of these actions will influence the wind flow within the urban area and, consequently, the dispersion of air pollutants. The main goal of this study was to evaluate the influence of a set of urban morphologies on the wind conditions and on the dispersion of air pollutants in a built-up area in Portugal. For that, two pollutants were analysed (NOx and PM10) and four scenarios were developed: i) a baseline scenario, which characterizes the current status of the study area; ii) an urban green scenario, which implies the implementation of a green area inside the domain; iii) a green roof scenario, which consists of the implementation of green roofs in a specific area of the domain; and iv) a 'grey' scenario, which assumes the absence of vegetation. Two models were used, namely the Weather Research and Forecasting model (WRF) and the CFD model VADIS (pollutant dispersion in the atmosphere under variable wind conditions). The WRF model was used to initialize the CFD model, while the latter was used to perform the set of numerical simulations, on an hourly basis. 
The implementation of the green urban area promoted a reduction of air pollutants' concentrations of 16% on average, related to the increase in wind flow, which promotes the dispersion of air pollutants; the application of green roofs, by contrast, showed an increase in concentrations (reaching 60% during specific time periods). Overall, the results showed that a strategic placement of vegetation in cities has the potential to make an important contribution to increasing the dispersion of air pollutants and thereby to improving the air quality and sustainability of urban environments.

Keywords: air pollutants dispersion, wind conditions, urban morphologies, road traffic emissions

Procedia PDF Downloads 346
286 Research on Quality Assurance in African Higher Education: A Bibliometric Mapping from 1999 to 2019

Authors: Luís M. João, Patrício Langa

Abstract:

The article reviews the literature on quality assurance (QA) in African higher education studies (HES) through a bibliometric mapping of papers published between 1999 and 2019. Specifically, the article highlights the nuances of knowledge production in four scientific databases: Scopus, Web of Science (WoS), African Journal Online (AJOL), and Google Scholar. The analysis included 531 papers, of which 127 are from Scopus, 30 from Web of Science, 85 from African Journal Online, and 259 from Google Scholar. In essence, these papers were written by 284 authors from 231 institutions and 69 different countries (i.e., Africa=54 and outside Africa=15). The results map the existing state of knowledge production in the field. This analysis allows readers to understand the growth and development of the field during the two-decade period, identify key contributors, and observe potential trends or gaps in the research. The paper employs bibliometric mapping as its primary analytical lens. By utilizing this method, the study quantitatively assesses the publications related to QA in African HES, helping to identify patterns, collaboration networks, and disparities in research output. The bibliometric approach allows for a systematic and objective analysis of large datasets, offering a comprehensive view of knowledge production in the field. Furthermore, the study highlights the lack of shared resources available to enhance quality in higher education institutions (HEIs) in Africa. This finding underscores the importance of promoting collaborative research efforts, knowledge exchange, and capacity building within the region to improve the overall quality of higher education. The paper argues that despite the growing quantity of QA research in African higher education, there are challenges related to citation impact and access to high-impact publication avenues for African researchers. 
It emphasizes the need to promote collaborative research and resource-sharing to enhance the quality of HEIs in Africa. The analytical lenses of bibliometric mapping and the examination of publication players' scenarios contribute to a comprehensive understanding of the field and its implications for African higher education.

Keywords: Africa, bibliometric research, higher education studies, quality assurance, scientific database, systematic review

Procedia PDF Downloads 43
285 Internet-Of-Things and Ergonomics, Increasing Productivity and Reducing Waste: A Case Study

Authors: V. Jaime Contreras, S. Iliana Nunez, S. Mario Sanchez

Abstract:

Inside a manufacturing facility, we can find innumerable automatic and manual operations, all of which are relevant to the production process. Some of these processes add more value to the products than others. Manual operations tend to add value to the product since they can be found in the final assembly area or the final operations of the process, areas where a mistake or accident can increase the cost of waste exponentially. To reduce or mitigate these costly mistakes, one approach is to rely on automation to eliminate the operator from the production line, which requires a hefty investment and the development of specialized machinery. In our approach, the operator is at the center of the solution, through sufficient and adequate instrumentation, real-time reporting, and ergonomics. Efficiency and reduced cycle time can be achieved through the integration of Internet-of-Things (IoT)-ready technologies into assembly operations to enhance the ergonomics of the workstations. Augmented reality visual aids, RFID-triggered personalized workstation dimensions, and real-time data transfer and reporting can help achieve these goals. In this case study, a standard work cell will be used for real-life data acquisition, and simulation software will be used to extend the data points beyond the test cycle. Three comparison scenarios will run in the work cell. Each scenario will introduce a dimension of the ergonomics to measure its impact independently. Furthermore, these separate tests will determine the limitations of the technology and provide a reference for operating costs and the investment required. With the ability to monitor costs, productivity, cycle time, and scrap/waste in real time, the ROI (return on investment) can be determined at the different levels of integration. This case study will help to show that ergonomics in assembly lines can make a significant impact when IoT technologies are introduced. 
Ergonomics can effectively reduce waste and increase productivity with minimal investment compared with setting up a custom machine.
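The ROI logic described above can be sketched in a few lines. The function and all figures below are illustrative assumptions for one hypothetical level of IoT integration, not data from the study:

```python
def roi(investment, gains):
    """Simple return on investment: net gain over the monitored period
    divided by the up-front investment."""
    net_gain = sum(gains) - investment
    return net_gain / investment

# Hypothetical figures: a 10,000-unit investment in IoT-ready workstation
# upgrades returning 4,000 per year in reduced waste over three years.
example = roi(10_000, [4_000, 4_000, 4_000])  # (12,000 - 10,000) / 10,000
```

Because the work cell reports costs and scrap in real time, the same calculation can be re-run continuously as the monitored gains accumulate.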

Keywords: augmented reality visual aids, ergonomics, real-time data acquisition and reporting, RFID triggered workstation dimensions

Procedia PDF Downloads 214
284 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)- Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded system-on-a-chip (SoC) devices have come to contain a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupled with these three processors. The CPU and GPU offload some of the computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator because development approaches supporting the collaboration of heterogeneous processors face challenges. A systematic approach is therefore needed that takes advantage of write-once-run-anywhere portability, achieves high execution performance for the modules mapped to the various architectures, and facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for the abstraction of the cooperation of the heterogeneous processors, which supports task partition, communication, and synchronization. At its first run, the intermediate language, represented by the data flow diagram, can generate the executable code of the target processor or can be converted into high-level programming languages. The instantiation parameters efficiently control the relationship between the modules and computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system of a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. 
The experimental results show that the heterogeneous computing system achieves performance similar to that of the pure-FPGA implementation, with comparable energy efficiency, while using less than 35% of the resources.

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 118
283 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis

Authors: Jui-Teng Liao

Abstract:

The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test is (e.g., the more test items it has), the higher the reliability and validity it is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which such a question format can reliably measure learners’ L2 reading comprehension. The present study, therefore, adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study aimed to investigate the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework for estimating the score reliability of tests and validating their results. Data were collected from 108 English-as-a-second-language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format. 
The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of G theory analyses, the findings indicated that the number of test items had a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions required varying numbers of test items for reliable assessment of learners’ L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.
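The core of a G-theory analysis of this kind can be sketched for the simplest univariate crossed persons × items design. The function and data below are an illustrative simplification, not the study's multivariate mGENOVA analysis; the D-study projection shows how adding items raises the G coefficient, which is the mechanism behind the item-count findings:

```python
import numpy as np

def g_coefficient(scores, n_items_projected=None):
    """Estimate variance components for a crossed persons x items (p x i)
    design and return the relative generalizability (G) coefficient.

    scores: 2-D array, rows = persons, columns = items (0/1 or graded).
    n_items_projected: D-study item count; defaults to the observed count.
    """
    X = np.asarray(scores, dtype=float)
    n_p, n_i = X.shape
    grand = X.mean()
    person_means = X.mean(axis=1)
    item_means = X.mean(axis=0)

    # ANOVA sums of squares for the crossed p x i design
    ss_p = n_i * np.sum((person_means - grand) ** 2)
    ss_i = n_p * np.sum((item_means - grand) ** 2)
    ss_pi = np.sum((X - grand) ** 2) - ss_p - ss_i

    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_pi = ss_pi / ((n_p - 1) * (n_i - 1))

    # Expected-mean-square solutions for the variance components
    var_pi = ms_pi
    var_p = max((ms_p - ms_pi) / n_i, 0.0)
    var_i = max((ms_i - ms_pi) / n_p, 0.0)

    n_prime = n_items_projected or n_i
    # Relative G coefficient for a D-study with n_prime items
    g = var_p / (var_p + var_pi / n_prime)
    return g, {"person": var_p, "item": var_i, "residual": var_pi}
```

Projecting the same variance components to a larger hypothetical item count (the `n_items_projected` argument) is the D-study step that answers "how many items does the test need?"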

Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format

Procedia PDF Downloads 88
282 Design of In-House Test Method for Assuring Packing Quality of Bottled Spirits

Authors: S. Ananthakrishnan, U. H. Acharya

Abstract:

Whether shopping in a retail location or via the internet, consumers expect to receive their products intact. When products arrive damaged or over-packaged, the result can be customer dissatisfaction and increased cost for retailers and manufacturers. The packaging performance depends on both the transport situation and the packaging design. During transportation, packaged products are subjected to variations in vibration levels from transport vehicles that vary in frequency and acceleration while moving to their destinations. Spirits manufactured by this Company were being transported to various parts of the country by road, and there were instances of packages breaking and customer complaints. The vibration experienced on a straight road at some speed may not be the same as the vibration experienced by the same vehicle on a curve at the same speed, and this vibration may negatively affect the product or packing. Hence, it was necessary to conduct a physical road test to understand the effect of vibration on the packaged products. A field transit trial would have to be done before transportation, which entails high investment. The company management was therefore interested in developing an in-house test environment that would adequately represent the transit conditions. With the objective of developing an in-house test condition that can accurately simulate the mechanical loading scenario prevailing during the storage, handling, and transportation of the products, a brainstorming session was held with the people concerned to identify the critical factors affecting the vibration rate. The position of the corrugated box, the position of the bottle, and the speed of the vehicle were identified as factors affecting the vibration rate. Several packing scenarios were identified by the Design of Experiments methodology and simulated in the in-house test facility. Each condition was observed for 30 minutes, which was equivalent to 1000 km. The achieved vibration level was considered as the response. 
The average vibration level achieved in the simulated experiments was near the third quartile (Q3) of the actual field data. Thus, around three-fourths of the actual phenomenon was addressed, and most transit cases could be reproduced. The recommended test condition could generate vibration levels ranging from 9g to 15g, as against a maximum of only 7g generated earlier. Thus, the Company was able to test the packaged cartons satisfactorily in-house before transporting them to their destinations, assured that bottle breakage would not occur.
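The experimental design described above can be sketched as a two-level full factorial over the three identified factors, with the Q3 target computed from the responses. The factor levels and vibration readings below are illustrative assumptions; the study's actual settings and data are not published:

```python
from itertools import product
import numpy as np

# Hypothetical two-level settings for the three factors identified in
# the brainstorming session (actual levels are not published).
factors = {
    "box_position": ["bottom_layer", "top_layer"],
    "bottle_position": ["edge", "center"],
    "vehicle_speed_kmh": [40, 80],
}

# Full factorial: every combination of factor levels is one test run.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

# Illustrative vibration responses (in g) for the 2^3 = 8 runs.
responses = np.array([9.1, 10.4, 9.8, 11.2, 12.5, 13.9, 13.0, 15.0])

# Third quartile of the responses, the statistic the study compared
# against the field-transit data.
q3 = np.percentile(responses, 75)
```

The per-run responses would then feed an ANOVA to rank the factors, as the keywords below indicate.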

Keywords: ANOVA, corrugated box, DOE, quartile

Procedia PDF Downloads 125
281 Non-Invasive Assessment of Peripheral Arterial Disease: Automated Ankle Brachial Index Measurement and Pulse Volume Analysis Compared to Ultrasound Duplex Scan

Authors: Jane E. A. Lewis, Paul Williams, Jane H. Davies

Abstract:

Introduction: There is, at present, a clear and recognized need to optimize the diagnosis of peripheral arterial disease (PAD), particularly in non-specialist settings such as primary care, and this arises from several key facts. Firstly, PAD is a highly prevalent condition: in 2010, it was estimated that globally, PAD affected more than 202 million people, and this prevalence is predicted to escalate further. The disease itself, although frequently asymptomatic, can cause considerable patient suffering, with symptoms such as lower limb pain, ulceration, and gangrene which, in worst-case scenarios, can necessitate limb amputation. A further, and perhaps the most significant, consequence of PAD arises from the fact that it is a manifestation of systemic atherosclerosis and is therefore a powerful predictor of coronary heart disease and cerebrovascular disease. Objective: This cross-sectional study aimed to individually and cumulatively compare the sensitivity and specificity of (i) the ankle brachial index (ABI) and (ii) the pulse volume waveform (PVW), recorded by the same automated device, with the presence or absence of peripheral arterial disease (PAD) being verified by an ultrasound duplex scan (UDS). Methods: Patients (n = 205) referred for lower limb arterial assessment underwent an ABI and PVW measurement using volume plethysmography followed by a UDS. PAD was recorded as present if the ABI was < 0.9 (noted if > 1.30), if the PVW was graded as 2, 3, or 4, or if the UDS showed a hemodynamically significant stenosis (> 50%). The outcome measure was the agreement between the measured ABI and the interpretation of the PVW for PAD diagnosis, using UDS as the reference standard. Results: The sensitivity of ABI was 80%, specificity 91%, and overall accuracy 88%. Cohen’s kappa revealed good agreement between ABI and UDS (k = 0.7, p < .001). PVW sensitivity was 97%, specificity 81%, and overall accuracy 84%, with a good level of agreement between PVW and UDS (k = 0.67, p < .001). 
The combined sensitivity of ABI and PVW was 100%, specificity 76%, and overall accuracy 85% (k = 0.67, p < .001). Conclusions: Combining these two diagnostic modalities within one device provided a highly accurate method of ruling out PAD. Such a device could be utilized within the primary care environment to reduce the number of unnecessary referrals to secondary care, with concomitant cost savings, reduced patient inconvenience, and prioritization of urgent PAD cases.
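The diagnostic-accuracy statistics reported above all follow from the 2×2 table of each test against the UDS reference standard. The function below is a standard sketch of those formulas; the example counts are hypothetical, chosen only so that they reproduce the reported ABI figures (sensitivity 80%, specificity 91%), since the study's raw counts are not given here:

```python
def diagnostics(tp, fn, fp, tn):
    """Sensitivity, specificity, accuracy, and Cohen's kappa for an
    index test against a reference standard (2x2 confusion table)."""
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)       # true positives / all diseased
    specificity = tn / (tn + fp)       # true negatives / all healthy
    accuracy = (tp + tn) / n           # observed agreement
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return sensitivity, specificity, accuracy, kappa

# Hypothetical counts reproducing the reported ABI performance.
sens, spec, acc, kappa = diagnostics(tp=40, fn=10, fp=9, tn=91)
```

With these counts, kappa also lands close to the reported k = 0.7, illustrating how the agreement statistic follows from the same table.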

Keywords: ankle brachial index, peripheral arterial disease, pulse volume waveform, ultrasound duplex scan

Procedia PDF Downloads 166
280 Methodological Approach for Historical Building Retrofit Based on Energy and Cost Analysis in the Different Climatic Zones

Authors: Selin Guleroglu, Ilker Kahraman, E. Selahattin Umdu

Abstract:

In today’s world, the building sector has a significant impact on primary energy consumption and CO₂ emissions. While new buildings must have high energy performance, as indicated by the Energy Performance of Buildings Directive (EPBD) published by the European Union (EU), the energy performance of existing buildings must also be enhanced with cost-efficient methods. Turkey has a high historical building density similar to that of southern European countries, and the high energy consumption of these buildings is a main contributor to Turkey's overall energy consumption, which is rather higher than that of its European counterparts. Historic buildings are spread across Turkey's four main climate zones, which cover climate characteristics very similar to those of both northern and southern European countries. The case study building is the most common building type in Turkey. This study aims to investigate energy retrofit measures, covering but not limited to passive and active measures, to improve the energy performance of historical buildings located in different climatic zones, within the limits of preserving the historical value of the building as a crucial constraint. Passive measures include wall, window, and roof construction elements, and active measures include the HVAC systems in the retrofit scenarios. The proposed methodology can help to reach up to 30% energy savings based on primary energy consumption. DesignBuilder, an energy simulation tool, is used to determine the energy performance of the buildings with the suggested retrofit measures, and the Net Present Value (NPV) method is used for their cost analysis. Finally, the most efficient energy retrofit measures for all buildings are determined by analyzing their primary energy consumption and cost performance. Results show that heat insulation, glazing type, and the HVAC system play an important role in energy saving. It is also found that these parameters can have a positive or negative effect on building energy consumption depending on the climate zone. 
For instance, low-e glazing has a positive impact on the energy performance of the building in the first zone, while it has a negative effect on the building in the fourth zone. Another important result is that applying heat insulation has minimal impact on building energy performance in some zones compared to others.
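The NPV ranking of retrofit measures described above can be sketched as discounted energy-cost savings minus the up-front investment. The formula is standard; the figures below are illustrative assumptions, not values from the study:

```python
def npv(investment, annual_saving, years, discount_rate):
    """Net Present Value of a retrofit measure: the sum of discounted
    annual energy-cost savings minus the up-front investment."""
    discounted = sum(
        annual_saving / (1 + discount_rate) ** t for t in range(1, years + 1)
    )
    return discounted - investment

# Hypothetical measure: a 1,000-unit insulation investment saving
# 300 per year in energy costs over 5 years at a 5% discount rate.
value = npv(investment=1_000, annual_saving=300, years=5, discount_rate=0.05)
```

A positive NPV means the discounted savings outweigh the investment; comparing NPVs across measures and climate zones gives the cost ranking the study uses.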

Keywords: energy performance, climatic zones, historic building, energy retrofit measures, NPV

Procedia PDF Downloads 174
279 Sea of Light: A Game-Based Approach for Evidence-Centered Assessment of Collaborative Problem Solving

Authors: Svenja Pieritz, Jakab Pilaszanovich

Abstract:

Collaborative Problem Solving (CPS) is recognized as one of the most important skills of the 21st century, with a potential impact on education, job selection, and collaborative systems design. Therefore, CPS has been adopted in several standardized tests, including the Programme for International Student Assessment (PISA) in 2015. A significant challenge in evaluating CPS is the underlying interplay of cognitive and social skills, which requires a more holistic assessment. However, the majority of existing tests use a questionnaire-based assessment, which oversimplifies this interplay and undermines ecological validity. Two major difficulties were identified: firstly, the creation of a controllable, real-time environment allowing natural behaviors and communication between at least two people; secondly, the development of an appropriate method to collect and synthesize both cognitive and social metrics of collaboration. This paper proposes a more holistic and automated approach to the assessment of CPS. To address these two difficulties, a multiplayer problem-solving game called Sea of Light was developed: an environment allowing students to deploy a variety of measurable collaborative strategies. This controlled environment enables researchers to monitor behavior through the analysis of game actions and chat. The corresponding statistical model is a combined approach of Natural Language Processing (NLP) and Bayesian network analysis. Social exchanges via the in-game chat are analyzed through NLP and fed into the Bayesian network along with other game actions. This Bayesian network synthesizes evidence to track and update different subdimensions of CPS. Major findings focus on the correlations between the evidence collected through in-game actions, the participants’ chat features, and the CPS self-evaluation metrics. These results give an indication of which game mechanics can best describe CPS evaluation. 
Overall, Sea of Light gives test administrators control over different problem-solving scenarios and difficulties while keeping the student engaged. It enables a more complete assessment based on complex, socio-cognitive information on actions and communication. This tool permits further investigations of the effects of group constellations and personality in collaborative problem-solving.
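The evidence-tracking step the abstract describes reduces, for a single binary CPS subdimension and one observation, to a Bayesian update. The sketch below is a deliberately minimal single-node illustration of that mechanism, not the paper's full network, and all probabilities are hypothetical:

```python
def posterior(prior, likelihood_if_skilled, likelihood_if_unskilled):
    """One Bayesian update of a binary CPS subskill belief, given one
    piece of observed evidence (a game action or an NLP chat feature)."""
    joint_skilled = prior * likelihood_if_skilled
    joint_unskilled = (1 - prior) * likelihood_if_unskilled
    return joint_skilled / (joint_skilled + joint_unskilled)

# Hypothetical numbers: a cooperative chat message is observed, assumed
# 80% likely from a strong collaborator and 30% likely otherwise,
# starting from a neutral 0.5 prior.
belief = posterior(0.5, 0.8, 0.3)  # belief rises above the prior
```

In the full system, each such posterior becomes the prior for the next game action or chat feature, so the subdimension estimates are updated continuously during play.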

Keywords: bayesian network, collaborative problem solving, game-based assessment, natural language processing

Procedia PDF Downloads 132
278 Chinese Early Childhood Parenting Style as a Moderator of the Development of Social Competence Based on Mindreading

Authors: Arkadiusz Gut, Joanna Afek

Abstract:

The first issue that we discuss in this paper is a battery of research demonstrating that culture influences children’s performance in tasks testing their theory of mind, also known as mindreading. We devote special attention to research done within Chinese culture; namely, studies with children speaking Cantonese and Mandarin natively and growing up in an environment dominated by the Chinese model of informal home education. Our attention focuses on the differences in the development and functioning of social abilities and competences between children from China and the West. Another matter we turn to is the description of the nature of Chinese early childhood education. We suggest that the differences between the Chinese model and that of the West reveal a set of modifiers responsible for the variation observed in empirical research on children’s theory of mind (mindreading). The modifiers we identify are the following: (1) early socialization – that is, the transformation of the child into a member of a family and society that sets special value on the social and physical environment; (2) the Confucian model of education – that is, the Chinese writing system and tradition that determine a certain way of education in China; (3) the authoritarian style of upbringing – that is, reinforcing conformism, discouraging the voicing of private opinions, and respect for elders; (4) the modesty of children and the protectiveness of parents – that is, obedience as a desired characteristic in the child and the overprotectiveness of parents, especially mothers; and (5) gender differences – that is, different educational styles for girls and boys. In our study, we conduct a thorough meta-analysis of empirical data on the development of mindreading and ToM (children’s theory of mind), as well as a cultural analysis of early childhood education in China. 
We support our analyses with questionnaire and narrative studies conducted in China that use the ‘Children’s Social Understanding Scale’ questionnaire, conversations based on the so-called ‘Scenarios Presented to Parents’, and questions designed to measure the ‘my child and I’ relation. With our research, we aim to identify the factors in early childhood education that serve as moderators explaining the nature of the development and functioning of social cognition based on mindreading in China. Additionally, our study provides valuable insights for comparative research on social cognition between China and the West.

Keywords: early childhood education, China, mindreading, parenting

Procedia PDF Downloads 386
277 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. 
It underscores the need for responsible and transparent practices in data analytics and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract traces the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics, showcasing their practical applications, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.
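As a minimal illustration of how one of the statistical modeling techniques named in this abstract turns a mathematical optimization into a data-analytics tool, the following sketch (not from the paper; the data points are hypothetical) fits a least-squares regression line y = a + b·x using the closed-form solution that minimizes squared error:

```python
# Illustrative sketch: ordinary least-squares fit of y = a + b*x
# via the closed-form solution (minimizes the sum of squared residuals).
def fit_line(xs, ys):
    """Return (intercept a, slope b) of the least-squares line through the points."""
    n = len(xs)
    mx = sum(xs) / n                      # mean of x
    my = sum(ys) / n                      # mean of y
    # slope = covariance(x, y) / variance(x)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx                       # intercept from the means
    return a, b

# Hypothetical data chosen for illustration only.
a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
print(f"intercept={a:.2f}, slope={b:.2f}")  # slope close to 2, intercept close to 0
```

The same pattern (state an objective, derive or apply its optimizer, read insights off the fitted parameters) underlies the more elaborate machine learning algorithms the abstract mentions.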

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 93
276 Metal Contaminants in River Water and Human Urine after an Episode of Major Pollution by Mining Wastes in the Kasai Province of DR Congo

Authors: Remy Mpulumba Badiambile, Paul Musa Obadia, Malick Useni Mutayo, Jeef Numbi Mukanya, Patient Nkulu Banza, Tony Kayembe Kitenge, Erik Smolders, Jean-François Picron, Vincent Haufroid, Célestin Banza Lubaba Nkulu, Benoit Nemery

Abstract:

Background: In July 2021, the Tshikapa river became heavily polluted by mining wastes from a diamond mine in neighboring Angola, leading to a massive fish kill, as well as disease and even deaths among residents living along the Tshikapa and Kasai rivers, a major tributary of the Congo River. The exact nature of the pollutants was unknown. Methods: In a cross-sectional study conducted in the city of Tshikapa in August 2021, we enrolled by opportunistic sampling 65 residents (11 children < 16y) living alongside the polluted rivers and 65 control residents (5 children) living alongside a non-affected portion of the Kasai river (upstream from the Tshikapa-Kasai confluence). We administered a questionnaire and obtained spot urine samples for measurements of thiocyanate (a metabolite of cyanide) and 26 trace metals (by ICP-MS). Metals (and pH) were also measured in samples of river water. Results: Participants from both groups consumed river water. In the area affected by the pollution, most participants had eaten dead fish. Prevalences of reported health symptoms were higher in the exposed group than among controls: skin rashes (52% vs 0%), diarrhea (40% vs 8%), abdominal pain (8% vs 3%), nausea (3% vs 0%). In polluted water, only the concentrations [median (range)] of nickel [2.2 (1.4–3.5) µg/L] and uranium [78 (71–91) ng/L] were higher than in non-polluted water [0.8 (0.6–1.9) µg/L and 9 (7–19) ng/L, respectively]. In urine, concentrations [µg/g creatinine, median (IQR)] were significantly higher in the exposed group than in controls for lithium [19.5 (12.4–27.3) vs 6.9 (5.9–12.1)], thallium [0.41 (0.31–0.57) vs 0.19 (0.16–0.39)], and uranium [0.026 (0.013–0.037) vs 0.012 (0.006–0.024)]. Other elements did not differ between the groups, but levels were higher than reference values for several metals (including manganese, cobalt, nickel, and lead). Urinary thiocyanate concentrations did not differ. 
Conclusion: This study, after an ecological disaster in the DRC, has documented contamination of river water by nickel and uranium and high urinary levels of some trace metals among affected riverine populations. However, the exact cause of the massive fish kill and disease among residents remains elusive. The capacity to rapidly investigate toxic pollution events must be increased in the area.
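The group comparisons above are reported as median (IQR) per metal. A minimal sketch of that summary step, using hypothetical urinary lithium values (µg/g creatinine) for illustration only, not the study's actual data:

```python
# Illustrative sketch: summarizing urinary metal concentrations per group
# as median and interquartile range (IQR), as reported in the abstract.
import statistics

def median_iqr(values):
    """Return (median, Q1, Q3) for a list of concentrations (ug/g creatinine)."""
    q1, med, q3 = statistics.quantiles(values, n=4)  # default 'exclusive' method
    return round(med, 2), round(q1, 2), round(q3, 2)

# Hypothetical values chosen for illustration only.
exposed = [12.4, 15.0, 18.0, 19.5, 21.0, 25.0, 27.3]
control = [5.9, 6.2, 6.7, 6.9, 8.0, 10.5, 12.1]

for label, group in [("exposed", exposed), ("control", control)]:
    med, q1, q3 = median_iqr(group)
    print(f"{label}: median {med}, IQR ({q1}-{q3})")
```

A formal comparison between groups would additionally require a nonparametric test on the raw values; the sketch covers only the descriptive summary.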

Keywords: metal contaminants, river water and human urine, pollution by mining wastes, DR Congo

Procedia PDF Downloads 157
275 Analysis of the Introduction of Carsharing in the Context of Developing Countries: A Case Study Based on On-Board Carsharing Survey in Kabul, Afghanistan

Authors: Mustafa Rezazada, Takuya Maruyama

Abstract:

Cars have been strongly integrated into human life since their introduction, and this interaction is most evident in the urban context. Shifting city residents from driving private vehicles to public transit has therefore been a major challenge. Carsharing, as an innovative, environmentally friendly transport alternative, has made a significant contribution to this transition so far: it has helped reduce household car ownership, demand for on-street parking, and the number of kilometers traveled by car, and it shapes the future of mobility by decreasing greenhouse gas (GHG) emissions and the number of new cars that would otherwise be purchased. However, the majority of carsharing research has been conducted in highly developed cities, and less attention has been paid to the cities of developing countries. This study is conducted in Kabul, the capital of Afghanistan, to investigate the current transport pattern and user behavior and to examine the possibility of introducing a carsharing system. The study establishes a new survey method called the Onboard Carsharing Survey (OCS), in which carpooling passengers are interviewed aboard following the Onboard Transit Survey (OTS) guideline with a few refinements. The survey focuses on respondents’ daily travel behavior and hypothetical stated choices of carsharing opportunities, and it is followed by an aggregate analysis. The survey results indicate the following: 62% of the respondents have been carpooling every day for five years or more; more than half of the respondents are not satisfied with current modes; and, among other attributes, traffic congestion, the environment, and insufficient public transport were ranked the most critical in daily transportation by survey participants. Moreover, 68.24% of the respondents chose carsharing over carpooling under different choice-game scenarios. 
Overall, the findings in this research show that Kabul City is potentially fertile ground for the introduction of carsharing in the future. Taken together, insufficient public transit, dissatisfaction with current modes, and respondents’ stated interest should affect the future of carsharing positively in Kabul City. The modal choice in this study is limited to carpooling and carsharing; more choice sets, including bus, cycling, and walking, will have to be added in further evaluations.
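The aggregate analysis the abstract describes boils down to tabulating stated choices into modal shares. A minimal sketch (the response counts below are hypothetical, chosen only so the carsharing share matches the reported 68.24%):

```python
# Illustrative sketch: aggregating stated-choice survey answers into modal shares.
from collections import Counter

def modal_shares(responses):
    """Return each chosen mode's share as a percentage of all responses."""
    counts = Counter(responses)
    total = len(responses)
    return {mode: round(100 * n / total, 2) for mode, n in counts.items()}

# Hypothetical stated choices under one choice-game scenario.
choices = ["carsharing"] * 58 + ["carpooling"] * 27
print(modal_shares(choices))
```

Extending the choice set to bus, cycling, and walking, as the abstract suggests for future work, requires no change to this aggregation step, only richer response data.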

Keywords: carsharing, developing countries, Kabul Afghanistan, onboard carsharing survey, transportation, urban planning

Procedia PDF Downloads 135
274 Define Immersive Need Level for Optimal Adoption of Virtual Words with BIM Methodology

Authors: Simone Balin, Cecilia M. Bolognesi, Paolo Borin

Abstract:

In the construction industry, there is a large amount of data and interconnected information. To manage this information effectively, a transition to the immersive digitization of information processes is required. This transition is important for improving knowledge circulation, product quality, production sustainability, and user satisfaction. However, there is currently no common definition of immersion in the construction industry, which leads to misunderstandings and limits the use of advanced immersive technologies. Furthermore, the lack of guidelines and a common vocabulary causes interested actors to abandon the virtual world after the first collaborative steps. This research aims to define the optimal use of immersive technologies in the AEC sector, particularly for collaborative processes based on the BIM methodology. Additionally, the research focuses on creating classes and levels to structure and define guidelines and a vocabulary for the use of the "Immersive Need Level." This concept, which has matured with recent technological advancements, aims to enable a broader application of state-of-the-art immersive technologies, avoiding misunderstandings, redundancies, or paradoxes. While the concept of the "Informational Need Level" has been well clarified by the recent UNI EN 17412-1:2021 standard, when it comes to immersion, current regulations and literature only provide some hints about the technology and related equipment, leaving the procedural approach entirely to the user's free interpretation. Therefore, once the necessary knowledge and information are acquired (Informational Need Level), it is possible to transition to an Immersive Need Level that involves the practical application of the acquired knowledge, exploring scenarios and solutions in a more thorough and detailed manner, with user involvement, at different immersion scales, in the design, construction, or management process of a building or infrastructure. 
The need for information constitutes the basis for acquiring relevant knowledge, while the immersive need can manifest itself later, once a solid information base has been established, engaging the senses and developing immersive awareness. This new approach could overcome the inertia among AEC industry players in adopting and experimenting with new immersive technologies, expanding collaborative iterations and the range of available options.

Keywords: AEC industry, immersive technology (IMT), virtual reality, augmented reality, building information modeling (BIM), decision making, collaborative process, information need level, immersive need level

Procedia PDF Downloads 99
273 Evaluation of Washing Performance of Household Wastewater Purified by Advanced Oxidation Process

Authors: Nazlı Çetindağ, Pelin Yılmaz Çetiner, Metin Mert İlgün, Emine Birci, Gizemnur Yıldız Uysal, Özcan Hatipoğlu, Ehsan Tuzcuoğlu, Gökhan Sır

Abstract:

Water conservation, the efficient management of household water, and the development of alternative solutions are of growing importance. In this context, advanced treatment technologies such as the advanced oxidation process have emerged as promising methods for treating household wastewater. Evaluating household water usage is critical for the sustainability of water resources, and researchers and experts are examining various technological approaches to effectively treat and reclaim water for reuse. In this framework, the advanced oxidation process has proven to be an effective method for removing various organic and inorganic pollutants from household wastewater. In this study, washing performance is evaluated against a reference case using an international test criterion that simulates the washing of home textile products and determines various performance parameters. The specially designed stain strips used in the experiments, including sebum, carbon black, blood, cocoa, and red wine, represent various household stains. These stain types were carefully selected to represent challenging stain scenarios, ensuring a realistic assessment of washing performance. Experiments conducted under different temperatures and program conditions successfully demonstrate the practical applicability of the advanced oxidation process for treating household wastewater. It is important to note that both adherence to standards and the use of real-life stain types contribute to the broad applicability of the findings. In conclusion, this study strongly supports the effectiveness of treating household wastewater with the advanced oxidation process in terms of washing performance under both standard and practical application conditions. 
The study underlines the importance of alternative solutions for sustainable water resource management and highlights the potential of the advanced oxidation process in the treatment of household water, contributing significantly to optimizing water usage and developing sustainable water management solutions.

Keywords: advanced oxidation process, household water usage, household appliance wastewater, modelling, water reuse

Procedia PDF Downloads 65
272 Assessing Local Authorities’ Interest in Addressing Urban Challenges through Nature Based Solutions in Romania

Authors: Athanasios A. Gavrilidis, Mihai R. Nita, Larissa N. Stoia, Diana A. Onose

Abstract:

Contemporary global environmental challenges must be primarily addressed at local levels. Cities are under continuous pressure, as they must ensure a high quality of life for their citizens while adapting to and addressing specific environmental issues. Innovative solutions that use natural features or mimic natural systems are endorsed by the scientific community as efficient approaches both for mitigating the effects of climate change and declining environmental quality and for maintaining high standards of living for urban dwellers. The aim of this study was to assess whether Romanian city authorities are considering nature-based innovations as solutions for their planning, management, and environmental issues. Data were gathered by applying 140 questionnaires to urban authorities throughout the country. The questionnaire was designed to assess local policy makers’ perspectives on the efficiency of nature-based innovations as a tool for addressing specific challenges. It also focused on extracting data about financing sources and the challenges that must be overcome to adopt nature-based approaches. The gathered results from the municipalities participating in our study were statistically processed, and they revealed that Romanian city managers acknowledge the benefits of nature-based innovations, but investments in this sector are not among their top priorities. More than 90% of the selected cities agreed that in the last 10 years, their major concern was expanding the grey infrastructure (roads and public amenities) using traditional approaches. When asked how they would react if faced with different socio-economic and environmental challenges, local urban managers indicated investments in nature-based solutions as a priority only in the case of biodiversity loss and extreme weather, while for the other 14 proposed scenarios, they would embrace the business-as-usual approach. 
Our study indicates that while new concepts of sustainable urban planning emerge within the scientific community, local authorities need more time to understand and implement them. Without the proper knowledge, personnel, policies, or dedicated budgets, local administrators will not embrace nature-based innovations as solutions for their challenges.

Keywords: nature-based innovations, perception analysis, policy making, urban planning

Procedia PDF Downloads 174