Search results for: hardware in loop testing
60 Seroprevalence of Middle East Respiratory Syndrome Coronavirus (MERS-Cov) Infection among Healthy and High Risk Individuals in Qatar
Authors: Raham El-Kahlout, Hadi Yassin, Asmaa Athani, Marwan Abou Madi, Gheyath Nasrallah
Abstract:
Background: Since its first isolation in September 2012, Middle East respiratory syndrome coronavirus (MERS-CoV) has spread across 27 countries, infecting more than two thousand individuals with a high case fatality rate. MERS-CoV-specific antibodies are widely found in dromedary camels, and shedding of similar viruses has been detected in humans in the same regions, suggesting that camels may play a central role in MERS epidemiology. Interestingly, MERS-CoV infection has also been reported to be asymptomatic or to cause mild, influenza-like illness. Therefore, in a country like Qatar (bordering Saudi Arabia), where camels are widespread, serological surveys are important to explore the role of camels in MERS-CoV transmission. However, widespread strategic serological surveillance of MERS-CoV among populations, particularly in endemic countries, is infrequent. In the absence of a clear epidemiological picture, cross-sectional MERS antibody surveys in human populations are of global concern. Method: We performed comparative serological screening of 4,719 healthy blood donors, 135 baseline case contacts (high-risk individuals), and four MERS patients confirmed by PCR for the presence of anti-MERS IgG. Initially, samples were screened using the Euroimmun anti-MERS-CoV IgG ELISA kit, the only commercial kit available on the market and recommended by the CDC as a screening kit. To confirm the ELISA results, further serological testing was performed on all borderline and positive samples using two assays: the Euroimmun anti-MERS-CoV IgG and IgM indirect immunofluorescence test (IIFT) and a pseudoviral particle neutralization assay (PPNA). Additionally, to test the cross-reactivity of anti-MERS-CoV antibodies with other members of the coronavirus family, borderline and positive samples were tested for IgG antibodies against SARS, HCoV-229E, and HKU1, using the Euroimmun IIFT for SARS and HCoV-229E and ELISA for HKU1. Results: Of all 4,858 screened samples, 15 [10 donors (0.21%, 10/4719), 1 case contact (0.77%, 1/130), and 3 patients (75%, 3/4)] were reactive or borderline for anti-MERS IgG by ELISA. However, only 7 (0.14%) of these were positive by IIFT, and only 3 (0.06%) were confirmed by the specific anti-MERS PPNA. One interesting finding was that a donor, selected for the control group on the basis of a negative anti-MERS IgG ELISA, was reactive for anti-MERS IgM by IIFT and was confirmed by PPNA. Furthermore, our preliminary results showed strong cross-reactivity of anti-MERS-CoV IgG with both anti-HCoV-229E and anti-HKU1 IgG, whereas no cross-reactivity with SARS was found. Conclusions: Our findings suggest that MERS-CoV is not heavily circulating among the population of Qatar, which is also indicated by the low number of confirmed cases (only 18) since 2012. Additionally, the presence of antibodies against other pathogenic human coronaviruses may cause false-positive results in both ELISA and IIFT, which stresses the need for further evaluation studies of the available serological assays. In summary, this study provides insight into the epidemiology of MERS-CoV in the Qatari population, as well as a performance evaluation of the available serologic tests for MERS-CoV in view of serologic status to other human coronaviruses. Keywords: seroprevalence, MERS-CoV, healthy individuals, Qatar
Procedia PDF Downloads 269
59 Skin-to-Skin Contact Simulation: Improving Health Outcomes for Medically Fragile Newborns in the Neonatal Intensive Care Unit
Authors: Gabriella Zarlenga, Martha L. Hall
Abstract:
Introduction: Premature infants are at risk for neurodevelopmental deficits and hospital readmissions, which can increase the financial burden on the health care system and families. Kangaroo care (skin-to-skin contact) is a practice that can improve preterm infant health outcomes. Preterm infants can achieve adequate body temperature, heartbeat, and breathing regulation by lying directly on the mother's abdomen and between her breasts. For some infants, however, their medical condition makes kangaroo care unfeasible. The purpose of this proof-of-concept research project is to create a device that simulates skin-to-skin contact for preterm infants not eligible for kangaroo care, with the aim of promoting the baby's health outcomes, reducing the incidence of serious neonatal and early childhood illnesses, and/or improving cognitive, social, and emotional aspects of development. Methods: The study design is a proof-of-concept based on a three-phase approach: (1) observational study and data analysis of the standard of care for two groups of preterm infants, (2) design and concept development of a novel device for preterm infants not currently eligible for standard kangaroo care, and (3) prototyping, laboratory testing, and evaluation of the novel device against current assessment parameters of kangaroo care. A single-center study will be conducted in an area hospital offering Level III neonatal intensive care. Eligible participants include newborns born premature (28-30 weeks of age) admitted to the NICU. The study design includes two groups: a control group receiving standard kangaroo care and an experimental group not eligible for kangaroo care. Based on behavioral analysis of observational video data collected in the NICU, the device will be created to simulate the mother's body using electrical components in a thermoplastic polymer housing covered in silicone. It will be designed with a microprocessor that controls simulated respiration, heartbeat, and body temperature of the 'simulated caregiver' by using a pneumatic lung, vibration sensors (heartbeat), pressure sensors (weight/position), and resistive film to measure temperature. A slight contour of the simulator surface may be integrated to help position the infant correctly. Control and monitoring of the skin-to-skin contact simulator would be performed locally via an integrated touchscreen. The unit would have built-in Wi-Fi connectivity as well as an optional Bluetooth connection through which the respiration and heart rate could be synced with a parent or caregiver. A camera would be integrated, allowing a video stream of the infant in the simulator to be sent to a monitoring location. Findings: Expected outcomes are stabilization of respiratory and cardiac rates and thermoregulation for infants not eligible for skin-to-skin contact with their mothers, with real-time Bluetooth syncing of the mother's respiration and heart rate to the device to mimic the experience in the womb. Results of this study will benefit clinical practice by creating a new standard of care for premature neonates in the NICU who are deprived of skin-to-skin contact due to various health restrictions. Keywords: kangaroo care, wearable technology, pre-term infants, medical design
Procedia PDF Downloads 156
58 A Randomized, Controlled Trial To Test Behavior Change Techniques (BCTS) To Improve Low Intensity Physical Activity In Older Adults
Authors: Ciaran Friel, Jerry Suls, Patrick Robles, Frank Vicari, Joan Duer-Hefele, Karina W. Davidson
Abstract:
Physical activity guidelines focus on increasing moderate intensity activity for older adults, but adherence to recommendations remains low. This is despite the fact that scientific evidence supports that any increase in physical activity is positively correlated with health benefits. Behavior change techniques (BCTs) have demonstrated effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a Personalized Trials (N-of-1) design to evaluate the efficacy of using four BCTs to promote an increase in low-intensity physical activity (2,000 steps of walking per day) in adults aged 45-75 years old. The 4 BCTs tested were goal setting, action planning, feedback, and self-monitoring. BCTs were tested in random order and delivered by text message prompts requiring participant response. The study recruited health system employees in the target age range, without mobility restrictions and demonstrating interest in increasing their daily activity by a minimum of 2,000 steps per day for a minimum of five days per week. Participants were sent a Fitbit Charge 4 fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7, but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by the Fitbit for two weeks. Participants then engaged with a clinical research coordinator to review comprehension of the text message content and required actions for each of the BCTs to be tested. Participants then selected a consistent daily time in which they would receive their text message prompt. In the 8 week intervention phase of the study, participants received each of the four BCTs, in random order, for a two week period. Text message prompts were delivered daily at a time selected by the participant. All prompts required an interactive response from participants and may have included recording their detailed plan for walking or daily step goal (action planning, goal setting). Additionally, participants may have been directed to a study dashboard to view their step counts or compare themselves with peers (self-monitoring, feedback). At the end of each two week testing interval, participants were asked to complete the Self-Efficacy for Walking Scale (SEW_Dur), a validated measure that assesses the participant’s confidence in walking incremental distances and a survey measuring their satisfaction with the individual BCT that they tested. At the end of their trial, participants received a personalized summary of their step data in response to each individual BCT. Analysis will examine the novel individual-level heterogeneity of treatment effect made possible by N-of-1 design, and pool results across participants to efficiently estimate the overall efficacy of the selected behavioral change techniques in increasing low-intensity walking by 2,000 steps, 5 days per week. Self-efficacy will be explored as the likely mechanism of action prompting behavior change. This study will inform the providers and demonstrate the feasibility of N-of-1 study design to effectively promote physical activity as a component of healthy aging.Keywords: aging, exercise, habit, walking
Procedia PDF Downloads 129
57 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, which is still underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system, is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability for the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulates the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record heave and pitch amplitudes of the floating system’s motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system’s performance in real time. The data collected from various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In long term, this digital twin will help improve overall solar energy yield whilst minimising the operational costs and risks.Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
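To illustrate the kind of AI-driven digital model described above, the sketch below trains a small feed-forward neural network to predict panel power output from sensor readings, with part of the data held back for validation as in the described workflow. It is a minimal illustration only: the feature set, synthetic data, and scikit-learn model are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a small feed-forward ANN that
# maps sensor readings from a floating-solar experiment to predicted power output.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000

# Stand-in sensor data; in the real digital twin these would come from the
# irradiance sensor, temperature sensors, wave gauges and inclinometers.
X = np.column_stack([
    rng.uniform(0, 1000, n),    # irradiance [W/m^2]
    rng.uniform(15, 35, n),     # panel temperature [degC]
    rng.uniform(0, 0.1, n),     # wave height [m]
    rng.uniform(-5, 5, n),      # pitch amplitude [deg]
])
# Toy target: power rises with irradiance, drops slightly with temperature and motion.
y = (0.18 * X[:, 0] - 0.5 * (X[:, 1] - 25) - 20 * X[:, 2]
     - 0.3 * np.abs(X[:, 3]) + rng.normal(0, 2, n))

# Part of the data trains the model; the rest is reserved for validation/testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print(f"validation MAE: {mean_absolute_error(y_test, pred):.2f} W")
```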
Procedia PDF Downloads 7
56 National Core Indicators - Aging and Disabilities: A Person-Centered Approach to Understanding Quality of Long-Term Services and Supports
Authors: Stephanie Giordano, Rosa Plasencia
Abstract:
In the USA, in 2013, public service systems such as Medicaid, aging, and disability systems undertook an effort to measure the quality of service delivery by examining the experiences and outcomes of those receiving public services. The goal of this effort was to develop a survey to measure the experiences and outcomes of those receiving public services, with the goal of measuring system performance for quality improvement. The performance indicators were developed through with input from directors of state aging and disability service systems, along with experts and stakeholders in the field across the United States. This effort, National Core Indicators –Aging and Disabilities (NCI-AD), grew out of National Core Indicators –Intellectual and Developmental Disabilities, an effort to measure developmental disability (DD) systems across the States. The survey tool and administration protocol underwent multiple rounds of testing and revision between 2013 and 2015. The measures in the final tool – called the Adult Consumer Survey (ACS) – emphasize not just important indicators of healthcare access and personal safety but also includes indicators of system quality based on person-centered outcomes. These measures indicate whether service systems support older adults and people with disabilities to live where they want, maintain relationships and engage in their communities and have choice and control in their everyday lives. Launched in 2015, the NCI-AD Adult Consumer Survey is now used in 23 states in the US. Surveys are conducted by NCI-AD trained surveyors via direct conversation with a person receiving public long-term services and supports (LTSS). Until 2020, surveys were only conducted in person. However, after a pilot to test the reliability of videoconference and telephone survey modes, these modes were adopted as an acceptable practice. The nature of the survey is that of a “guided conversation” survey administration allows for surveyor to use wording and terminology that is best understand by the person surveyed. The survey includes a subset of questions that may be answered by a proxy respondent who knows the person well if the person is receiving services in unable to provide valid responses on their own. Surveyors undergo a standardized training on survey administration to ensure the fidelity of survey administration. In addition to the main survey section, a Background Information section collects data on personal and service-related characteristics of the person receiving services; these data are typically collected through state administrative record. This information is helps provide greater context around the characteristics of people receiving services. It has also been used in conjunction with outcomes measures to look at disparity (including by race and ethnicity, gender, disability, and living arrangements). These measures of quality are critical for public service delivery systems to understand the unique needs of the population of older adults and improving the lives of older adults as well as people with disabilities. Participating states may use these data to identify areas for quality improvement within their service delivery systems, to advocate for specific policy change, and to better understand the experiences of specific populations of people served.Keywords: quality of life, long term services and supports, person-centered practices, aging and disability research, survey methodology
Procedia PDF Downloads 120
55 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, social and economic activities. Physic-morphological features of the built-up space affect urban air temperature, locally, causing the urban environment to be warmer compared to surrounding rural. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions of an interpolated point. Quantifying local UHI for extensive areas based on weather stations’ observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature, from fixed weather stations and satellite-derived LST. The approach is structured upon two main steps. First, a GWR model has been set to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data, from Terra satellite, at 1 kilometer of spatial resolution have been employed. Two time periods are considered according to satellite revisit period, i.e. 10:30 am and 9:30 pm. Afterward, the results have been downscaled at 30 meters of spatial resolution by setting a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information as provided by the Landsat mission, in particular the albedo, and Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low- (1 km) and high-resolution (30 meters), have been validated according to a cross-validation that relies on indicators such as R2, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving and RMSE of 0.6 and 0.7 for daytime and night-time, respectively.Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
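The following sketch illustrates the core of a geographically weighted regression: air temperature at a target pixel is estimated by a locally weighted least-squares fit of station temperatures against LST, with weights that decay with distance. Station coordinates, LST values, and the kernel bandwidth are synthetic placeholders, not the study's data or calibrated bandwidth.

```python
# Minimal GWR sketch: at each target location, air temperature is regressed on
# LST using Gaussian-kernel distance weights (local weighted least squares).
import numpy as np

def gwr_predict(xy_stations, t_air, lst_stations, xy_target, lst_target, bandwidth):
    """Estimate near-surface air temperature at one target location."""
    d = np.linalg.norm(xy_stations - xy_target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)               # Gaussian kernel weights
    X = np.column_stack([np.ones_like(lst_stations), lst_stations])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ t_air)  # weighted least squares
    return beta[0] + beta[1] * lst_target

# Toy example: 50 stations scattered over a 50 km x 50 km domain.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 50_000, size=(50, 2))                 # station coordinates [m]
lst = rng.uniform(290, 310, size=50)                      # LST at stations [K]
t_air = 0.8 * lst + 55 + rng.normal(0, 0.5, size=50)      # observed air temperature [K]

t_hat = gwr_predict(xy, t_air, lst, xy_target=np.array([25_000, 25_000]),
                    lst_target=300.0, bandwidth=10_000)
print(f"estimated NSAT: {t_hat:.1f} K")
```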
Procedia PDF Downloads 194
54 Effect of Thermal Treatment on Mechanical Properties of Reduced Activation Ferritic/Martensitic Eurofer Steel Grade
Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma
Abstract:
Reduced activation ferritic/martensitic (RAFM) steels like EUROFER97 are primary candidate structural materials for first wall application in the future demonstration (DEMO) fusion reactor. Existing steels of this type obtain their functional properties by a two-stage heat treatment, which consists of an annealing stage at 980°C for thirty minutes followed by quenching and an additional tempering stage at 750°C for two hours. This thermal quench and temper (Q&T) treatment creates a microstructure of tempered martensite with, as main precipitates, M23C6 carbides, with M = Fe, Cr and carbonitrides of MX type, e.g. TaC and VN. The resulting microstructure determines the mechanical properties of the steel. The ductility is largely determined by the tempered martensite matrix, while the resistance to mechanical degradation, determined by the spatial and size distribution of precipitates and the martensite crystals, plays a key role in the high temperature properties of the steel. Unfortunately, the high temperature response of EUROFER97 is currently insufficient for long term use in fusion reactors, due to instability of the matrix phase and coarsening of the precipitates at prolonged high temperature exposure. The objective of this study is to induce grain refinement by appropriate modifications of the processing route in order to increase the high temperature strength of a lab-cast EUROFER RAFM steel grade. The goal of the work is to obtain improved mechanical behavior at elevated temperatures with respect to conventionally heat treated EUROFER97. A dilatometric study was conducted to study the effect of the annealing temperature on the mechanical properties after a Q&T treatment. The microstructural features were investigated with scanning electron microscopy (SEM), electron back-scattered diffraction (EBSD) and transmission electron microscopy (TEM). Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the mechanical properties of the furnace-heated lab-cast EUROFER RAFM steel grade. A significant prior austenite grain (PAG) refinement was obtained by lowering the annealing temperature of the conventionally used Q&T treatment for EUROFER97. The reduction of the PAG results in finer martensitic constituents upon quenching, which offers more nucleation sites for carbide and carbonitride formation upon tempering. The ductile-to-brittle transition temperature (DBTT) was found to decrease with decreasing martensitic block size. Additionally, an increased resistance against high temperature degradation was accomplished in the fine grained martensitic materials with smallest precipitates obtained by tailoring the annealing temperature of the Q&T treatment. It is concluded that the microstructural refinement has a pronounced effect on the DBTT without significant loss of strength and ductility. Further investigation into the optimization of the processing route is recommended to improve the mechanical behavior of RAFM steels at elevated temperatures.Keywords: ductile-to-brittle transition temperature (DBTT), EUROFER, reduced activation ferritic/martensitic (RAFM) steels, thermal treatments
Procedia PDF Downloads 299
53 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced from the heating process. Particle size distribution (PSD) and associated velocities generated by e-cigarettes have significant influence on aerosol deposition in different regions of human respiratory tracts. On another note, low actuation power is beneficial in aerosol generating devices since it exhibits a reduced emission of toxic chemicals. In case of e-cigarettes, lower heating powers can be considered as powers lower than 10 W compared to a wide range of powers (0.6 to 70.0 W) studied in literature. Due to the importance regarding inhalation risk reduction, deeper understanding of particle size characteristics of e-cigarettes demands thorough investigation. However, comprehensive study on PSD and velocities of e-cigarettes with a standard testing condition at relatively low heating powers is still lacking. The present study aims to measure particle number count and size distribution of undiluted aerosols of a latest fourth-generation e-cigarette at low powers, within 6.5 W using real-time particle counter (time-of-flight method). Also, temporal and spatial evolution of particle size and velocity distribution of aerosol jets are examined using phase Doppler anemometry (PDA) technique. To the authors’ best knowledge, application of PDA in e-cigarette aerosol measurement is rarely reported. In the present study, preliminary results about particle number count of undiluted aerosols measured by time-of-flight method depicted that an increase of heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetricity in PSD, deviating from log-normal distribution. This can be considered as an artifact of rapid vaporization, condensation and coagulation processes on aerosols caused by higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, was proposed to describe asymmetric PSD successfully. The value of count median aerodynamic diameter and geometric standard deviation laid within a range of about 0.67 μm to 0.73 μm, and 1.32 to 1.43, respectively while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurement suggested a typical centerline streamwise mean velocity decay of aerosol jet along with a reduction of particle sizes. In the final submission, a thorough literature review, detailed description of experimental procedure and discussion of the results will be provided. Particle size and turbulent characteristics of aerosol jets will be further examined, analyzing arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications in PSD simulation and validation of aerosol dosimetry model, leading to improving related aerosol generating devices.Keywords: E-cigarette aerosol, laser doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
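The count median aerodynamic diameter and geometric standard deviation quoted above are standard log-normal summary statistics; the short sketch below shows how they would be computed from a set of time-of-flight diameters (synthetic here). The authors' proposed EGP distribution is not reproduced, since its exact functional form is not given in the abstract.

```python
# Sketch: count median aerodynamic diameter (CMAD) and geometric standard
# deviation (GSD) from a set of particle diameters, assuming log-normal statistics.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic diameters drawn near the reported range (CMAD ~0.7 um, GSD ~1.4).
diameters_um = rng.lognormal(mean=np.log(0.7), sigma=np.log(1.4), size=10_000)

log_d = np.log(diameters_um)
cmad = np.exp(np.median(log_d))          # count median aerodynamic diameter
gsd = np.exp(np.std(log_d, ddof=1))      # geometric standard deviation

print(f"CMAD = {cmad:.2f} um, GSD = {gsd:.2f}")
```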
Procedia PDF Downloads 49
52 Micro-Oculi Facades as a Sustainable Urban Facade
Authors: Ok-Kyun Im, Kyoung Hee Kim
Abstract:
We live in an era that faces global challenges of climate changes and resource depletion. With the rapid urbanization and growing energy consumption in the built environment, building facades become ever more important in architectural practice and environmental stewardship. Furthermore, building facade undergoes complex dynamics of social, cultural, environmental and technological changes. Kinetic facades have drawn attention of architects, designers, and engineers in the field of adaptable, responsive and interactive architecture since 1980’s. Materials and building technologies have gradually evolved to address the technical implications of kinetic facades. The kinetic façade is becoming an independent system of the building, transforming the design methodology to sustainable building solutions. Accordingly, there is a need for a new design methodology to guide the design of a kinetic façade and evaluate its sustainable performance. The research objectives are two-fold: First, to establish a new design methodology for kinetic facades and second, to develop a micro-oculi façade system and assess its performance using the established design method. The design approach to the micro-oculi facade is comprised of 1) façade geometry optimization and 2) dynamic building energy simulation. The façade geometry optimization utilizes multi-objective optimization process, aiming to balance the quantitative and qualitative performances to address the sustainability of the built environment. The dynamic building energy simulation was carried out using EnergyPlus and Radiance simulation engines with scripted interfaces. The micro-oculi office was compared with an office tower with a glass façade in accordance with ASHRAE 90.1 2013 to understand its energy efficiency. The micro-oculi facade is constructed with an array of circular frames attached to a pair of micro-shades called a micro-oculus. The micro-oculi are encapsulated between two glass panes to protect kinetic mechanisms with longevity. The micro-oculus incorporates rotating gears that transmit the power to adjacent micro-oculi to minimize the number of mechanical parts. The micro-oculus rotates around its center axis with a step size of 15deg depending on the sun’s position while maximizing daylighting potentials and view-outs. A 2 ft by 2ft prototyping was undertaken to identify operational challenges and material implications of the micro-oculi facade. In this research, a systematic design methodology was proposed, that integrates multi-objectives of kinetic façade design criteria and whole building energy performance simulation within a holistic design process. This design methodology is expected to encourage multidisciplinary collaborations between designers and engineers to collaborate issues of the energy efficiency, daylighting performance and user experience during design phases. The preliminary energy simulation indicated that compared to a glass façade, the micro-oculi façade showed energy savings due to its improved thermal properties, daylighting attributes, and dynamic solar performance across the day and seasons. It is expected that the micro oculi façade provides a cost-effective, environmentally-friendly, sustainable, and aesthetically pleasing alternative to glass facades. Recommendations for future studies include lab testing to validate the simulated data of energy and optical properties of the micro-oculi façade. 
A 1:1 performance mock-up of the micro-oculi façade would provide an in-depth understanding of long-term operability and of new development opportunities applicable to urban façade applications. Keywords: energy efficiency, kinetic facades, sustainable architecture, urban facades
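As a rough illustration of the 15-degree stepping described for the micro-oculus, the sketch below snaps a desired sun-tracking angle to the nearest step. The control variable and sample angles are assumptions; the actual controller logic is not described in the abstract.

```python
# Sketch: each micro-oculus rotates about its centre axis in 15-degree steps to
# follow the sun. The solar azimuth values are placeholders; a real controller
# would take them from a solar-position model and the daylighting objectives.
STEP_DEG = 15

def oculus_setpoint(solar_azimuth_deg: float) -> int:
    """Snap the desired rotation to the nearest 15-degree step."""
    return int(round(solar_azimuth_deg / STEP_DEG) * STEP_DEG) % 360

for azimuth in (97.0, 151.0, 203.0, 266.0):   # morning through late afternoon
    print(azimuth, "->", oculus_setpoint(azimuth))
```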
Procedia PDF Downloads 257
51 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process
Authors: Reyna Singh, David Lokhat, Milan Carsky
Abstract:
The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at, comparatively, mild operating conditions. Via hydrogenation, the temperature-staged approach was investigated. In a two reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100μm; and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100barg. In the first stage, temperatures of 250℃ and 300℃, reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first stage liquid product was pumped into the second stage vertical reactor, which was designed to counter-currently contact the hydrogen rich gas stream and incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts; Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo); were compared in terms of their conversion, selectivity and HDS performance at temperatures 50℃ higher than the respective first stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to the testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS). Internal standard quantification methods for the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality; alkanes and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, an increased coal to liquid conversion was favoured by a lower operating temperature of 250℃, 60 minutes and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long chain alkanes undecane and dodecane, unsaturated alkenes octene and nonene and PAH compounds such as indene. The second stage product distribution showed an increase in the BTX quality of the liquid product, branched chain alkanes and a reduction in the sulphur concentration. As an HDS performer and selectivity to the production of long and branched chain alkanes, NiMo performed better than CoMo. CoMo is selective to a higher concentration of cyclohexane. For 16 days on stream each, NiMo had a higher activity than CoMo. The potential to cover the demand for low–sulphur, crude diesel and solvents from the production of high value hydrocarbon liquid in the said process, is thus demonstrated.Keywords: catalyst, coal, liquefaction, temperature-staged
Procedia PDF Downloads 648
50 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain
Authors: Zachary Blanks, Solomon Sonya
Abstract:
Poaching presents a serious threat to endangered animal species, environment conservations, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g. animal population densities, topography, behavior patterns of the criminals within the area, etc) interact with each other in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (Logistic Regression, Support Vector Machine, etc). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC) working in conjunction with the Department of Homeland Security to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies it to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month vice entire seasons, boosting techniques produce a mean area under the curve increase of approximately 3% relative to previous prediction schedules by entire seasons.Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection
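The sketch below illustrates the general pipeline described above: missing predictor values are filled by an iterative, random-forest-based imputer, a stochastic gradient boosting classifier is trained, and performance is scored by area under the ROC curve. The dataset is synthetic and the hyperparameters are placeholders, not the values used on the ranger-collected poaching data.

```python
# Illustrative pipeline (not the authors' code): imputation of missing features
# followed by stochastic gradient boosting, evaluated on held-out data by AUC.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=3000, n_features=8, n_informative=5, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.15] = np.nan        # knock out ~15% of values at random

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pipeline = make_pipeline(
    IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                     random_state=0),
    GradientBoostingClassifier(subsample=0.8, random_state=0),  # stochastic boosting
)
pipeline.fit(X_train, y_train)

auc = roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1])
print(f"AUC on held-out data: {auc:.3f}")
```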
Procedia PDF Downloads 292
49 Understanding Different Facets of Chromosome Abnormalities: A 17-year Cytogenetic Study and Indian Perspectives
Authors: Lakshmi Rao Kandukuri, Mamata Deenadayal, Suma Prasad, Bipin Sethi, Srinadh Buragadda, Lalji Singh
Abstract:
Worldwide; at least 7.6 million children are born annually with severe genetic or congenital malformations and among them 90% of these are born in mid and low-income countries. Precise prevalence data are difficult to collect, especially in developing countries, owing to the great diversity of conditions and also because many cases remain undiagnosed. The genetic and congenital disorder is the second most common cause of infant and childhood mortality and occurs with a prevalence of 25-60 per 1000 births. The higher prevalence of genetic diseases in a particular community may, however, be due to some social or cultural factors. Such factors include the tradition of consanguineous marriage, which results in a higher rate of autosomal recessive conditions including congenital malformations, stillbirths, or mental retardation. Genetic diseases can vary in severity, from being fatal before birth to requiring continuous management; their onset covers all life stages from infancy to old age. Those presenting at birth are particularly burdensome and may cause early death or life-long chronic morbidity. Genetic testing for several genetic diseases identifies changes in chromosomes, genes, or proteins. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. Several hundred genetic tests are currently in use and more are being developed. Chromosomal abnormalities are the major cause of human suffering, which are implicated in mental retardation, congenital malformations, dysmorphic features, primary and secondary amenorrhea, reproductive wastage, infertility neoplastic diseases. Cytogenetic evaluation of patients is helpful in the counselling and management of affected individuals and families. We present here especially chromosomal abnormalities which form a major part of genetic disease burden in India. Different programmes on chromosome research and human reproductive genetics primarily relate to infertility since this is a major public health problem in our country, affecting 10-15 percent of couples. Prenatal diagnosis of chromosomal abnormalities in high-risk pregnancies helps in detecting chromosomally abnormal foetuses. Such couples are counselled regarding the continuation of pregnancy. In addition to the basic research, the team is providing chromosome diagnostic services that include conventional and advanced techniques for identifying various genetic defects. Other than routine chromosome diagnosis for infertility, also include patients with short stature, hypogonadism, undescended testis, microcephaly, delayed developmental milestones, familial, and isolated mental retardation, and cerebral palsy. Thus, chromosome diagnostics has found its applicability not only in disease prevention and management but also in guiding the clinicians in certain aspects of treatment. It would be appropriate to affirm that chromosomes are the images of life and they unequivocally mirror the states of human health. The importance of genetic counseling is increasing with the advancement in the field of genetics. The genetic counseling can help families to cope with emotional, psychological, and medical consequences of genetic diseases.Keywords: India, chromosome abnormalities, genetic disorders, cytogenetic study
Procedia PDF Downloads 315
48 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly in distributed cloud applications due to the advantages on the software composition, development speed, release cycle frequency and the business logic time to market. On the other hand, these architectures also introduce some challenges on the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, easy of software distribution and isolation of processes. However, there are other issues that remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod — a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage and they do not follow a pure immutable infrastructure approach since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error on the control layer occurs, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit —- a tool for creating minimal linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS cloud formation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we defend that adding a new control layer implies more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure since linuxkit installs the minimum set of dependencies to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.Keywords: container, deployment, immutable infrastructure, microservice
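To make the i2kit workflow easier to picture, the sketch below shows a hypothetical transformation from a declarative microservice definition (a pod of containers) to a CloudFormation-style template with per-service instances behind a load balancer. The field names, resource shapes, and AMI handling are invented for illustration and are not i2kit's actual schema or output.

```python
# Hypothetical sketch of the kind of transformation described for i2kit:
# a declarative microservice definition becomes a CloudFormation-style template
# with one immutable VM per replica plus a load balancer. Not the tool's schema.
import json

service_definition = {
    "name": "orders",
    "replicas": 2,
    "containers": [
        {"name": "api", "image": "example/orders-api:1.4.2", "port": 8080},
        {"name": "cache", "image": "redis:6-alpine", "port": 6379},
    ],
}

def to_cloudformation(svc: dict, ami_id: str) -> dict:
    """Build a minimal template: one EC2 instance per replica plus a load balancer."""
    name = svc["name"].capitalize()
    resources = {
        f"{name}LoadBalancer": {
            "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "Properties": {"Type": "network"},
        }
    }
    for i in range(svc["replicas"]):
        resources[f"{name}Instance{i}"] = {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": ami_id, "InstanceType": "t3.micro"},
        }
    return {"Resources": resources}

# In the described workflow, the AMI would be built on the fly with linuxkit
# from the container list; here a placeholder AMI id is passed in.
print(json.dumps(to_cloudformation(service_definition, "ami-0123456789abcdef0"), indent=2))
```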
Procedia PDF Downloads 179
47 Simulation and Analysis of Mems-Based Flexible Capacitive Pressure Sensors with COMSOL
Authors: Ding Liangxiao
Abstract:
The technological advancements in Micro-Electro-Mechanical Systems (MEMS) have significantly contributed to the development of new, flexible capacitive pressure sensors,which are pivotal in transforming wearable and medical device technologies. This study employs the sophisticated simulation tools available in COMSOL Multiphysics® to develop and analyze a MEMS-based sensor with a tri-layered design. This sensor comprises top and bottom electrodes made from gold (Au), noted for their excellent conductivity, a middle dielectric layer made from a composite of Silver Nanowires (AgNWs) embedded in Thermoplastic Polyurethane (TPU), and a flexible, durable substrate of Polydimethylsiloxane (PDMS). This research was directed towards understanding how changes in the physical characteristics of the AgNWs/TPU dielectric layer—specifically, its thickness and surface area—impact the sensor's operational efficacy. We assessed several key electrical properties: capacitance, electric potential, and membrane displacement under varied pressure conditions. These investigations are crucial for enhancing the sensor's sensitivity and ensuring its adaptability across diverse applications, including health monitoring systems and dynamic user interface technologies. To ensure the reliability of our simulations, we applied the Effective Medium Theory to calculate the dielectric constant of the AgNWs/TPU composite accurately. This approach is essential for predicting how the composite material will perform under different environmental and operational stresses, thus facilitating the optimization of the sensor design for enhanced performance and longevity. Moreover, we explored the potential benefits of innovative three-dimensional structures for the dielectric layer compared to traditional flat designs. Our hypothesis was that 3D configurations might improve the stress distribution and optimize the electrical field interactions within the sensor, thereby boosting its sensitivity and accuracy. Our simulation protocol includes comprehensive performance testing under simulated environmental conditions, such as temperature fluctuations and mechanical pressures, which mirror the actual operational conditions. These tests are crucial for assessing the sensor's robustness and its ability to function reliably over extended periods, ensuring high reliability and accuracy in complex real-world environments. In our current research, although a full dynamic simulation analysis of the three-dimensional structures has not yet been conducted, preliminary explorations through three-dimensional modeling have indicated the potential for mechanical and electrical performance improvements over traditional planar designs. These initial observations emphasize the potential advantages and importance of incorporating advanced three-dimensional modeling techniques in the development of Micro-Electro-Mechanical Systems (MEMS)sensors, offering new directions for the design and functional optimization of future sensors. Overall, this study not only highlights the powerful capabilities of COMSOL Multiphysics® for modeling sophisticated electronic devices but also underscores the potential of innovative MEMS technology in advancing the development of more effective, reliable, and adaptable sensor solutions for a broad spectrum of technological applications.Keywords: MEMS, flexible sensors, COMSOL Multiphysics, AgNWs/TPU, PDMS, 3D modeling, sensor durability
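The dielectric and capacitance calculations central to this sensor can be sketched with the effective-medium and parallel-plate relations below. The permittivity values, fill fraction, electrode area, and gap values are illustrative assumptions rather than the parameters used in the COMSOL study.

```python
# Back-of-the-envelope sketch: effective permittivity of an AgNWs/TPU layer via a
# Maxwell-Garnett mixing rule, then parallel-plate capacitance as the dielectric
# thickness decreases under pressure. All material constants are assumptions.
EPS0 = 8.854e-12          # vacuum permittivity [F/m]

def maxwell_garnett(eps_matrix: float, eps_inclusion: float, fill_fraction: float) -> float:
    """Effective permittivity of dilute inclusions dispersed in a host matrix."""
    ratio = (eps_inclusion - eps_matrix) / (eps_inclusion + 2 * eps_matrix)
    return eps_matrix * (1 + 3 * fill_fraction * ratio) / (1 - fill_fraction * ratio)

def capacitance(eps_r: float, area_m2: float, gap_m: float) -> float:
    """Ideal parallel-plate capacitance."""
    return eps_r * EPS0 * area_m2 / gap_m

eps_tpu = 5.0             # assumed relative permittivity of TPU
eps_agnw = 1e4            # large value standing in for metallic nanowires
eps_eff = maxwell_garnett(eps_tpu, eps_agnw, fill_fraction=0.05)

area = (2e-3) ** 2        # 2 mm x 2 mm electrode (assumed)
for gap_um in (100, 90, 80, 70):            # gap shrinking as pressure increases
    c = capacitance(eps_eff, area, gap_um * 1e-6)
    print(f"gap {gap_um:3d} um -> C = {c * 1e12:.2f} pF")
```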
Procedia PDF Downloads 44
46 Numerical Modeling of Phase Change Materials Walls under Reunion Island's Tropical Weather
Authors: Lionel Trovalet, Lisa Liu, Dimitri Bigot, Nadia Hammami, Jean-Pierre Habas, Bruno Malet-Damour
Abstract:
The MCP-iBAT(1) project is carried out to study the behavior of Phase Change Materials (PCM) integrated in building envelopes in a tropical environment. Through the phase transitions (melting and freezing) of the material, thermal energy can be absorbed or released. This process enables the regulation of indoor temperatures and the improvement of thermal comfort for the occupants. Most of the commercially available PCMs are more suitable to temperate climates than to tropical climates. The case of Reunion Island is noteworthy, as there are multiple micro-climates. This leads to our key question: developing one or multiple bio-based PCMs that cover the thermal needs of the different locations of the island. The present paper focuses on the numerical approach used to select the PCM properties relevant to tropical areas. Numerical simulations have been carried out with two software tools: EnergyPlus™ and Isolab. The latter has been developed in the laboratory, with the implicit Finite Difference Method, in order to evaluate different physical models. Both are Thermal Dynamic Simulation (TDS) tools that predict the building's thermal behavior with one-dimensional heat transfers. The parameters used in this study are the construction's characteristics (dimensions and materials) and the environment's description (meteorological data and building surroundings). The building is modeled in accordance with the experimental setup. It is divided into two rooms, cells A and B, with the same dimensions. Cell A is the reference, while in cell B a layer of commercial PCM (Thermo Confort of MCI Technologies) has been applied to the inner surface of the north wall. Sensors are installed in each room to retrieve temperatures, heat flows, and humidity rates. The collected data are used for comparison with the numerical results. Our strategy is to implement two similar buildings at different altitudes (Saint-Pierre: 70 m and Le Tampon: 520 m) to measure different temperature ranges; we are therefore able to collect data for various seasons during a condensed time period. The following methodology is used to validate the numerical models: calibration of the thermal and PCM models in EnergyPlus™ and Isolab based on experimental measures, then numerical testing with a sensitivity analysis of the parameters to reach the targeted indoor temperatures. The calibration relies on the past ten months' measurements (from September 2020 to June 2021), with a focus on a one-week study in November (beginning of summer), when the effect of PCM on inner surface temperatures is more visible. A first simulation with the PCM model of EnergyPlus gave results approaching the measurements with a mean error of 5%. The property studied in this paper is the melting temperature of the PCM. By determining the representative temperatures of winter, summer, and the inter-seasons from past annual weather data, it is possible to build a numerical model of multi-layered PCM. Hence, the combined properties of the materials will provide an optimal scenario for the application of PCM in tropical areas. Future work will focus on the development of bio-based PCMs with the selected properties, followed by experimental and numerical validation of the materials. (1) MCP-iBAT: Matériaux à Changement de Phase, une innovation pour le Bâti Tropical (Phase Change Materials, an innovation for tropical buildings). Keywords: energyplus, multi-layer of PCM, phase changing materials, tropical area
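The implicit finite-difference approach mentioned for Isolab can be sketched as a one-dimensional backward-Euler conduction model in which the PCM's latent heat is folded into an apparent heat capacity around the melting point. All material properties, geometry, and boundary temperatures below are placeholders, not the commercial PCM's data or the project's calibrated model.

```python
# Sketch: 1D implicit (backward Euler) conduction through a wall layer, with the
# PCM latent heat represented by an apparent heat capacity over its melting range.
import numpy as np

nx, dx, dt = 50, 0.002, 60.0          # 50 nodes, 2 mm spacing, 60 s time step
k, rho = 0.2, 900.0                   # conductivity [W/mK], density [kg/m^3]
cp_solid, latent, t_melt, melt_range = 2000.0, 180e3, 26.0, 2.0

def apparent_cp(temp):
    """Sensible heat plus latent heat spread over the melting range."""
    cp = np.full_like(temp, cp_solid)
    in_range = np.abs(temp - t_melt) < melt_range / 2
    cp[in_range] += latent / melt_range
    return cp

T = np.full(nx, 22.0)                 # initial layer temperature [degC]
T_out, T_in = 35.0, 24.0              # outdoor / indoor boundary temperatures [degC]

for _ in range(24 * 60):              # simulate 24 hours in 60 s steps
    cp = apparent_cp(T)
    r = k * dt / (rho * cp * dx ** 2)  # per-node Fourier numbers
    # Assemble the implicit tridiagonal system A @ T_new = T_old.
    A = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r[i], 1 + 2 * r[i], -r[i]
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundaries
    b = T.copy()
    b[0], b[-1] = T_out, T_in
    T = np.linalg.solve(A, b)

print(f"mid-layer temperature after 24 h: {T[nx // 2]:.2f} degC")
```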
Procedia PDF Downloads 95
45 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration
Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed and how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub 40 micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. Xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle laden returned drilling fluid is used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representivity by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to the sample biasing with respect to geochemical signatures due to particle size distributions is presented and shows that, depending on the solids control and dewatering techniques used, it can have unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled tubing based greenfields mineral exploration.Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control
Procedia PDF Downloads 24844 Prevalence and Diagnostic Evaluation of Schistosomiasis in School-Going Children in Nelson Mandela Bay Municipality: Insights from Urinalysis and Point-of-Care Testing
Authors: Maryline Vere, Wilma ten Ham-Baloyi, Lucy Ochola, Opeoluwa Oyedele, Lindsey Beyleveld, Siphokazi Tili, Takafira Mduluza, Paula E. Melariri
Abstract:
Schistosomiasis, caused by the Schistosoma (S.) haematobium and Schistosoma (S.) mansoni parasites, poses a significant public health challenge in low-income regions. Diagnosis typically relies on identifying specific urine biomarkers such as haematuria, protein, and leukocytes for S. haematobium, while the Point-of-Care Circulating Cathodic Antigen (POC-CCA) assay is employed for detecting S. mansoni. Urinalysis and the POC-CCA assay are favoured for their rapid, non-invasive nature and cost-effectiveness. However, traditional diagnostic methods such as Kato-Katz and urine filtration lack sensitivity in low-transmission areas, which can lead to underreporting of cases and hinder effective disease control efforts. Therefore, in this study, urinalysis and the POC-CCA assay were utilised to effectively diagnose schistosomiasis among school-going children in Nelson Mandela Bay Municipality. This was a cross-sectional study with a total of 759 children, aged 5 to 14 years, who provided urine samples. Urinalysis was performed using urinary dipstick tests, which measure multiple parameters, including haematuria, protein, leukocytes, bilirubin, urobilinogen, ketones, pH, specific gravity, and other biomarkers. Urinalysis was performed by dipping the strip into the urine sample and observing colour changes on specific reagent pads. The POC-CCA test was conducted by applying a drop of urine onto a cassette containing CCA-specific antibodies, and the presence of a visible test line indicated a positive result for S. mansoni infection. Descriptive statistics were used to summarize urine parameters, and Pearson correlation coefficients (r) were calculated to analyze associations among urine parameters using R software (version 4.3.1). Among the 759 children, the prevalence of S. haematobium using haematuria as a diagnostic marker was 33.6%. Additionally, leukocytes were detected in 21.3% of the samples, and protein was present in 15%. The prevalence of positive POC-CCA test results for S. mansoni was 3.7%. Urine parameters exhibited low to moderate associations, suggesting complex interrelationships. For instance, specific gravity and pH showed a negative correlation (r = -0.37), indicating that higher specific gravity was associated with lower pH. Weak correlations were observed between haematuria and pH (r = -0.10), bilirubin and ketones (r = 0.14), protein and bilirubin (r = 0.13), and urobilinogen and pH (r = 0.12). A mild positive correlation was found between leukocytes and blood (r = 0.23), reflecting some association between these inflammation markers. In conclusion, the study identified a significant prevalence of schistosomiasis among school-going children in Nelson Mandela Bay Municipality, with S. haematobium detected through haematuria and S. mansoni identified using the POC-CCA assay. Leukocytes and protein detected in urine samples serve as critical biomarkers for schistosomiasis infection, reinforcing the presence of schistosomiasis in the study area when considered alongside haematuria. These urine parameters are indicative of inflammatory responses associated with schistosomiasis, underscoring the necessity for effective diagnostic methodologies. Such findings highlight the importance of comprehensive diagnostic assessments to accurately identify and monitor schistosomiasis prevalence and its associated health impacts.
The significant burden of schistosomiasis in this population highlights the urgent need to develop targeted control interventions to effectively reduce its prevalence in the study area.Keywords: schistosomiasis, urinalysis, haematuria, POC-CCA
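The correlation analysis reported above was performed in R (version 4.3.1). Purely as an illustration, and for consistency with the other examples in this listing, the sketch below reproduces the same kind of pairwise Pearson calculation in Python on a simulated stand-in for the dipstick readings; the column names and generated values are hypothetical.

```python
# Sketch of the correlation step described above, done in Python rather than R;
# the data frame and column values are hypothetical stand-ins for the dipstick readings.
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)
urine = pd.DataFrame({
    "haematuria": rng.integers(0, 4, 759),        # ordinal dipstick grades 0-3
    "protein": rng.integers(0, 4, 759),
    "leukocytes": rng.integers(0, 4, 759),
    "ph": rng.uniform(5.0, 8.0, 759),
    "specific_gravity": rng.uniform(1.005, 1.030, 759),
})

corr = urine.corr(method="pearson")               # pairwise Pearson r
print(corr.round(2))
```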
Procedia PDF Downloads 1943 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling
Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie
Abstract:
Water electrolysis is one of the most advanced technologies for producing hydrogen and can be easily combined with electricity from different sources. Under the influence of electric current, water molecules can be split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittency through the storage of the energy produced when production exceeds demand and its release during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyzer cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools to save time, money, and effort during the optimization of operating conditions and the investigation of the design. Modeling and simulation become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be used to understand the underlying mechanisms, control the system, and detect flaws. Several modeling methods have been proposed. These methods can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it is based on ordinary or partial differential equations to represent the systems. In contrast, the latter approach is more user-friendly and allows a clear representation of physical phenomena. In this case, the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, making it suitable for multiphysics systems. Among the graphical modelling methods, the bond graph is receiving increasing attention, as it is domain-independent and relies on the energy exchange between the components of the system. At present, few studies have investigated the modelling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model the electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and mathematical approaches. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and oxygen evolution reaction (OER) accounted for the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve for operating conditions at various temperatures illustrated that voltage increases with temperature owing to the technology of the membrane. The polarization curve can be explored virtually through simulation, reducing the cost and time of experimental testing and improving design optimization. 
Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling
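To make the loss decomposition discussed above concrete, the following equation-based sketch builds a simple polarization curve from a reversible voltage, Tafel-type HER/OER activation overpotentials, and an ohmic membrane term. All kinetic constants and the area-specific resistance are assumed placeholders, not parameters fitted in the study.

```python
# Minimal equation-based sketch of an AEM electrolyzer polarization curve:
# reversible voltage + HER/OER activation overpotentials (Tafel form) + ohmic loss.
# Every numerical parameter below is an assumed placeholder value.
import numpy as np

F, R = 96485.0, 8.314          # Faraday constant (C/mol), gas constant (J/mol.K)
T = 333.15                     # cell temperature (K), ~60 degC assumed
E_rev = 1.23                   # reversible voltage (V), simplified constant
alpha_a, j0_a = 1.0, 1e-5      # OER effective transfer coeff. and exchange current (A/cm2)
alpha_c, j0_c = 1.0, 1e-2      # HER values (A/cm2), assumed
R_ohm = 0.25                   # area-specific ohmic resistance (ohm.cm2), assumed

j = np.linspace(0.01, 2.0, 200)                       # current density (A/cm2)
eta_act = (R * T / (alpha_a * F)) * np.log(j / j0_a) \
        + (R * T / (alpha_c * F)) * np.log(j / j0_c)  # activation losses
V_cell = E_rev + eta_act + j * R_ohm                  # ohmic loss grows linearly with j

print(f"cell voltage at 1 A/cm2: {V_cell[np.argmin(abs(j - 1.0))]:.2f} V")
```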
Procedia PDF Downloads 9142 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem
Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze
Abstract:
In the modern world, emergency management decision support systems are actively used by state organizations, which are interested in extreme and abnormal processes and provide optimal and safe management of the supply needed for civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Obviously, these kinds of extreme events cause significant losses and damage to the infrastructure. In such cases, the use of intelligent support technologies is very important for quick and optimal location-transportation of emergency services in order to avoid new losses caused by these events. Timely servicing from emergency service centers to the affected disaster regions (response phase) is a key task of the emergency management system. Scientific research in this field plays an important role in decision-making problems. Our goal was to create an expert knowledge-based intelligent support system, which serves as an assistant tool to provide optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data, as well as expert evaluations. The outputs of the system are solutions to the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the Intelligent Support System were done on the example of an experimental disaster region (for a geographical zone of Georgia) that was generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expected total transportation duration of needed products. The second objective is to minimize the total selection unreliability index of opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs. The fourth objective minimizes the non-covered demand across all demand points. Possibility chance constraints and objective constraints were constructed based on objective-subjective data. The FMOELTP was constructed in a static and fuzzy environment, since the decisions to be made are taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster's seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding the exact solutions of the problem. It is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large problem dimensions, the exact method can require long computing times. Thus, we propose an approximate method that imposes a number of stopping criteria on the exact method. For large instances of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem
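The epsilon-constraint idea referred to above can be illustrated on a deliberately tiny, crisp (non-fuzzy) location problem: minimize total transport time while bounding the number of opened centres, then sweep the bound to trace a Pareto front. The data and structure below are invented for clarity and are far simpler than the FMOELTP addressed in the study.

```python
# Toy illustration of the epsilon-constraint method on a tiny location problem:
# objective 1 = total transport time, objective 2 = number of opened centres.
from itertools import combinations

# transport time from each candidate centre to each of 4 demand points (hours), assumed
time = {"C1": [2, 5, 9, 4], "C2": [6, 3, 2, 7], "C3": [4, 4, 5, 3]}
centres = list(time)

def total_time(opened):
    # each demand point is served by its nearest opened centre
    return sum(min(time[c][d] for c in opened) for d in range(4))

pareto = {}
for eps in (1, 2, 3):                        # epsilon bound on objective 2
    best = min(
        (total_time(s), s)
        for r in range(1, eps + 1)
        for s in combinations(centres, r)
    )
    pareto[eps] = best

for eps, (t, s) in pareto.items():
    print(f"at most {eps} centres -> open {s}, total time {t} h")
```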
Procedia PDF Downloads 32141 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to a conventional wound rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates of a typically 30%-rated back-to-back power electronics converter in 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: the maintenance-free operation afforded by its brushless structure, 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one), and superior grid integration properties including simpler protection for the low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion relative to hysteresis counterparts with variable switching rates and as such have been a predominant choice for BDFRG (and DFIG) wind turbines. Eliminating a shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growth of research on sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to accurately identify by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme presented here overcomes this shortcoming. The true machine parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio on which the underlying MRAS observer of rotor angular velocity and position relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoderless controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
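For orientation only, the following skeleton shows the generic MRAS adaptation loop in its simplest form: a reference quantity that does not depend on the speed estimate is compared against an adjustable model that does, and the error drives a PI adaptation law. This is a toy illustration, not the BDFRG-specific, machine-parameter-free observer of the paper; every signal, gain, and initial value is assumed.

```python
# Skeleton of a generic MRAS speed-adaptation loop (toy illustration only).
# The reference model stands in for a quantity computed from measurements; the
# adjustable model depends on the speed estimate; a PI law drives their error to zero.
import numpy as np

dt, t_end = 1e-4, 0.5
omega_true = 100.0            # "true" rotor speed (rad/s), assumed constant here
omega_hat = 0.0               # initial speed estimate
kp, ki = 50.0, 4000.0         # PI adaptation gains, hand-tuned for this toy
integ = 0.0

x_ref = np.array([1.0, 0.0])  # reference-model state (e.g. a unit flux vector)
x_adj = np.array([1.0, 0.0])  # adjustable-model state

for _ in range(int(t_end / dt)):
    # reference model rotates at the true speed; adjustable model at the estimate
    x_ref = x_ref + dt * omega_true * np.array([-x_ref[1], x_ref[0]])
    x_adj = x_adj + dt * omega_hat * np.array([-x_adj[1], x_adj[0]])
    x_ref /= np.linalg.norm(x_ref)        # keep unit length (forward Euler drifts)
    x_adj /= np.linalg.norm(x_adj)
    # error > 0 when the adjustable model lags the reference, which raises the estimate
    e = x_adj[0] * x_ref[1] - x_adj[1] * x_ref[0]
    integ += ki * e * dt
    omega_hat = kp * e + integ

print(f"estimated speed after {t_end}s: {omega_hat:.1f} rad/s (true {omega_true})")
```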
Procedia PDF Downloads 6240 Horizontal Cooperative Game Theory in Hotel Revenue Management
Authors: Ririh Rahma Ratinghayu, Jayu Pramudya, Nur Aini Masruroh, Shi-Woei Lin
Abstract:
This research studies pricing strategy in a cooperative setting of a hotel duopoly selling a perishable product under a fixed capacity constraint, from the perspective of managers. In hotel revenue management, the competitor's average room rate and occupancy rate should be taken into account by managers when determining a pricing strategy to generate optimum revenue. This information is not provided by business intelligence or available on the competitor's website. Thus, Information Sharing (IS) among players might result in improved performance of the pricing strategy. IS is widely adopted in the logistics industry, but IS within the hospitality industry has not been well studied. This research treats IS as one of the cooperative game schemes, alongside the Mutual Price Setting (MPS) scheme. In the off-peak season, hotel managers arrange pricing strategies offering promotion packages and various kinds of discounts of up to 60% of the full price to attract customers. A competitor selling a homogeneous product will react the same way, triggering a price war. A price war, which generates lower revenue, may be avoided by collaborating on pricing strategy to optimize the payoff for both players. In the MPS cooperative game, players collaborate to set a room rate applied to both players. A cooperative game may avoid the unfavorable payoffs caused by a price war. Research on horizontal cooperative games in logistics shows better performance and payoffs for the players; however, horizontal cooperative games in hotel revenue management have not been demonstrated. This paper aims to develop hotel revenue management models under duopoly cooperative schemes (IS & MPS), which are compared to models under a non-cooperative scheme. Each scheme has five models: the Capacity Allocation Model, Demand Model, Revenue Model, Optimal Price Model, and Equilibrium Price Model. The Capacity Allocation and Demand Models employ the hotel's own and the competitor's full and discount prices as predictors under a non-linear relation. The optimal price is obtained by assuming a revenue maximization motive. The equilibrium price is observed by interacting the hotel's own and the competitor's optimal prices through reaction equations. Equilibrium is analyzed using a game theory approach. The same sequence applies to all three schemes; the MPS scheme, in contrast, aims to optimize the total players' payoff. The case study to which the theoretical models are applied observes two hotels offering a homogeneous product in Indonesia over one year. The Capacity Allocation, Demand, and Revenue Models are built using multiple regression and statistically tested for validation. The case study data confirm that price behaves within the demand model in a non-linear manner. The IS models can represent the actual demand and revenue data better than the non-IS models. Furthermore, IS enables hotels to earn significantly higher revenue. Thus, duopoly hotel players in general might have reasonable incentives to share information horizontally. During the off-peak season, the MPS models are able to predict the optimal equal price for both hotels. However, a Nash equilibrium may not always exist, depending on the actual payoffs of adhering to or betraying the mutual agreement. To optimize performance, a horizontal cooperative game may be chosen over a non-cooperative game. Mathematical models can also be used to detect collusion among business players. 
Empirical testing can be used as policy input for market regulators in preventing unethical business practices that potentially harm societal welfare.Keywords: horizontal cooperative game theory, hotel revenue management, information sharing, mutual price setting
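A stylized sketch of the competitive versus mutual-price-setting outcomes described above is given below for two hotels with a linear demand model; the demand coefficients are illustrative assumptions rather than the regression estimates obtained from the case study.

```python
# Stylized sketch of a hotel duopoly: Nash equilibrium via reaction functions
# versus a mutual-price-setting (MPS) common price. Coefficients are assumed.
a, b, c = 200.0, 1.0, 0.6     # base demand, own-price and cross-price sensitivity

def best_response(p_rival):
    # maximise R_i = p_i * (a - b*p_i + c*p_rival)  ->  p_i = (a + c*p_rival) / (2b)
    return (a + c * p_rival) / (2 * b)

# Non-cooperative scheme: iterate the reaction functions to the equilibrium price
p1 = p2 = 100.0
for _ in range(200):
    p1, p2 = best_response(p2), best_response(p1)
rev_nash = p1 * (a - b * p1 + c * p2)

# MPS scheme: both hotels charge the same price p maximising joint revenue
p_mps = a / (2 * (b - c))
rev_mps = p_mps * (a - b * p_mps + c * p_mps)

print(f"Nash price {p1:6.1f}, revenue per hotel {rev_nash:8.1f}")
print(f"MPS price  {p_mps:6.1f}, revenue per hotel {rev_mps:8.1f}")
```

Under these assumed coefficients the collaborative price is higher than the competitive one and yields a larger payoff per hotel, which is the incentive structure the abstract describes.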
Procedia PDF Downloads 28939 The Potential of Rhizospheric Bacteria for Mycotoxigenic Fungi Suppression
Authors: Vanja Vlajkov, Ivana Pajčin, Mila Grahovac, Marta Loc, Dragana Budakov, Jovana Grahovac
Abstract:
Rhizosphere soil refers to the dynamic environment of plant roots, characterized by the high biological activity of its inhabitants. Rhizospheric bacteria are recognized as effective biocontrol agents and are considered central to alternative strategies for ecological plant disease management. Suppressing fungal pathogens is an urgent task, not only because of the direct economic losses caused by infection but also due to their ability to produce mycotoxins with harmful effects on human health. Aspergillus and Fusarium species are well-known producers of toxigenic metabolites with a high capacity to colonize crops and enter the food chain. Bacteria belonging to the Bacillus genus have been recognized as plant-beneficial species in agricultural practice and identified as plant growth-promoting rhizobacteria (PGPR). Despite their incontestable potential, the full commercialization of microbial biopesticides is still in a preliminary phase. Thus, there is a constant need for estimating the suitability of novel strains to be used as the central point of a viable bioprocess leading to market-ready product development. In the present study, 76 potential producer strains were isolated from rhizosphere soil sampled from different localities in the Autonomous Province of Vojvodina, Republic of Serbia. The selective isolation of strains started by resuspending 1 g of each soil sample in 9 ml of saline and incubating at 28 °C for 15 minutes at 150 rpm. After homogenization, thermal treatment at 100 °C for 7 minutes was performed. Dilution series (10⁻¹–10⁻³) were prepared, and 500 µl of each was inoculated on nutrient agar plates and incubated at 28 °C for 48 h. Pure cultures of morphologically different strains presumptively belonging to the Bacillus genus were obtained by the spread-plate technique. The cultivation of the isolated strains was carried out in Erlenmeyer flasks for 96 h at 28 °C and 170 rpm. The antagonistic activity screening included two phytopathogenic fungi as test microorganisms: Aspergillus sp. and Fusarium sp. The mycelial growth inhibition was estimated based on the antimicrobial activity testing of the cultivation broth by the diffusion method. For Aspergillus sp., the highest antifungal activity was recorded for the isolates Kro-4a and Mah-1a. In contrast, for Fusarium sp., the following 15 isolates exhibited the highest antagonistic effect: Par-1, Par-2, Par-3, Par-4, Kup-4, Paš-1b, Pap-3, Kro-2, Kro-3a, Kro-3b, Kra-1a, Kra-1b, Šar-1, Šar-2b and Šar-4. One-way ANOVA was performed to determine the statistical significance of the antagonists' effect on inhibition zone diameter. Duncan's multiple range test was conducted to define homogeneous groups of antagonists with the same level of statistical significance regarding the antimicrobial activity of the tested cultivation broths against the tested pathogens. The results point to the significant in vitro potential of the isolated strains to be used as biocontrol agents for the suppression of the tested mycotoxigenic fungi. Further research should include the identification and detailed characterization of the most promising isolates and of the mode of action of the selected strains as biocontrol agents, as well as bioprocess optimization steps to fully realize the selected strains' potential as microbial biopesticides and to design cost-effective biotechnological production.Keywords: Bacillus, biocontrol, bioprocess, mycotoxigenic fungi
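The ANOVA step mentioned above can be sketched as follows; the inhibition-zone diameters are fabricated illustration values (the isolate labels are borrowed from the text only as names), and Duncan's multiple range test is omitted because it is not available in SciPy.

```python
# Minimal sketch of the one-way ANOVA stage on inhibition-zone diameters;
# the measurements below are invented for illustration only.
from scipy import stats

# inhibition zone diameters (mm) for three isolates, 5 hypothetical replicates each
kro_4a = [28.1, 27.5, 29.0, 28.4, 27.9]
mah_1a = [26.0, 25.4, 26.8, 25.9, 26.2]
par_1  = [21.3, 22.0, 20.8, 21.6, 21.1]

f_stat, p_value = stats.f_oneway(kro_4a, mah_1a, par_1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> isolate effect is significant
```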
Procedia PDF Downloads 19638 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data
Authors: Linna Li
Abstract:
The capability to move around places is doubtless very important for individuals to maintain good health and social functions. People’s activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commuting. Understanding people’s travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people’s trajectories in Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. Information associated with tweets includes text, time, location, and user ID. Information associated with users includes name, the number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. Locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people’s movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time that covers 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z axis can be selected for displaying activity density surface within a particular time period. In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on top of ancillary layers including land use and road networks in 3D visualization, the vivid representation of a realistic view of the urban environment boosts situational awareness of the map reader. A comparison of the same individual’s paths on different days shows some regular patterns on weekdays for some Twitter users, but for some other users, their daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using the big data approach and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of data collection on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.Keywords: geovisualization, human mobility pattern, Los Angeles, social media
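As an illustration of the kernel density estimation step used to build the activity hot-spot surfaces, the sketch below evaluates a 2D Gaussian KDE over a grid of coordinates; the points are random stand-ins for the harvested, georeferenced tweets, and the bounding box is only an approximate guess at the Greater L.A. extent.

```python
# Sketch of the KDE step behind the activity hot-spot surfaces; the coordinates
# are synthetic stand-ins for geotagged tweet locations.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
lon = rng.uniform(-118.7, -117.6, 2000)      # longitudes, synthetic
lat = rng.uniform(33.6, 34.4, 2000)          # latitudes, synthetic

kde = gaussian_kde(np.vstack([lon, lat]))    # bandwidth chosen by Scott's rule

# evaluate the density on a regular grid (one time slice of the 24-hour cycle)
gx, gy = np.meshgrid(np.linspace(-118.7, -117.6, 100), np.linspace(33.6, 34.4, 100))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print("peak density cell:", np.unravel_index(density.argmax(), density.shape))
```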
Procedia PDF Downloads 11837 Empowering and Educating Young People Against Cybercrime by Playing: The Rayuela Method
Authors: Jose L. Diego, Antonio Berlanga, Gregorio López, Diana López
Abstract:
The Rayuela method is a success story: it is part of a project selected by the European Commission in response to its own challenge call for a better understanding of the human factors, as well as the social and organisational aspects, that can help solve issues in the fight against crime. The Rayuela method specifically focuses on the drivers of cyber criminality, including approaches to prevent, investigate, and mitigate cybercriminal behavior. As the internet has become an integral part of young people’s lives, they are the key target of the Rayuela method, because they (as victims or as perpetrators) are the most vulnerable link in the chain. Considering the increased time spent online, the limited control over their internet usage, and the low level of awareness of cyber threats and their potential impact, the proliferation of incidents due to human mistakes is understandable. 51% of Europeans do not feel well informed about cyber threats, and 86% believe that the risk of becoming a victim of cybercrime is rapidly increasing. On the other hand, law enforcement has noted that more and more young people are committing cybercrimes. This is an international problem that has considerable cost implications; it is estimated that crimes in cyberspace will cost the global economy $445B annually. Understanding all these phenomena points to the necessity of a shift in focus from sanctions to deterrence and prevention. As a research project, Rayuela aims to bring together law enforcement agencies (LEAs), sociologists, psychologists, anthropologists, legal experts, computer scientists, and engineers to develop novel methodologies that allow a better understanding of the factors affecting online behavior related to new forms of cyber criminality, as well as promoting the potential of young talents for cybersecurity and technology. Rayuela’s main goal is to better understand the drivers and human factors affecting certain relevant forms of cyber criminality, as well as to empower and educate young people, through play, about the benefits, risks, and threats intrinsically linked to the use of the Internet, thus preventing and mitigating cybercriminal behavior. In order to reach that goal, an interdisciplinary consortium (formed by 17 international partners) carries out research and actions such as profiling and case studies of cybercriminals and victims, risk assessments, studies on the Internet of Things and its vulnerabilities, development of a serious gaming environment, training activities, data analysis and interpretation using artificial intelligence, testing and piloting, etc. To facilitate the real-world implementation of the Rayuela method as a community policing strategy, it is crucial to count on a police force with a solid background in trust-building and community policing to carry out the piloting, specifically with young people. In this sense, the Valencia Local Police is a pioneering police force working with young people in conflict solving, providing police mediation and peer mediation services and advice. As an example, it is an official mediation institution, so agreements reached with its police mediators have, once signed by the parties, the value of a judicial decision.Keywords: fight against crime and insecurity, avert and prepare young people against aggression, ICT, serious gaming and artificial intelligence against cybercrime, conflict solving and mediation with young people
Procedia PDF Downloads 12836 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
The key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that might compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study addresses these limitations by optimizing defect characterization in the liquid detergent manufacturing process using machine learning algorithms. Performance testing of various machine learning models (Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks) was carried out on the detection and classification of defects such as incorrect viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. The study draws on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, further including information from real-time sensor data, imaging technologies, and historic production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs achieved 98% and 96% accuracy in detecting packaging anomalies and bottle filling inconsistencies, respectively, after fine-tuning with real-time imaging data, which reduced false positives by about 30%. The optimized SVM model for formulation defects achieved 94% accuracy in detecting viscosity and color variations. These performance metrics correspond to a large leap in defect detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, model optimization speeds up defect characterization, bringing detection time below 15 seconds with real-time data processing, down from an average of 3 minutes with manual inspection. This time saving is combined with a 25% reduction in production downtime thanks to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine learning-driven monitoring drives predictive maintenance and corrective measures, yielding a 20% improvement in overall production efficiency. Therefore, optimizing machine learning algorithms for defect characterization offers liquid detergent companies scalability and efficiency, improving operational performance and raising product quality. In general, this method could be applied across the fast-moving consumer goods sector, leading to improved quality control processes.Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
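A hedged sketch of the SVM branch of such a pipeline, including hyperparameter tuning with cross-validation, is given below; the feature matrix is simulated, and the feature meanings, grid values, and defect labels are assumptions rather than the study's actual setup.

```python
# Illustrative SVM defect-classification pipeline with grid-searched hyperparameters.
# Features and labels are simulated placeholders for process measurements.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))               # e.g. viscosity, colour channels, fill level, pH
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0.8).astype(int)  # 1 = defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 0.01]},
    cv=5, n_jobs=-1,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("held-out accuracy: %.3f" % grid.score(X_te, y_te))
```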
Procedia PDF Downloads 1835 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs
Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu
Abstract:
This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' and integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement of the 'xkl' software. This integration incorporates re-assigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, our proposed model, integrating combined CNNs and RNNs, demonstrates unprecedented precision and robustness in landmark detection. The augmentation of re-assigned spectrogram fusion within the 'xkl' software signifies a meticulous advancement, particularly enhancing precision related to vowel formant estimation. This augmentation catalyzes unparalleled accuracy in landmark detection, resulting in a substantial performance leap compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the distinctive feature-based speech recognition systems domain. In the realm of deep learning, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings, self-attention mechanisms, and positional embeddings. This design allows the model to excel at capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, our advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on a database (LaMIT) of speech recorded in a silent room by four native Italian speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The synergistic integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding not only signifies a significant advancement in Italian speech vowel landmark detection but also positions the proposed model as a leader in the field. The model offers distinct advantages, including unparalleled accuracy, adaptability, and sophistication, marking a milestone in the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of cutting-edge techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network
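For readers unfamiliar with the combined CNN+RNN idea, the following schematic PyTorch sketch shows one plausible per-frame landmark classifier operating on spectrogram frames; layer sizes, the number of landmark classes, and the input shape are illustrative guesses, and the attention and positional-embedding components described above are omitted for brevity. It is not the architecture used with 'xkl'.

```python
# Schematic combined CNN + RNN landmark detector over spectrogram frames.
# All dimensions and the number of landmark classes are assumed for illustration.
import torch
import torch.nn as nn

class CNNRNNLandmarkDetector(nn.Module):
    def __init__(self, n_freq_bins=128, n_classes=5):
        super().__init__()
        # CNN over the frequency axis of each frame extracts local spectral features
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        # bidirectional RNN models dependencies across time (frames)
        self.rnn = nn.LSTM(32 * 16, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)   # per-frame landmark scores

    def forward(self, spec):                       # spec: (batch, time, freq_bins)
        b, t, f = spec.shape
        x = self.cnn(spec.reshape(b * t, 1, f))    # run the CNN frame by frame
        x = x.reshape(b, t, -1)
        x, _ = self.rnn(x)
        return self.head(x)                        # (batch, time, n_classes)

model = CNNRNNLandmarkDetector()
dummy = torch.randn(2, 100, 128)                   # 2 utterances, 100 frames, 128 bins
print(model(dummy).shape)                          # torch.Size([2, 100, 5])
```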
Procedia PDF Downloads 6334 Structural Behavior of Subsoil Depending on Constitutive Model in Calculation Model of Pavement Structure-Subsoil System
Authors: M. Kadela
Abstract:
The load caused by traffic movement should be transferred through the road construction to the subgrade in a harmless way, as follows: onto the stiff upper layers of the structure (e.g. the abrading and binding asphalt layers), then through the layers of the principal and secondary substructure, and onto the subsoil, directly or through an improved subsoil layer. A reliable description of the interaction proceeding in the system “road construction – subsoil” should in such a case be one of the basic requirements for assessing the magnitude of the internal forces of the structure and its durability. Analyses of road constructions are based on elements of mechanics, which allow computational models to be created, and on results of the experiments included in the criteria of fatigue life analyses. The above approach is a fundamental feature of commonly used mechanistic methods. They allow arbitrarily complex numerical computational models to be used in the evaluations of the fatigue life of structures. Considering the work of the system “road construction – subsoil”, it is commonly accepted that, as a result of repetitive loads on the subsoil under the pavement, a relatively small deformation grows in the initial phase; this increase then disappears, and the deformation becomes completely reversible. The reliability of the calculation model is tied to the appropriate use (for a given type of analysis) of constitutive relationships. Phenomena occurring in the initial stage of the system “road construction – subsoil” are unfortunately difficult to interpret in the modeling process. The classic interpretation of the behavior of the material in the elastic-plastic model (e-p) is that the elastic phase of the work (e) passes to the (e-p) phase with increasing load (or with growth of deformation in the damaging structure). The paper presents the essence of the calibration process of the cooperating subsystems in the calculation model of the system “road construction – subsoil” created for the mechanistic analysis. The calibration process was directed at showing the impact of the applied constitutive models on the deformation and stress response. The proper comparative base for assessing the reliability of the created models should be, however, the actual, monitored system “road construction – subsoil”. The paper also presents the behavior of subsoil under cyclic load transmitted by the pavement layers. The response of the subsoil to cyclic load is recorded in situ by an observation system (sensors) installed on a testing ground prepared for this purpose, forming part of a test road near Katowice, Poland. A different behavior of the homogeneous subsoil under the pavement is observed in different seasons of the year, as the pavement construction works as a flexible structure in summer and as a rigid plate in winter. 
Although the observed character of the subsoil response is the same regardless of the applied load and area values, this response can be divided into a zone of indirect action of the applied load, extending to a depth of 1.0 m under the pavement, and a zone of small strain, extending to about 2.0 m. This work was supported by the on-going research project “Stabilization of weak soil by application of layer of foamed concrete used in contact with subsoil” (LIDER/022/537/L-4/NCBR/2013), financed by The National Centre for Research and Development within the LIDER Programme.Keywords: road structure, constitutive model, calculation model, pavement, soil, FEA, response of soil, monitored system
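As a minimal illustration of the elastic-plastic (e-p) constitutive behaviour discussed above, the sketch below applies a one-dimensional strain-driven return-mapping scheme with perfect plasticity; the modulus and yield stress are generic values, not calibrated subsoil parameters from the test road.

```python
# One-dimensional elastic-perfectly-plastic response via strain-driven return mapping.
# Material parameters are generic illustration values, not calibrated subsoil data.
E = 50e6          # Young's modulus (Pa), assumed
sigma_y = 100e3   # yield stress (Pa), assumed

eps_p = 0.0       # accumulated plastic strain
history = []      # (strain, stress) pairs, e.g. for plotting a hysteresis curve
# total strain path: load beyond yield, then unload
path = [i * 1e-4 for i in range(0, 31)] + [30e-4 - i * 1e-4 for i in range(0, 16)]

for eps in path:
    sigma_trial = E * (eps - eps_p)            # elastic predictor
    if abs(sigma_trial) > sigma_y:             # plastic corrector (return mapping)
        d_gamma = (abs(sigma_trial) - sigma_y) / E
        eps_p += d_gamma * (1 if sigma_trial > 0 else -1)
        sigma = sigma_y * (1 if sigma_trial > 0 else -1)
    else:
        sigma = sigma_trial
    history.append((eps, sigma))

# on unloading the response is elastic again, leaving a residual plastic strain
print(f"residual plastic strain: {eps_p:.2e}")
```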
Procedia PDF Downloads 35533 Septic Pulmonary Emboli as a Complication of Peripheral Venous Cannula Insertion
Authors: Ankita Baidya, Vanishri Ganakumar, Ranveer S. Jadon, Piyush Ranjan, Rita Sood
Abstract:
Septic embolism can have varied presentations and clinical considerations. Infected central venous catheters are commonly associated with septic emboli, but peripheral vascular catheters are rarely implicated. We describe a rare case of septic pulmonary emboli related to infected peripheral venous cannulation caused by an unusual etiological agent. A young male presented with complaints of fever, productive cough, sudden-onset shortness of breath, and cellulitis in both upper limbs. He had recently been hospitalised for dengue fever and administered intravenous fluids through a peripheral venous line. The patient was febrile, tachypneic, and in respiratory distress; there were multiple pus-filled bullae on the left hand, along with swelling and erythema involving the right forearm that started at the site of cannulation. Chest examination showed active accessory muscles of respiration, stony dull percussion at the base of the right lung, and decreased breath sounds in the right infrascapular, infra-axillary, and mammary areas. Examination of the other systems was within normal limits. Chest X-ray revealed bilateral multiple patchy heterogeneous peripheral opacities and infiltrates with right-sided pleural effusion. Contrast-enhanced computed tomography (CECT) of the chest showed the feeding vessel sign, confirming the diagnosis of septic emboli. Venous Doppler and 2D echocardiogram were normal. Laboratory findings showed marked leucocytosis (22,000/mm³). Pus aspirate, blood sample, and sputum sample were sent for microbiological testing. The patient was started empirically on ceftriaxone, vancomycin, and clindamycin. The pus culture and sputum culture showed Klebsiella pneumoniae sensitive to cefoperazone-sulbactam, piperacillin-tazobactam, meropenem, and amikacin. The antibiotics were modified according to the antimicrobial sensitivity profile, to cefoperazone-sulbactam. Bronchoalveolar lavage (BAL) was done and sent for microbiological investigations. BAL culture showed Klebsiella pneumoniae with the same antimicrobial resistance profile. On day 6 of cefoperazone-sulbactam, he became afebrile. The skin lesions improved significantly. He was administered 2 weeks of cefoperazone-sulbactam and discharged on oral faropenem for 4 weeks. At the time of discharge, TLC was 11,200/mm³ with marked radiological resolution of infection and healed skin lesions. He was kept on regular follow-up. Chest X-ray and skin lesions showed complete resolution after 8 weeks. To date, only a couple of case reports of septic emboli from a peripheral intravenous line have been published in the English literature. This case highlights that a simple procedure of peripheral intravenous cannulation can lead to the catastrophic complications of septic pulmonary emboli and widespread cellulitis if not done with proper care and precautions. Also, the usual pathogens in such clinical settings are gram-positive bacteria, but with a history of recent hospitalization, empirical therapy should also cover drug-resistant gram-negative microorganisms. It also emphasises the importance of appropriate healthcare practices during all procedures.Keywords: antibiotics, cannula, Klebsiella pneumoniae, septic emboli
Procedia PDF Downloads 16032 Familiarity with Intercultural Conflicts and Global Work Performance: Testing a Theory of Recognition Primed Decision-Making
Authors: Thomas Rockstuhl, Kok Yee Ng, Guido Gianasso, Soon Ang
Abstract:
Two meta-analyses show that intercultural experience is not related to intercultural adaptation or performance in international assignments. These findings have prompted calls for a deeper grounding of research on international experience in the phenomenon of global work. Two issues, in particular, may limit current understanding of the relationship between international experience and global work performance. First, intercultural experience is too broad a construct and may not sufficiently capture the essence of global work, which in large part involves sensemaking and managing intercultural conflicts. Second, the psychological mechanisms through which intercultural experience affects performance remain under-explored, resulting in a poor understanding of how experience is translated into learning and performance outcomes. Drawing on recognition-primed decision-making (RPD) research, the current study advances a cognitive processing model to highlight the importance of intercultural conflict familiarity. Compared to intercultural experience, intercultural conflict familiarity is a more targeted construct that captures individuals’ previous exposure to dealing with intercultural conflicts. Drawing on RPD theory, we argue that individuals’ intercultural conflict familiarity enhances their ability to make accurate judgments and generate effective responses when intercultural conflicts arise. In turn, the ability to make accurate situation judgments and effective situation responses is an important predictor of global work performance. A relocation program within a multinational enterprise provided the context to test these hypotheses using a time-lagged, multi-source field study. Participants were 165 employees (46% female, with an average of 5 years of global work experience) from 42 countries who relocated from country offices to regional offices as part of a global restructuring program. Within the first two weeks of transfer to the regional office, employees completed measures of their familiarity with intercultural conflicts, cultural intelligence, cognitive ability, and demographic information. They also completed an intercultural situational judgment test (iSJT) to assess their situation judgment and situation response. The iSJT comprised four validated multimedia vignettes of challenging intercultural work conflicts and prompted employees to provide protocols of their situation judgment and situation response. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded the quality of each employee’s situation judgment and situation response. Three months later, supervisors rated employees’ global work performance. Results using multilevel modeling (vignettes nested within employees) support the hypotheses that greater familiarity with intercultural conflicts is positively associated with better situation judgment, and that situation judgment mediates the effect of intercultural familiarity on situation response quality. Also, aggregated situation judgment and situation response quality both predicted supervisor-rated global work performance. First, our findings highlight the theoretically important but under-explored role of familiarity with intercultural conflicts, marking a shift in attention away from the general nature of international experience assessed in terms of the number and length of overseas assignments. 
Second, our cognitive approach premised on RPD theory offers a new theoretical lens through which to understand the psychological mechanisms by which intercultural conflict familiarity affects global work performance. Third, and importantly, our study contributes to the global talent identification literature by demonstrating that the cognitive processes engaged in resolving intercultural conflicts predict actual performance in the global workplace.Keywords: intercultural conflict familiarity, job performance, judgment and decision making, situational judgment test
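Purely as an illustration of the multilevel structure described above (vignette-level observations nested within employees), the sketch below fits a random-intercept model to simulated data; the variable names, sample layout, and effect size are placeholders, not the study's measurements.

```python
# Random-intercept (multilevel) model on simulated vignette ratings nested within employees.
# Variables and effect sizes are invented stand-ins for the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_emp, n_vig = 165, 4
emp = np.repeat(np.arange(n_emp), n_vig)
familiarity = np.repeat(rng.normal(size=n_emp), n_vig)            # employee-level predictor
judgment = (0.4 * familiarity                                     # assumed positive effect
            + np.repeat(rng.normal(scale=0.5, size=n_emp), n_vig) # employee random intercept
            + rng.normal(size=n_emp * n_vig))                     # vignette-level noise

df = pd.DataFrame({"employee": emp, "familiarity": familiarity, "judgment": judgment})
result = smf.mixedlm("judgment ~ familiarity", df, groups=df["employee"]).fit()
print(result.params["familiarity"])   # recovers the positive familiarity effect
```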
Procedia PDF Downloads 17931 Production, Characterisation, and in vitro Degradation and Biocompatibility of a Solvent-Free Polylactic-Acid/Hydroxyapatite Composite for 3D-Printed Maxillofacial Bone-Regeneration Implants
Authors: Carlos Amnael Orozco-Diaz, Robert David Moorehead, Gwendolen Reilly, Fiona Gilchrist, Cheryl Ann Miller
Abstract:
The current gold standard for maxillofacial reconstruction surgery (MRS) utilizes auto-grafted cancellous bone as a filler. This study aimed to develop a polylactic-acid/hydroxyapatite (PLA-HA) composite suitable for fused-deposition 3D printing. Functionalization of the polymer through the addition of HA was directed at promoting bone-regeneration properties so that the material could rival the performance of cancellous bone grafts in terms of bone-lesion repair. This kind of composite enables the production of MRS implants based on 3D reconstructions from imaging studies, namely computed tomography, for anatomically correct fitting. The present study encompassed in-vitro degradation and in-vitro biocompatibility profiling for 3D-printed PLA and PLA-HA composites. PLA filament (Verbatim Co.) and Captal S hydroxyapatite micro-scale HA powder (Plasma Biotal Ltd) were used to produce PLA-HA composites at 5, 10, and 20% HA concentration by weight. These were extruded into 3D-printing filament, processed in a BFB-3000 3D printer (3D Systems Co.) into tensile specimens, and mechanically tested as per ASTM D638-03. Furthermore, tensile specimens were subjected to accelerated degradation in phosphate-buffered saline solution at 70°C for 23 days, as per ISO-10993-13-2010. This included monitoring of mass loss (through dry-weighing), crystallinity (through thermogravimetric analysis/differential thermal analysis), molecular weight (through gel-permeation chromatography), and tensile strength. In-vitro biocompatibility analysis included cell viability and extracellular matrix deposition assays, which were performed both on flat surfaces and on 3D constructs, both produced through 3D printing. Discs of 1 cm in diameter and cubic 3D meshes of 1 cm³ were 3D printed in PLA and PLA-HA composites (n = 6). The samples were seeded with 5000 MG-63 osteosarcoma-like cells, with cell viability monitored over 21 days via resazurin reduction assays. As evidence of osteogenicity, collagen and calcium deposition were indirectly estimated through Sirius Red staining and Alizarin Red staining, respectively. Results showed that 3D-printed PLA loses structural integrity as early as the first day of accelerated degradation, significantly faster than the literature suggests. This was reflected in the loss of tensile strength down to untestable brittleness. During degradation, mass loss, molecular weight, and crystallinity behaved similarly to results found in similar studies for PLA. All composite versions and pure PLA were found to perform equivalently to tissue-culture plastic (TCP) in supporting the seeded cell population. Significant differences (p = 0.05) were found in collagen deposition for higher HA concentrations, with composite samples performing better than pure PLA and TCP. Additionally, per-cell calcium deposition was significantly lower on the 3D meshes than on discs of the same material (p = 0.05). These results support the idea that 3D-printable PLA-HA composites are a viable resorbable material for artificial grafts for bone regeneration. Degradation data suggest that 3D printing of these materials, as opposed to other manufacturing methods, might result in faster resorption than currently used PLA implants.Keywords: bone regeneration implants, 3D-printing, in vitro testing, biocompatibility, polymer degradation, polymer-ceramic composites
Procedia PDF Downloads 155