Search results for: contact angle measurements
1023 Role of SiOx Interlayer on Lead Oxide Electrodeposited on Stainless Steel for Promoting Electrochemical Treatment of Wastewater Containing Textile Dye
Authors: Hanene Akrout, Ines Elaissaoui, Sabrina Grassini, Daniele Fulginiti, Latifa Bousselmi
Abstract:
The main objective of this work is to investigate the depollution efficiency of a PbO₂ layer deposited onto a stainless steel (SS) substrate with SiOx as an interlayer. The elaborated electrode was used as an anode for the anodic oxidation of wastewater containing Amaranth dye as a model recalcitrant organic pollutant. The SiOx interlayer was deposited onto the SS substrate by Plasma Enhanced Chemical Vapor Deposition (PECVD) in a plasma fed with argon, oxygen, and tetraethoxysilane (TEOS, the Si precursor) in different ratios. The PbO₂ layer was produced by pulsed electrodeposition on SS/SiOx. The morphology of the different surfaces was examined with a Field Emission Scanning Electron Microscope (FESEM), and the composition of the lead oxide layer was investigated by X-Ray Diffractometry (XRD). The results showed that the SiOx interlayer with the higher oxygen content better promoted the nucleation of the β-PbO₂ form. Electrochemical Impedance Spectroscopy (EIS) measurements undertaken on the different interfaces (at optimized conditions) revealed a decrease of Rfilm and an increase of CPEfilm for the SiOx interlayer characterized by a more inorganic nature and deposited in a plasma fed with higher O2-to-TEOS ratios. Quantitative determinations of the Amaranth dye degradation were performed in terms of colour and COD removal, reaching 95% and 80% removal, respectively, at pH = 2 in 300 min. The results proved the improvement in the degradation of wastewater containing Amaranth dye. During the electrolysis, the Amaranth dye solution was sampled at 30 min intervals and analyzed by High-Performance Liquid Chromatography (HPLC). The gradual degradation of the Amaranth dye was confirmed by the decrease in UV absorption using the SS/SiOx(20:20:1)/PbO₂ anode; the reaction exhibited apparent first-order kinetics over an electrolysis time of 5 hours, with an initial rate constant of about 0.02 min⁻¹.
Keywords: electrochemical treatment, PbO₂ anodes, COD removal, plasma
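As an illustration of how such an apparent first-order rate constant can be extracted from the HPLC or UV-absorbance time series, a minimal sketch follows (the absorbance values are hypothetical, not data from the abstract):

```python
import numpy as np

# Hypothetical absorbance readings of the Amaranth dye sampled every 30 min
t = np.array([0, 30, 60, 90, 120, 150, 180])               # min
A = np.array([1.00, 0.55, 0.30, 0.17, 0.09, 0.05, 0.03])   # arbitrary units

# Apparent first-order kinetics: ln(A0/A) = k_app * t, so k_app is the slope
k_app, _ = np.polyfit(t, np.log(A[0] / A), 1)
print(f"apparent rate constant k_app = {k_app:.3f} min^-1")
```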
1022 Ambient Electrospray Deposition: An Efficient Technique to Immobilize Laccase on Cheap Electrodes With Unprecedented Reuse and Storage Performances
Authors: Mattea Carmen Castrovilli, Antonella Cartoni
Abstract:
Electrospray ionisation (ESI), a well-established technique widely used to produce ion beams of biomolecules in mass spectrometry (ESI-MS), can be used for ambient soft landing of enzymes on a specific substrate. In this work, we show how the ambient electrospray deposition (ESD) technique can be successfully exploited for manufacturing a promising, green-friendly electrochemical amperometric laccase-based biosensor with unprecedented reuse and storage performance. These biosensors have been manufactured by spraying a laccase solution of 2 μg/μL in 20% methanol on a commercial carbon screen-printed electrode (C-SPE) using a custom ESD set-up. The laccase-based ESD biosensor has been tested against catechol compounds in the linear range 2-100 μM, with a limit of detection of 1.7 μM, without interference from cadmium, chromium, arsenic, and zinc and without any memory effects, but showing a matrix effect in lake and well water. The ESD biosensor shows enhanced performance compared to biosensors fabricated with other immobilization methods, such as drop-casting. Indeed, it retains 100% activity after up to two months of storage at ambient conditions without any special care, and it shows working stability of up to 63 measurements on a freshly prepared electrode and 20 measurements on a one-year-old electrode subjected to redeposition, together with full tolerance to reuse of the same electrode on subsequent days. The ESD method is a one-step, environmentally friendly method that allows the deposition of the bio-recognition layer without using any additional chemicals. The promising results in terms of storage and working stability obtained also with the more fragile lactate oxidase enzyme suggest that these improvements should be attributed to the ESD technique rather than to the bioreceptor, highlighting how ESD could be useful in reducing pollution from disposable devices. Acknowledgment: The understanding at the molecular level of this promising biosensor by using different spectroscopies, microscopies, and analytical techniques is the subject of our PRIN 2022 project ESILARANTE.
Keywords: reuse, storage performance, immobilization, electrospray deposition, biosensor, laccase, catechol detection, green chemistry
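For context, a limit of detection such as the one quoted above is typically estimated from the calibration curve in the linear range; a minimal sketch using the common 3.3·σ/slope convention (with hypothetical calibration data, not the authors' measurements) is:

```python
import numpy as np

# Hypothetical calibration of the laccase biosensor against catechol (linear range 2-100 uM)
conc = np.array([2, 10, 25, 50, 75, 100])                   # uM
current = np.array([0.11, 0.52, 1.30, 2.58, 3.90, 5.15])    # uA, illustrative values

slope, intercept = np.polyfit(conc, current, 1)
residuals = current - (slope * conc + intercept)
sigma = residuals.std(ddof=2)        # standard error of the regression

lod = 3.3 * sigma / slope            # common ICH-style convention
print(f"sensitivity = {slope:.3f} uA/uM, LOD ≈ {lod:.2f} uM")
```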
1021 The Basketball Show in the North of France: When the NBA Globalized Culture Meets the Local Carnival Culture
Authors: David Sudre
Abstract:
Today, the National Basketball Association (NBA) is the cultural model of reference for most stakeholders of the French basketball community (players, coaches, team and league managers). In addition to the strong impact it has on how this sport is played and perceived, the NBA also influences the ways professional basketball shows are organized in France (within the Jeep Elite league). The objective of this research is to see how and to what extent the NBA show, as a globalized cultural product, disrupts Jeep Elite's professional basketball cultural codes in the organization of its shows. The article questions the intercultural phenomenon at stake in sports cultures in France through the prism of the basketball match. This angle sheds some light on the underlying relationships between local and global elements. The results of this research come from a one-year survey conducted in a small town in northern France, Le Portel, where the Etoile Sportive Saint Michel (ESSM), a Jeep Elite club, operates. An ethnographic approach was favored. It entailed many participant observations and semi-directive interviews with supporters of ESSM Le Portel. Through this ethnographic work with the team's fan groups (before, during, and after the games), it was possible for the researchers to better understand the cultural and identity issues that play out in the "Cauldron," the basketball arena of ESSM Le Portel. The results demonstrate, at first glance, that many basketball events organized in France are copied from the American model. It seems difficult not to try to imitate the American reference that the NBA represents, whether at the French All-Star Game or a Jeep Elite game at Le Portel. In this case, an acculturation process seems to occur, not only in the way people play but also in the creation of the show (cheerleaders, animations, etc.). However, this globalized American basketball culture, although re-appropriated, is also being modified by the members of ESSM Le Portel within their locality. Indeed, they juggle between their culture of origin and their culture of reference to build their basketball show within their sociocultural environment. In this way, Le Portel managers and supporters introduce elements that are characteristic of their local culture into the show, such as carnival customs and celebrations, two ingredients that fully contribute to the creation of their identity. Ultimately, in this context of "glocalization," this research ascertains, on the one hand, that the identity of French basketball becomes harder to outline, and, on the other hand, that the "Cauldron" turns out to be a place to preserve (fantasized) local identities, or even a place of (unconscious) resistance to the dominant model of American basketball culture.
Keywords: basketball, carnival, culture, globalization, identity, show, sport, supporters
1020 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout
Authors: Diamant Irene, Dar Tamar
Abstract:
Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat to time resources or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output demands, and the management of different duties encompassing work-home conflicts, many individuals experience 'time pressure'. Time pressure is characterized as the perception of a lack of available time in relation to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute in coping with life. As such, time pressure is associated in the literature with the general stress experience and can therefore be a direct, contributory burnout factor. The present study examines the relation of chronic time pressure – the feeling of time shortage and of being rushed – with another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition, capturing the extent to which people subjectively remember the past, live the present, and/or anticipate the future. Based on Hobfoll's Conservation of Resources Theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo's typology of five dimensions – Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future – would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with 'Past Negative' or 'Present Fatalistic' time perspectives would experience more burnout, with chronic time pressure being a moderator variable. Conversely, individuals with a 'Present Hedonistic' perspective – with little concern for the future consequences of actions – would experience less chronic time pressure and less burnout. Another angle of temporal experience examined in this study is the difference between the actual distribution of time (as in a typical day) and the desired distribution of time (how time would optimally be distributed during a day). It was hypothesized that there would be a positive correlation between the gap between these time distributions and chronic time pressure and burnout. Data were collected through an online self-reporting survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling methods from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to the development of practical methods of burnout prevention.
Keywords: conservation of resources, burnout, time pressure, time perspective
1019 Surface Tension and Bulk Density of Ammonium Nitrate Solutions: A Molecular Dynamics Study
Authors: Sara Mosallanejad, Bogdan Z. Dlugogorski, Jeff Gore, Mohammednoor Altarawneh
Abstract:
Ammonium nitrate (NH₄NO₃, AN) is commonly used as the main component of AN emulsion and fuel oil (ANFO) explosives, which are used extensively in civilian and mining operations for underground development and tunneling applications. The emulsion formulation and the wettability of AN prills, which affect the physical stability and detonation of ANFO, depend strongly on the surface tension, density, and viscosity of the liquid used. Therefore, for engineering applications of this material, the determination of the density and surface tension of concentrated aqueous solutions of AN is essential. The molecular dynamics (MD) simulation method has been used to investigate the density and the surface tension of highly concentrated ammonium nitrate solutions, up to the solubility limit in water. The simulations were carried out with non-polarisable models for water and ions, and the electronic continuum correction (ECC) model uses a scaling of the ionic charges to include polarisation implicitly in the non-polarisable model. The calculated densities and surface tensions of the solutions have been compared to available experimental values. Our MD simulations show that the non-polarisable model with full-charge ions overestimates the experimental results, while the reduced-charge model for the ions fits the experimental data very well. With the non-polarisable force fields, ions in the solutions are repelled from the interface. However, when the charges of the ions in the original model are scaled in line with the scaling factor of the ECC model, the ions create a double ionic layer near the interface, with anions migrating toward the interface while cations stay in the bulk of the solution. Similar ion arrangements near the interface were observed when polarisable models were used in the simulations. In conclusion, applying the ECC model to the non-polarisable force field yields the density and surface tension of the AN solutions with high accuracy in comparison to the experimental measurements.
Keywords: ammonium nitrate, electronic continuum correction, non-polarisable force field, surface tension
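The abstract does not spell out how the surface tension is obtained from the MD trajectories; a common mechanical (pressure-tensor) route for a slab geometry with two liquid-vapor interfaces, shown here as a hedged sketch with placeholder data, is:

```python
import numpy as np

# Placeholder time series of the diagonal pressure-tensor components (bar)
# from an MD run of a solution slab of height Lz (nm); real values would come
# from the simulation output.
Pxx, Pyy, Pzz = np.random.normal(0.0, 50.0, (3, 10000))
Lz = 12.0  # nm, box length normal to the interfaces

# Mechanical route; the factor 1/2 accounts for the slab having two interfaces:
# gamma = (Lz / 2) * ( <Pzz> - (<Pxx> + <Pyy>) / 2 )
gamma_bar_nm = 0.5 * Lz * (Pzz.mean() - 0.5 * (Pxx.mean() + Pyy.mean()))

# Unit conversion: 1 bar * nm = 0.1 mN/m
print(f"surface tension ≈ {gamma_bar_nm * 0.1:.1f} mN/m")
```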
1018 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept
Authors: Ahmed El Naggar, Homyan Saleh
Abstract:
Dams are crucial for flood control, water supply, and the creation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, providing the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety problems. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a tiered spillway was undertaken at the laboratory scale (fixed camera installation) and at the prototype scale (drone footage). The drone videos were generated using data from citizen science. The analyses permitted the measurement of the free-surface aeration inception point, air-water surface velocities, fluctuations, and residual energy at the chute's downstream end from a remote site. The prototype observations offered full-scale proof of concept, while laboratory results were efficiently confirmed against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways. It highlights how citizen science data may enable academics to better understand real-world air-water flow dynamics and offers a framework for a small collection of long-missing prototype data.
Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy
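The surface-velocity estimation mentioned above is typically done with an optical-flow algorithm applied to consecutive video frames; a minimal hedged sketch using OpenCV's Farneback method (the frame file names, frame rate, and pixel-to-metre scale are assumptions for illustration) is:

```python
import cv2
import numpy as np

# Two consecutive grayscale frames from the (hypothetical) drone footage
prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow (Farneback): per-pixel displacement between the two frames.
# Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

fps = 30.0                # video frame rate (assumed)
metres_per_pixel = 0.01   # ground sampling distance (assumed)

speed = np.linalg.norm(flow, axis=2) * fps * metres_per_pixel  # m/s, per pixel
print(f"mean surface velocity ≈ {speed.mean():.2f} m/s")
```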
1017 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification
Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens
Abstract:
Graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model of aneurysmal subarachnoid hemorrhage (aSAH), a condition that can lead to significant morbidity and mortality and whose outcome has traditionally been difficult to predict. This study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients, and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurologic Surgeons scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 days (IQR = 8.5) after admission. The best performing model (RF) combining clinical and DTI graph features had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials. The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH.
Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage
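A hedged sketch of the kind of combined clinical-plus-graph-feature classification described above (the feature matrices and labels are synthetic placeholders, not the study data) could look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n = 56
clinical = rng.normal(size=(n, 5))    # e.g. demographics, WFNS grade (placeholder)
graph = rng.normal(size=(n, 20))      # e.g. node strength, betweenness, degree (placeholder)
X = np.hstack([clinical, graph])
y = rng.integers(0, 2, size=n)        # dichotomized mRS: 0 = favorable, 1 = unfavorable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, proba))
print("AUPRC:", average_precision_score(y_te, proba))
```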
1016 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate, and identify any confined nuclear test as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During its atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70×70×40 mm³ crystals). The expected Minimal Detectable Concentration (MDC) for each radioxenon isotope is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network, using the IEEE 1588 Precision Time Protocol. We would like to present this system from its simulation to the laboratory tests.
Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
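To illustrate the beta/gamma coincidence principle mentioned above, the sketch below matches time-stamped beta and gamma events that fall within a coincidence window; the event times and window width are hypothetical, not parameters of the CEA prototype:

```python
import numpy as np

rng = np.random.default_rng(1)
beta_t = np.sort(rng.uniform(0.0, 1.0, 2000))   # beta event times (s), placeholder
gamma_t = np.sort(rng.uniform(0.0, 1.0, 1500))  # gamma event times (s), placeholder
window = 1e-6                                   # coincidence window (s), assumed

# For each beta event, find the nearest gamma event and test the time difference
idx = np.searchsorted(gamma_t, beta_t)
left = np.abs(gamma_t[np.clip(idx - 1, 0, len(gamma_t) - 1)] - beta_t)
right = np.abs(gamma_t[np.clip(idx, 0, len(gamma_t) - 1)] - beta_t)
coincident = np.minimum(left, right) <= window

print(f"{coincident.sum()} beta/gamma coincidences out of {len(beta_t)} beta events")
```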
1015 Stable Isotope Ratios Data for Tracing the Origin of Greek Olive Oils and Table Olives
Authors: Efthimios Kokkotos, Kostakis Marios, Beis Alexandros, Angelos Patakas, Antonios Avgeris, Vassilios Triantafyllidis
Abstract:
H, C, and O stable isotope ratios were measured in different olive oils and table olives originating from different regions of Greece. In particular, the stable isotope ratios of olive oils produced in the Lakonia region (Peloponnese, South Greece) from different varieties, i.e., cvs 'Athinolia' and 'koroneiki', were determined. Additionally, stable isotope ratios were also measured in different table olives (cvs 'koroneiki' and 'kalamon') produced in the Messinia region. The aim of this study was to provide sufficient isotope ratio data for each variety and region of origin that could be used in discriminative studies of olive oils and table olives produced from different varieties in other regions. In total, 97 samples of olive oil (cvs 'Athinolia' and 'koroneiki') and 67 samples of table olives (cvs 'kalamon' and 'koroneiki') collected during two consecutive sampling periods (2021-2022 and 2022-2023) were measured. The C, H, and O isotope ratios were measured using Isotope Ratio Mass Spectrometry (IRMS), and the results obtained were analyzed using chemometric techniques. The isotope ratio measurements were expressed in permille (‰) using the delta (δ) notation (δ = Rsample/Rstandard − 1, where Rsample and Rstandard represent the isotope ratios of the sample and the standard). Results indicate that the stable isotope ratios of C, H, and O were -28.5 ± 0.45‰, -142.83 ± 2.82‰, and 25.86 ± 0.56‰ for olive oils produced in the Lakonia region from the 'Athinolia' variety and -29.78 ± 0.71‰, -143.62 ± 1.4‰, and 26.32 ± 0.55‰ for the 'koroneiki' variety, respectively. The C, H, and O values for table olives originating from the Messinia region were -28.58 ± 0.63‰, -138.09 ± 3.27‰, and 25.45 ± 0.62‰ for 'kalamon' olives and -29.41 ± 0.59‰, -137.67 ± 1.15‰, and 24.37 ± 0.6‰ for 'koroneiki' olives, respectively. Acknowledgments: This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH—CREATE—INNOVATE (Project code: T2EDK-02637; MIS 5075094, Title: 'Innovative Methodological Tools for Traceability, Certification and Authenticity Assessment of Olive Oil and Olives').
Keywords: olive oil, table olives, isotope ratio, IRMS, geographical origin
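As a small worked illustration of the delta notation defined above (the sample ratio below is hypothetical and chosen only to show the arithmetic):

```python
# delta = (R_sample / R_standard - 1), reported in permille (‰)
R_sample = 0.0109550    # hypothetical 13C/12C ratio of an olive oil sample
R_standard = 0.0112372  # 13C/12C of the VPDB reference standard (commonly quoted value)

delta_permille = (R_sample / R_standard - 1) * 1000
print(f"delta 13C = {delta_permille:.1f} ‰")   # ≈ -25.1 ‰
```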
1014 Structural Properties of Surface Modified PVA: Zn97Pr3O Polymer Nanocomposite Free Standing Films
Authors: Pandiyarajan Thangaraj, Mangalaraja Ramalinga Viswanathan, Karthikeyan Balasubramanian, Héctor D. Mansilla, José Ruiz
Abstract:
Rare-earth-ion-doped semiconductor nanostructures have gained much attention due to their novel physical and chemical properties, which lead to potential applications in laser technology as inexpensive luminescent materials. Doping of rare earth ions into the ZnO semiconductor alters its electronic structure and emission properties. Surface modification (polymer covering) is one of the simplest techniques to modify the emission characteristics of host materials. The present work reports the synthesis and structural properties of PVA:Zn97Pr3O polymer nanocomposite free standing films. To prepare Pr3+ doped ZnO nanostructures and PVA:Zn97Pr3O polymer nanocomposite free standing films, the colloidal chemical and solution casting techniques were adopted, respectively. The formation of the PVA:Zn97Pr3O films was confirmed through X-ray diffraction (XRD), absorption, and Fourier transform infrared (FTIR) spectroscopy analyses. XRD measurements confirm that the prepared materials are crystalline with a hexagonal wurtzite structure. The polymer composite film exhibits the diffraction peaks of both PVA and ZnO structures. TEM images reveal that the pure and Pr3+ doped ZnO nanostructures exhibit a sheet-like morphology. Optical absorption spectra show the free excitonic absorption band of ZnO at 370 nm, while the PVA:Zn97Pr3O polymer film shows absorption bands at ~282 and 368 nm; these arise from carbonyl-containing structures connected to the PVA polymeric chains (mainly at the chain ends) and from the free excitonic absorption of the ZnO nanostructures, respectively. The transmission spectrum of the as-prepared film shows 57 to 69% transparency in the visible and near-IR region. FTIR spectral studies confirm the presence of the A1 (TO) and E1 (TO) modes of the Zn-O bond vibration and the formation of the polymer composite materials.
Keywords: rare earth doped ZnO, polymer composites, structural characterization, surface modification
1013 Tsunami Wave Height and Flow Velocity Calculations Based on Density Measurements of Boulders: Case Studies from Anegada and Pakarang Cape
Authors: Zakiul Fuady, Michaela Spiske
Abstract:
Inundation events, such as storms and tsunamis, can leave onshore sedimentary evidence such as sand deposits or large boulders. These deposits store indirect information on the related inundation parameters (e.g., flow velocity, flow depth, wave height). One tool to reveal these parameters is inverse models that use the physical characteristics of the deposits to infer the magnitude of inundation. This study used boulders of the 2004 Indian Ocean Tsunami from Thailand (Pakarang Cape) and from a historical tsunami event that inundated the outer British Virgin Islands (Anegada). For the largest boulder found at Pakarang Cape, with a volume of 26.48 m³, the required tsunami wave height is 0.44 m and the storm wave height is 1.75 m (for a bulk density of 1.74 g/cm³). At Pakarang Cape, the highest required tsunami wave height is 0.45 m and the storm wave height is 1.8 m for transporting a 20.07 m³ boulder. On Anegada, the largest boulder, with a diameter of 2.7 m, is a single coral head (species Diploria sp.) with a bulk density of 1.61 g/cm³, and it requires a minimum tsunami wave height of 0.31 m and a storm wave height of 1.25 m. The highest required tsunami wave height on Anegada is 2.12 m for a boulder with a bulk density of 2.46 g/cm³ (volume 0.0819 m³), and the highest required storm wave height is 5.48 m (volume 0.216 m³) for the same bulk density; the coral type is limestone. Generally, the higher the bulk density, volume, and weight of the boulders, the higher the minimum tsunami and storm wave heights required to initiate transport. A flow velocity of 4.05 m/s is required according to Nott's equation (2003) and 3.57 m/s according to Nandasena et al. (2011) to transport the largest boulder at Pakarang Cape, whereas on Anegada a velocity of 3.41 m/s is required to transport the boulder with a diameter of 2.7 m according to both equations. Thus, boulder equations need to be handled with caution: first, they rely on many assumptions and simplifications; second, the physical boulder parameters, such as density and volume, need to be determined carefully to minimize errors.
Keywords: tsunami wave height, storm wave height, flow velocity, boulders, Anegada, Pakarang Cape
1012 Evaluation of the Energy Performance and Emissions of an Aircraft Engine: J69 Using Fuel Blends of Jet A1 and Biodiesel
Authors: Gabriel Fernando Talero Rojas, Vladimir Silva Leal, Camilo Bayona-Roa, Juan Pava, Mauricio Lopez Gomez
Abstract:
The substitution of conventional aviation fuels with biomass-derived alternative fuels is an emerging field of study in aviation transport, mainly due to its energy consumption, the contribution to global Greenhouse Gas (GHG) emissions, and fossil fuel price fluctuations. Nevertheless, several challenges remain, such as the biofuel production cost and its degradative effect on fuel systems, which alters operating safety. Moreover, experimentation on full-scale aeronautic turbines is expensive and complex, leading most of the research to the testing of small-size turbojets, with a major absence of information regarding the effects on energy performance and emissions. The main purpose of the current study is to present the results of experimentation in a full-scale military turbojet engine J69-T-25A (presented in Fig. 1) with a 640 kW power rating, using blends of Jet A1 with oil palm biodiesel. The main findings are related to the thrust specific fuel consumption (TSFC), the engine global efficiency (η), the air/fuel ratio (AFR), and the volume fractions of O2, CO2, CO, and HC. Two fuels are used in the present study: a commercial Jet A1 and a Colombian palm oil biodiesel. The experimental plan is conducted using biodiesel volume contents (w_BD) from 0% (B0) to 50% (B50). The engine operating regimes are set to Idle, Cruise, and Take-off conditions. The turbojet engine J69 is used by the Colombian Air Force, and it is installed in a testing bench with the instrumentation that corresponds to the technical manual of the engine. The increment of w_BD from 0% to 50% reduces η by nearly 3.3% and the thrust force by 26.6% at the Idle regime. These variations are related to the reduction of the HHV of the fuel blend. The evolved CO and HC tend to be reduced in all the operating conditions when increasing w_BD. Furthermore, a reduction of the atomization angle is presented in Fig. 2, indicating poor atomization in the fuel nozzle injectors when using a higher biodiesel content, as the viscosity of the fuel blend increases. An evolution of cloudiness is also observed during the shutdown procedure, as presented in Fig. 3a, particularly after 20% of biodiesel content in the fuel blend. This promotes the contamination of some components of the combustion chamber of the J69 engine with soot and unburned matter (Fig. 3). Thus, a biodiesel content above 20% is not recommended, in order to avoid a significant decrease of η and the thrust force. A more detailed examination of the mechanical wear of the main components of the engine is advised in further studies.
Keywords: aviation, air to fuel ratio, biodiesel, energy performance, fuel atomization, gas turbine
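For reference, the performance metrics reported above are simple ratios of measured quantities; a hedged sketch of their computation (the flow and thrust numbers are illustrative, not test-bench data) is:

```python
# Thrust specific fuel consumption and air/fuel ratio from bench measurements
m_dot_fuel = 0.12    # kg/s, fuel mass flow (illustrative)
m_dot_air = 4.2      # kg/s, air mass flow (illustrative)
thrust = 4200.0      # N, measured thrust (illustrative)

TSFC = m_dot_fuel / thrust    # kg/(N*s)
AFR = m_dot_air / m_dot_fuel  # dimensionless

print(f"TSFC = {TSFC * 1e6:.1f} mg/(N*s), AFR = {AFR:.1f}")
```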
1011 Impact of Air Flow Structure on Distinct Shape of Differential Pressure Devices
Authors: A. Bertašienė
Abstract:
Energy harvesting from any structure poses a challenge. The different structures of air/wind flows in industrial, environmental, and residential applications call for a detailed investigation of the real flows. Many application fields can hardly be described in detail due to the lack of up-to-date statistical data analysis. In situ measurements require substantial investment; thus, simulation methods are used to carry out structural analysis of the flows. Different configurations of the testing environment give an overview of how important the simple structure of the flow field in a limited area is for the efficiency of system operation and the energy output. Several configurations of modeled working sections of an air flow test facility were implemented in the ANSYS CFD environment to compare, experimentally and numerically, the air flow development stages and forms that affect the efficiency of devices and processes. The effective form and amount of these flows under different geometry cases determine the choice of instruments/devices that measure fluid flow parameters for the effective operation of any system and for defining emission flows. Different fluid flow regimes were examined to show the impact of fluctuations on the development of the whole volume of the flow in a specific environment. The obtained results raise the discussion of how similar these simulated flow fields are to those of real applications. The experimental results show some discrepancies from the simulated ones, due to the models implemented for fluid flow analysis in the initial, not fully developed, region and due to the difficulty of the models in covering transitional regimes. Recommendations are essential for energy harvesting systems in both indoor and outdoor cases. Further investigations will be shifted to experimental analysis of the flow under laboratory conditions using state-of-the-art techniques, such as flow visualization tools, and later on to in situ situations, which are complicated, costly, and time-consuming studies.
Keywords: fluid flow, initial region, tube coefficient, distinct shape
1010 Study of Hypertension at Sohag City: Upper Egypt Experience
Authors: Aly Kassem, Eman Sapet, Eman Abdelbaset, Hosam Mahmoud
Abstract:
Objective: Hypertension is an important public health challenge, being one of the most common diseases affecting humans worldwide. Our aim is to study the clinical characteristics, therapeutic regimens, treatment compliance, and risk factors in a sector of hypertensive patients at Sohag City. Subjects and Methods: A cross-sectional study conducted in Sohag City; it involved 520 patients, males (45.7%) and females (54.3%). Their ages ranged between 35-85 years. BP measurements, BMI, blood glucose, serum creatinine, urine analysis, serum lipids, blood picture, and ECG were done for all the studied patients. Results: Hypertension was more prevalent among non-smokers (72.55%), females (54.3%), educated patients (50.99%), and patients with low SES (54.9%). CAD was present in (51.63%) of patients, while laboratory investigations showed hyperglycaemia in (28.7%), anemia in (18.3%), high serum creatinine level in (8.49%), and proteinuria in (10.45%) of patients. Adequate BP control was achieved in (49.67%); older patients had lower adequacy of BP control in spite of the extensive use of multiple-drug therapy. Most hypertensive patients had more than one coexistent CV risk factor. Aging, being a female (54.3%), DM (32.3%), family history of hypertension (28.7%), family history of CAD (25.4%), and obesity (10%) were the common contributing risk factors. ACE-inhibitors were prescribed in (58.16%) and beta-blockers in (34.64%) of the patients. Monotherapy was prescribed for (41.17%) of the patients. (75.81%) of patients used their drug regimens regularly. Only (49.67%) of patients had their condition under control, and the number of drugs was inversely related to BP control. Conclusion: Hypertensive patients in Sohag City had a profile of high CV risks and poor blood pressure control, particularly in the elderly. A multidisciplinary approach involving routine clinical check-up, follow-up, training of physicians and patients, prescribing simple once-daily regimens, and encouraging lifestyle modifications is recommended.
Keywords: anti hypertensives, hypertension, elderly patients, risk factors, treatment compliance
1009 Numerical Investigation on Anchored Sheet Pile Quay Wall with Separated Relieving Platform
Authors: Mahmoud Roushdy, Mohamed El Naggar, Ahmed Yehia Abdelaziz
Abstract:
Anchored sheet pile walls have been used worldwide as front quay walls for decades. With the increase in vessel drafts and weights, those sheet pile walls need to be upgraded by increasing the depth of the dredging line in front of the wall. A system has recently been used to increase the depth in front of the wall by installing a separated platform supported on a deep foundation (the so-called relieving platform) behind the sheet pile wall. The platform is structurally separated from the front wall. This paper presents a numerical investigation, utilizing finite element analysis, of the behavior of separated relieving platforms installed within existing anchored sheet pile quay walls. The investigation was done in two steps: a verification step followed by a parametric study. In the verification step, the numerical model was verified based on field measurements performed by others. The validated model was extended within the parametric study to a series of models with different backfill soils, separation gap widths, and numbers of pile rows supporting the platform. The results of the numerical investigation show that using stiff clay as backfill soil (neglecting consolidation) gives better performance for the front wall and the first pile row than sandy backfills. The degree of compaction of the sandy backfill slightly increases lateral deformations but reduces the bending moment acting on the pile rows, while the effect is minor on the front wall. In addition, the increase in the separation gap width gradually increases the bending moments on the front wall regardless of the backfill soil type, while this effect is reversed on the pile rows (gradual decrease). Finally, the paper studies the possibility of reducing the number of pile rows along with the separation to take advantage of the positive separation effect on the piles.
Keywords: anchored sheet pile, relieving platform, separation gap, upgrade quay wall
1008 Window Seat: Examining Public Space, Politics, and Social Identity through Urban Public Transportation
Authors: Sabrina Howard
Abstract:
'Window Seat' uses public transportation as an entry point for understanding the relationship between public space, politics, and social identity construction. This project argues that by bringing people of different races, classes, and genders into 'contact' with one another, public transit operates as a site of exposure, as people consciously and unconsciously perform social identity within these spaces. These performances offer a form of freedom that we associate with being in urban spaces while simultaneously rendering certain racialized, gendered, and classed bodies vulnerable to violence. Furthermore, due to its exposing function, public transit operates as a site through which we, as urbanites and scholars, can read social injustice and reflect on the work that is necessary to become a truly democratic society. The major questions guiding this research are: How does using public transit as the entry point provide unique insights into the relationship between social identity, politics, and public space? What ideas do Americans hold about public space, and how might these ideas reflect a liberal yearning for a more democratic society? To address these research questions, 'Window Seat' critically examines ethnographic data collected on public buses and trains in Los Angeles, California, and online news media. It analyzes these sources through literature in socio-cultural psychology, sociology, and political science. It investigates the 'everyday urban hero' narrative, or popular news stories that feature an individual or group of people acting against discriminatory or 'Anti-American' behavior on public buses and trains. 'Window Seat' studies these narratives to assert that by circulating stories of civility in news media, United Statesians construct and maintain ideas of the 'liberal city,' which is characterized by ideals of freedom and democracy. Furthermore, for those involved, these moments create an opportunity to perform the role of the Good Samaritan, an identity that is wrapped up in liberal beliefs in diversity and inclusion. This research expands conversations in urban studies by making a case for the political significance of urban public space. It demonstrates how these sites serve as spaces through which liberal beliefs are circulated and upheld through identity performance.
Keywords: social identity, public space, public transportation, liberalism
1007 Correlates of Modes of Transportation to Work among Working Adults in Ernakulam District, Kerala
Authors: Anjaly Joseph, Elezebeth Mathews
Abstract:
Transportation and urban planning is the least recognised area for physical activity promotion in India, unlike in developed regions. Identifying the preferred transportation modalities and the factors associated with them is essential to address these lacunae. The objective of the study was to assess the prevalence of modes of transportation to work and its correlates among working adults in Ernakulam District, Kerala. A cross-sectional study was conducted among 350 working individuals in the age group of 18-60 years, selected through multi-staged stratified random sampling in the Ernakulam district of Kerala. The inclusion criteria were working individuals aged 18-60 years, with a workplace at a distance of more than 1 km from home, who worked five or more days a week. Pregnant women/women on maternity leave and drivers (taxi drivers, autorickshaw drivers, and lorry drivers) were excluded. An interview schedule was used to capture the modes of transportation, namely public, private, and active transportation, socio-demographic details, travel behaviour, anthropometric measurements, and health status. Nearly two-thirds (64 percent) of them used private transportation to work, while active commuters were only 6.6 percent. The correlates identified for active commuting compared to other modes were low socio-economic status (OR=0.22, CI=0.5-0.85) and the presence of a driving license (OR=4.95, CI=1.59-15.45). The correlates identified for public transportation compared to private transportation were female gender (OR=17.79, CI=6.26-50.31), low income (OR=0.33, CI=0.11-0.93), being unmarried (OR=5.19, CI=1.46-8.37), the presence of no or only one private vehicle in the house (OR=4.23, CI=1.24-20.54), and the presence of a convenient public transportation facility to the workplace (OR=3.97, CI=1.66-9.47). The association between body mass index (BMI) and public transportation was explored, and it was found that public transport users had a lower BMI than private commuters (OR=2.30, CI=1.23-4.29). Policies that encourage active and public transportation need to be introduced, such as discouraging private vehicles through taxes, introducing convenient and safe public transportation facilities, walking/cycling paths, and paid parking facilities.
Keywords: active transportation, correlates, India, public transportation, transportation modes
1006 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver
Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto
Abstract:
The Earth's ionosphere is located at altitudes from about 70 km to several hundred km above the ground, and it is composed of ions and electrons called plasma. In the ionosphere, this plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere were long the standard way to investigate such ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are basically built for land survey, has been conducted in several countries. However, these stations are equipped with multi-frequency receivers to estimate the effect of the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variations such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC. The observation location was at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location under a thin-layer ionosphere assumption. The validity of the method was evaluated using measurements obtained by the Japanese GNSS observation network called GEONET. The performance of the single-frequency GPS measurements was compared with the results of dual-frequency measurements.
Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC
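A hedged sketch of the least-squares step described above — fitting a low-order polynomial VTEC surface in latitude and longitude to TEC estimates derived from pseudoranges — could look like the following; the observation arrays and the polynomial order are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

# Hypothetical per-satellite observations: latitude/longitude of ionospheric
# pierce points (deg) and slant TEC mapped to vertical TEC (TECU) via a
# thin-layer mapping function.
lat = np.array([20.1, 21.5, 22.3, 19.8, 23.0, 21.0])
lon = np.array([95.5, 96.2, 97.1, 94.9, 96.8, 95.0])
vtec = np.array([18.2, 19.5, 21.0, 17.8, 22.1, 18.9])

# Represent VTEC(lat, lon) as a first-order polynomial: a0 + a1*lat + a2*lon
A = np.column_stack([np.ones_like(lat), lat, lon])
coeffs, *_ = np.linalg.lstsq(A, vtec, rcond=None)

print("fitted coefficients:", coeffs)
print("VTEC at (21.9 N, 96.1 E) ≈", coeffs @ [1.0, 21.9, 96.1], "TECU")
```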
1005 Ytterbium Advantages for Brachytherapy
Authors: S. V. Akulinichev, S. A. Chaushansky, V. I. Derzhiev
Abstract:
High dose rate (HDR) brachytherapy is a method of contact radiotherapy in which a single sealed source with an activity of about 10 Ci is temporarily inserted in the tumor area. The isotopes Ir-192 and (much less frequently) Co-60 are used as the active material for such sources. The other type of brachytherapy, low dose rate (LDR) brachytherapy, implies the insertion of many permanent sources (up to 200) of lower activity. Pulsed dose rate (PDR) brachytherapy can be considered a modification of HDR brachytherapy, in which the single source is repeatedly introduced into the tumor region in a pulsed regime over several hours. The PDR source activity is of the order of one Ci, and the isotope Ir-192 is currently used for these sources. PDR brachytherapy is well recommended for the treatment of several tumors since, according to oncologists, it combines the medical benefits of both the HDR and LDR types of brachytherapy. One of the main problems for the progress of PDR brachytherapy is the shielding of the treatment area, since the longer stay of patients in a shielded canyon is not comfortable enough for them. The use of Yb-169 as the active source material is a way to resolve the shielding problem for PDR, as well as for HDR, brachytherapy. The isotope Yb-169 has an average photon emission energy of 93 keV and a half-life of 32 days. Compared to iridium and cobalt, this isotope has a significantly lower emission energy and therefore requires much lighter shielding. Moreover, the absorption cross section of different materials has a strong Z-dependence in that photon energy range. For example, the dose distributions of iridium and ytterbium behave quite similarly in water or in the body. But a heavier material such as lead absorbs the ytterbium radiation much more strongly than the iridium or cobalt radiation. For example, only a 2 mm lead layer is enough to reduce the ytterbium radiation by a couple of orders of magnitude, but it is not enough to protect from iridium radiation. We have created an original facility to produce the starting stable isotope Yb-168 using the AVLIS laser technology. This facility allows the Yb-168 concentration to be raised up to 50% and consumes much less electrical power than the alternative electromagnetic enrichment facilities. We have also developed, in cooperation with the Institute of High Pressure Physics of RAS, a new technology for manufacturing high-density ceramic cores of ytterbium oxide. The ceramic density reaches the limit of the theoretical values: 9.1 g/cm³ for the cubic phase of ytterbium oxide and 10 g/cm³ for the monoclinic phase. Source cores made from this ceramic have high mechanical characteristics and a glassy surface. The use of the ceramic allows the source activity to be increased for fixed external source dimensions.
Keywords: brachytherapy, high, pulse dose rates, radionuclides for therapy, ytterbium sources
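The shielding argument above follows from simple exponential attenuation, I = I₀·exp(−μx); a hedged sketch with rough, assumed attenuation coefficients (the μ values below are order-of-magnitude estimates chosen for illustration, not measured data from the abstract) is:

```python
import numpy as np

x = 0.2  # cm, a 2 mm lead layer

# Approximate linear attenuation coefficients of lead (assumed for illustration):
mu_pb_yb = 80.0   # 1/cm, near the mean Yb-169 photon energy (~93 keV)
mu_pb_ir = 2.7    # 1/cm, near the mean Ir-192 photon energy (~380 keV)

for label, mu in [("Yb-169 (~93 keV)", mu_pb_yb), ("Ir-192 (~380 keV)", mu_pb_ir)]:
    transmission = np.exp(-mu * x)
    print(f"{label}: transmission through 2 mm Pb ≈ {transmission:.2e}")
```

With these assumed values, the 2 mm layer attenuates the Yb-169 photons by many orders of magnitude while leaving a large fraction of the Ir-192 photons unattenuated, consistent with the qualitative statement in the abstract.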
1004 Investigation of the Growth Kinetics of Phases in Ni–Sn System
Authors: Varun A Baheti, Sanjay Kashyap, Kamanio Chattopadhyay, Praveen Kumar, Aloke Paul
Abstract:
The Ni–Sn system finds applications in the microelectronics industry, especially with respect to flip-chip or direct chip attach technology. Here, the region of interest is the under bump metallization (UBM)/solder bump (Sn) interface, due to the formation of brittle intermetallic phases there. Understanding the growth of these phases at the UBM/Sn interface is important, as in many cases it controls the electro-mechanical properties of the product. Cu and Ni are the commonly used UBM materials. Cu is used for good bonding because of its fast reaction with solder, and Ni often acts as a diffusion barrier layer due to its inherently slower reaction kinetics with Sn-based solders. An investigation of the growth kinetics of phases in the Ni–Sn system is reported in this study. For simplicity, Sn, the major solder constituent, is chosen. Ni–Sn electroplated diffusion couples are prepared by electroplating pure Sn on a Ni substrate. Bulk diffusion couples prepared by the conventional method are also studied along with the Ni–Sn electroplated diffusion couples. Diffusion couples are annealed for 25–1000 h at 50–215°C to study the phase evolution and the growth kinetics of the various phases. The interdiffusion zone was analysed for imaging using a field emission gun-equipped scanning electron microscope (FE-SEM). Indexing of selected area diffraction (SAD) patterns obtained from the transmission electron microscope (TEM) and composition measurements made with an electron probe micro-analyser (FE-EPMA) confirm the presence of the various product phases grown across the interdiffusion zone. Time-dependent experiments indicate diffusion-controlled growth of the product phase. The activation energy estimated in the temperature range 125–215°C for the parabolic growth constants (and hence the integrated interdiffusion coefficients) of the Ni₃Sn₄ phase sheds light on the growth mechanism of the phase, i.e., whether it is grain boundary-controlled or lattice-controlled diffusion. The location of the Kirkendall marker plane indicates that the Ni₃Sn₄ phase grows mainly by diffusion of Sn in the binary Ni–Sn system.
Keywords: diffusion, equilibrium phase, metastable phase, the Ni-Sn system
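A hedged sketch of how a parabolic growth constant and its activation energy are usually extracted from such annealing data (the layer thicknesses, temperatures, and growth constants below are invented for illustration, not the study's measurements) is:

```python
import numpy as np

R = 8.314  # J/(mol*K)

# Parabolic growth: x^2 = k_p * t, so k_p is the slope of x^2 vs t at one temperature
t = np.array([25, 100, 400, 1000]) * 3600.0       # annealing time, s
x = np.array([0.8, 1.6, 3.1, 5.0]) * 1e-6         # Ni3Sn4 layer thickness, m (illustrative)
k_p_single, _ = np.polyfit(t, x**2, 1)
print(f"k_p at one temperature ≈ {k_p_single:.2e} m^2/s")

# Arrhenius analysis: k_p = k0 * exp(-Q / (R*T)); Q follows from the slope of ln(k_p) vs 1/T
T = np.array([398.0, 438.0, 488.0])               # K, roughly the 125-215 C range (illustrative)
k_p = np.array([2e-18, 1.2e-17, 8e-17])           # m^2/s (illustrative)
slope, _ = np.polyfit(1.0 / T, np.log(k_p), 1)
Q = -slope * R
print(f"activation energy Q ≈ {Q / 1000:.0f} kJ/mol")
```

The magnitude of Q obtained this way is what is compared against typical grain-boundary versus lattice diffusion values to infer the growth mechanism.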
1003 Coupling Static Multiple Light Scattering Technique With the Hansen Approach to Optimize Dispersibility and Stability of Particle Dispersions
Authors: Guillaume Lemahieu, Matthias Sentis, Giovanni Brambilla, Gérard Meunier
Abstract:
Static Multiple Light Scattering (SMLS) has been shown to be a straightforward technique for the characterization of colloidal dispersions without dilution, as multiply scattered light in backscattered and transmitted mode is directly related to the concentration and size of the scatterers present in the sample. In this view, the use of SMLS for stability measurement of various dispersion types has already been widely described in the literature. Indeed, starting from a homogeneous dispersion, the variation of backscattered or transmitted light can be attributed to destabilization phenomena, such as migration (sedimentation, creaming) or particle size variation (flocculation, aggregation). With a view to investigating the dispersibility of colloidal suspensions further, an experimental set-up for 'at-line' SMLS experiments has been developed to understand the impact of the formulation parameters on particle size and dispersibility. The SMLS experiment is performed with a high acquisition rate (up to 10 measurements per second), without dilution, and under direct agitation. Using such an experimental device, SMLS detection can be combined with the Hansen approach to optimize the dispersing and stabilizing properties of TiO₂ particles. It appears that the dispersibility and stability spheres generated are clearly separated, arguing that lower stability is not necessarily a consequence of poor dispersibility. Beyond this clarification, this combined SMLS-Hansen approach is a major step toward the optimization of the dispersibility and stability of colloidal formulations by finding solvents having the best compromise between dispersing and stabilizing properties. Such a study can be used to find better dispersion media and greener, cheaper solvents to optimize particle suspensions, reduce the content of costly stabilizing additives, or satisfy evolving product regulatory requirements in the various industrial fields using suspensions (paints & inks, coatings, cosmetics, energy).
Keywords: dispersibility, stability, Hansen parameters, particles, solvents
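In the Hansen approach referred to above, each solvent is placed in (δD, δP, δH) space and its distance Ra to the sphere centre is compared with the interaction radius R0 (RED = Ra/R0 < 1 means "inside" the sphere); a hedged sketch with an invented sphere centre and radius (the TiO₂ sphere parameters are not given in the abstract) is:

```python
import numpy as np

def hansen_distance(s1, s2):
    """Hansen distance Ra between two (dD, dP, dH) points, in MPa^0.5."""
    dD1, dP1, dH1 = s1
    dD2, dP2, dH2 = s2
    return np.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

# Invented centre and radius of a hypothetical dispersibility sphere (illustration only)
centre, R0 = (17.0, 8.0, 9.0), 6.0

solvents = {"water": (15.5, 16.0, 42.3), "ethanol": (15.8, 8.8, 19.4), "toluene": (18.0, 1.4, 2.0)}
for name, hsp in solvents.items():
    red = hansen_distance(centre, hsp) / R0
    print(f"{name}: RED = {red:.2f} ({'inside' if red < 1 else 'outside'} the sphere)")
```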
1002 Investigation of Elastic Properties of 3D Full Five Directional (f5d) Braided Composite Materials
Authors: Apeng Dong, Shu Li, Wenguo Zhu, Ming Qi, Qiuyi Xu
Abstract:
The primary objective of this paper is to focus on the elastic properties of three-dimensional full five directional (3Df5d) braided composites. A large body of research has been focused on the 3D four directional (4d) and 3D five directional (5d) structures, but not much on the 3Df5d material. Generally, the influence of the yarn shape on the mechanical properties of braided materials tends to be ignored, which makes the results too idealized. Besides, with the improvement of computational capability, researchers have become accustomed to using computers to predict the material parameters, which fails to give an explicit and concise result that facilitates production and application. Based on traditional mechanics, this paper first deduces the functional relation between the elastic properties and the braiding parameters. In addition, considering the actual shape of the yarns after consolidation, the longitudinal modulus is modified and defined practically. First, the analytic model is established based on certain assumptions; for the sake of clarity, this paper assumes that: A: the cross section of axial yarns is square; B: the cross section of braiding yarns is hexagonal; C: the characteristics of braiding yarns and axial yarns are the same; D: the angle between the structure boundary and the projection of the braiding yarns in the transverse plane is 45°; E: the filling factor ε of composite yarns is π/4; F: the deformation of the unit cell is under the constant-strain condition. Then, the functional relation between the material constants and the braiding parameters is systematically deduced for the yarn deformation mode. Finally, considering the actual shape of the axial yarns after consolidation, the concept of a technology factor is proposed, and the longitudinal modulus of the material is modified based on energy theory. In this paper, the analytic solution for the material parameters is given for the first time, which provides a good reference for further research and application of 3Df5d materials. Although the analysis model is established based on certain assumptions, the analysis method is also applicable to other braided structures. Meanwhile, it is crucial that the cross-section shape and straightness of the axial yarns play dominant roles in the longitudinal elastic property. Therefore, in the braiding and consolidation process, the stability of the axial yarns should be guaranteed to increase the technology factor and reduce the scatter of the material parameters. Overall, the elastic properties of these materials are closely related to the braiding parameters and are strongly designable; although the longitudinal modulus of the material is greatly influenced by the technology factor, it can be defined to a certain extent.
Keywords: analytic solution, braided composites, elasticity properties, technology factor
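The paper's closed-form expressions are not reproduced in the abstract; as a hedged illustration of the style of calculation involved, a simple rule-of-mixtures estimate of the longitudinal modulus, with the filling factor and a technology factor applied as stated assumptions (this is not the paper's actual analytic solution), is:

```python
import math

# Rough rule-of-mixtures sketch, not the paper's closed-form result
E_fiber = 230e9       # Pa, axial modulus of the yarn fibre (illustrative)
E_matrix = 3.5e9      # Pa, matrix modulus (illustrative)
V_yarn = 0.6          # yarn volume fraction in the unit cell (illustrative)
epsilon = math.pi / 4 # filling factor of composite yarns (assumption E in the abstract)
k_tech = 0.9          # technology factor accounting for yarn shape/straightness (illustrative)

V_f = V_yarn * epsilon                             # effective fibre volume fraction
E_L = k_tech * (V_f * E_fiber + (1 - V_f) * E_matrix)
print(f"longitudinal modulus estimate ≈ {E_L / 1e9:.1f} GPa")
```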
1001 The Effect of Artificial Intelligence on Digital Factory
Authors: Sherif Fayez Lewis Ghaly
Abstract:
Factory planning has the mission of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the construction of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and manufacturing technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for restructuring measures and an integral part of the factory. Short-term rescheduling can no longer be handled through on-site inspections and manual measurements. The tight time schedules require up-to-date planning models. Due to the high variation rate of factories described above, a method for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The purpose is to keep the planning basis (the digital factory model) up to date for conversions within a factory. This calls for the application of a methodology that reduces the deficits of existing approaches. The goal is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented. The focus is on developing a simple and cost-effective way to track the numerous changes that occur in a factory building during operation. The method is preceded by a hardware and software evaluation to identify the most cost-effective and fastest variant.
Keywords: building information modeling, digital factory model, factory planning, maintenance digital factory model, photogrammetry, restructuring
Procedia PDF Downloads 27
1000 Effectiveness of Clinical Practice Guidelines for Jellyfish Stings Treatment at the Emergency Room of Songkhla Hospital Thailand
Authors: Prataksitorn Chonlakan, Tiparat Wongsilarat
Abstract:
The traditional clinical practice guideline used at the emergency room of Songkhla Hospital for patients exposed to jellyfish venom took a long time to reduce pain to a level that patients could cope with. This study investigated the effectiveness of a newly developed clinical practice guideline by comparing it with the traditional guideline in the following aspects: 1) pain reduction, 2) duration of pain, 3) the rate of patient re-visit, 4) the rate of severe complications such as anaphylactic shock, cardiac arrest, and death, and 5) patient satisfaction. The study employed a quasi-experimental design. Thirty subjects were selected by purposive sampling from jellyfish-sting patients who came for treatment at the Emergency Room of Songkhla Hospital and were randomly assigned to two groups of 15 each: an experimental group and a control group. The control group was treated using the traditional clinical practice guideline, consisting of rinsing the affected area with 0.9% normal saline, pressing a cloth soaked with vinegar against the affected area, and controlling pain with tramadol or diclofenac intramuscular injection. The data were analyzed using descriptive statistics and the paired t-test at a significance level of p < 0.05. The results were as follows. The pain level in the experimental group was significantly lower than in the control group (mean pain score 3.46 versus 6.33) (p < 0.05). The duration of pain in the experimental group was significantly shorter than in the control group (48.67 minutes versus 105.35 minutes on average) (p < 0.05). The rate of re-visit within 12 hours was 0.07 in the experimental group versus 0.00 in the control group (p < 0.05). No severe complications such as anaphylactic shock or cardiac arrest were found in either group. Satisfaction was significantly higher among subjects in the experimental group than in the control group (90.00 percent versus 66.33 percent) (p < 0.05). The newly developed clinical practice guideline reduced pain and increased satisfaction among jellyfish-sting patients better than the traditional guideline.
Keywords: effectiveness, clinical practice guideline, jellyfish-sting patients, cardiac arrest
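For readers who want to reproduce this kind of group comparison, the short Python sketch below runs the reported paired t-test on made-up pain scores; the numbers are illustrative placeholders, not the study's data, and with two independently assigned groups an independent-samples test (scipy.stats.ttest_ind) would be an equally common choice.

```python
# Minimal sketch of the pain-score comparison, with illustrative data only
# (the paper reports mean scores of 3.46 vs. 6.33 for 15 subjects per group).
from scipy import stats

experimental = [3, 4, 3, 4, 3, 4, 3, 3, 4, 3, 4, 3, 4, 3, 4]  # illustrative only
control      = [6, 7, 6, 6, 7, 6, 6, 7, 6, 6, 7, 6, 6, 7, 6]  # illustrative only

t_stat, p_value = stats.ttest_rel(experimental, control)       # paired t-test, as reported
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```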
Procedia PDF Downloads 351
999 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
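A minimal sketch of the model-averaging step described above is given below, assuming a hypothetical CSV of timing, weather-forecast, and lagged-pollutant features; the file name, feature names, and target encoding are assumptions rather than the study's actual dataset, and the point is only to show three scikit-learn models whose predicted probabilities are averaged into one forecast.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("la_air_quality.csv")                     # hypothetical dataset
# "season" assumed already encoded numerically (e.g., 0-3) in this sketch
features = ["season", "is_weekend", "temp_forecast", "wind_forecast", "pm25_lag1", "ozone_lag1"]
X, y = df[features], df["unhealthy_aqi"]                   # 1 = unhealthy air quality, 0 = otherwise
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=300, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
]
probas = []
for m in models:
    m.fit(X_train, y_train)
    probas.append(m.predict_proba(X_test)[:, 1])

avg_proba = sum(probas) / len(models)                      # simple average of the three models
accuracy = ((avg_proba > 0.5).astype(int) == y_test.values).mean()
print(f"combined-model accuracy: {accuracy:.2%}")
```

The variable-importance step mentioned in the abstract (mean decrease in accuracy) could be approximated with sklearn.inspection.permutation_importance on the fitted random forest.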
Procedia PDF Downloads 127
998 Impact of Ethnic and Religious Identity on Coping Behavior in Young Adults: Cross-Cultural Research
Authors: Yuliya Kovalenko
Abstract:
Given the social nature of people, it is interesting to explore how individuals of different ethnic and religious identities respond to psycho-traumatic situations. This would substantially expand our understanding of human behavior in general and coping behavior in particular. This paper investigated the weighted impact of ethnic and religious identities on patterns of coping behavior. This cross-cultural research empirically revealed intergroup differences in coping strategies and behavior in samples of young students and teachers of different ethnic identities (Egyptians, N=216, and Ukrainians, N=109) and different religious identities (Egyptian Muslims, N=147, and Christians, including Egyptian Christians, N=68, and Ukrainian Christians, N=109). The empirical data were obtained using the SACS and COPE questionnaires. Statistical analysis and interpretation of the results were performed with IBM SPSS 23.0. It was found that the ethnic identity of the subjects was more predictive of coping behavior than their religious identity. The constant exchange of information and the unity of the biological and the social contributed to a more homogeneous picture in a society where Christians and Muslims were integrated into a single cultural space. It was concluded that, depending on their ethnic identity, individuals form a specific hierarchy of coping strategies, resulting in a specific pattern of coping with particular stressors. The Egyptian subjects showed the following pattern of coping with various kinds of academic stress: 'seeking social support', 'problem solving', 'adapting', 'seeking information'. The coping pattern demonstrated by the Ukrainian subjects was 'seeking information', 'adapting', 'seeking social support', 'problem solving'. The Egyptian group tended to engage in more collectivist coping strategies (with 'religious coping' predominant), in contrast to the Ukrainians, who displayed more individualistic coping strategies ('planning' and 'active coping' being the most used). At the same time, the Ukrainians should not be unambiguously classified as individualistic in their coping, given their reliance on 'seeking social support' and 'social contact'. A final conclusion was also drawn from the peculiarities of the development of religious identity, including religiosity, in Egyptians (formal religious education of both Muslims and Christians) and Ukrainians (a more spontaneous process): Egyptians appear to learn to resort to religious coping, which suggests that, in principle, it is possible and necessary to train individuals in desirable coping behavior.
Keywords: coping behavior, cross-cultural research, ethnic and religious identity, hierarchical pattern of coping
Procedia PDF Downloads 162
997 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units
Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro
Abstract:
In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days; if a relocation or a change of period is required, the consumer must be notified in writing before the start of the billing period. To organize a workday of measurements, these companies create a reading plan. Such plans group customers into reading groups, each visited by an employee responsible for measuring consumption and billing. Creating a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee's workload and the geographical position of the consumer units. This process is currently done manually by several experts with experience in the geographic layout of the region; it takes many days to complete the final plan, and because it is a human activity, there is no guarantee of finding the best solution. In this paper, the GBKMeans method is presented: a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to produce the final plan. The results obtained by the presented method are compared with the current plan of a real city, showing an improvement of 54.71% in the standard deviation of workload and 11.97% in the compactness of the groups.
Keywords: capacitated clustering, k-means, genetic algorithm, districting problems
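The sketch below illustrates, under simplifying assumptions, how a genetic algorithm can wrap K-Means for capacitated clustering of consumer units: each chromosome holds k candidate group centres, fitness combines within-group distance with a penalty for exceeding a per-group workload capacity, and each child receives one Lloyd-style centroid update. The parameter values, penalty form, and random test data are illustrative, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_with_capacity(points, centroids, capacity):
    """Greedy capacity-respecting assignment: each point goes to the nearest centroid with room."""
    labels = np.full(len(points), -1)
    load = np.zeros(len(centroids))
    for i in rng.permutation(len(points)):
        dists = np.linalg.norm(centroids - points[i], axis=1)
        for c in np.argsort(dists):
            if load[c] < capacity:
                labels[i], load[c] = c, load[c] + 1
                break
        else:                                    # no centroid has room: force the nearest one
            labels[i] = int(np.argmin(dists))
            load[labels[i]] += 1
    return labels, load

def fitness(points, centroids, capacity):
    labels, load = assign_with_capacity(points, centroids, capacity)
    distance = np.linalg.norm(points - centroids[labels], axis=1).sum()
    overload = np.maximum(load - capacity, 0).sum()
    return distance + 1000.0 * overload           # heavy penalty for capacity violations

def gbkmeans(points, k, capacity, pop_size=30, generations=60):
    pop = [points[rng.choice(len(points), k, replace=False)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(points, ind, capacity) for ind in pop]
        parents = [pop[i] for i in np.argsort(scores)[: pop_size // 2]]      # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random(k) < 0.5
            child = np.where(mask[:, None], parents[a], parents[b])          # uniform crossover
            child = child + rng.normal(0.0, 0.05, child.shape)               # mutation
            labels, _ = assign_with_capacity(points, child, capacity)        # one K-Means step
            for c in range(k):
                if np.any(labels == c):
                    child[c] = points[labels == c].mean(axis=0)
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda ind: fitness(points, ind, capacity))
    return best, assign_with_capacity(points, best, capacity)[0]

# Example: 200 consumer units, 5 reading groups of at most 45 units each
units = rng.random((200, 2))
centroids, labels = gbkmeans(units, k=5, capacity=45)
print("group sizes:", np.bincount(labels, minlength=5))
```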
Procedia PDF Downloads 198
996 A Low-Cost of Foot Plantar Shoes for Gait Analysis
Authors: Zulkifli Ahmad, Mohd Razlan Azizan, Nasrul Hadi Johari
Abstract:
This paper presents a study on the development and testing of a wearable sensor system for gait analysis. For validation, plantar surface measurement with a force plate was prepared. In conventional gait analysis, force plates generally capture barefoot measurements of whole steps and do not allow analysis of repeated steps during normal walking and running. The measurements usually performed do not represent the whole of daily plantar pressures inside the shoe insole and only capture the ground reaction force. Force plate measurement is usually limited to a few steps, is done indoors, and does not easily provide coupled information from both feet during walking. To measure pressure over a large number of steps and obtain the pressure in each part of the insole, sensors can be placed within the insole. This approach makes it possible to determine the plantar pressures of a shoe-wearing subject while standing, walking, or running. Placing pressure sensors in the insole provides location-specific information, so the choice of sensor placement determines which critical regions under the insole are captured. In this wearable shoe sensor project, the device consists of left and right shoe insoles with ten force-sensing resistors (FSRs). An Arduino Mega was used as the microcontroller to read the analog inputs from the FSRs. The readings were transmitted via Bluetooth, providing the force data in real time on a smartphone. BlueTerm, an Android application, was used as the interface to read the FSR values from the shoe-wearing subject. The subjects were two healthy men of different ages and weights, tested while standing, walking (1.5 m/s), jogging (5 m/s), and running (9 m/s) on a treadmill. The data obtained are saved on the Android device for analysis and comparison graphs.
Keywords: gait analysis, plantar pressure, force plate, wearable sensor
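A hedged sketch of the host-side logging is shown below: it reads comma-separated FSR values streamed over a Bluetooth serial link and stores them with timestamps for later plotting. The port name, baud rate, and line format are assumptions; in the study itself, the BlueTerm Android app was used to read the stream rather than a script like this.

```python
import csv
import time

import serial  # pyserial

PORT = "/dev/rfcomm0"   # hypothetical Bluetooth serial port
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as link, \
        open("plantar_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t_s"] + [f"fsr_{i}" for i in range(10)])   # ten FSR channels
    t0 = time.time()
    while True:
        frame = link.readline().decode("ascii", errors="ignore").strip()
        values = frame.split(",")
        if len(values) != 10:                                    # skip malformed frames
            continue
        writer.writerow([f"{time.time() - t0:.3f}"] + values)
```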
Procedia PDF Downloads 453
995 The Impact of Nutrition Education Intervention in Improving the Nutritional Status of Sickle Cell Patients
Authors: Lindy Adoma Dampare, Marina Aferiba Tandoh
Abstract:
Sickle cell disease (SCD) is an inherited blood disorder that mostly affects individuals in sub-Saharan Africa. Nutritional deficiencies are well established in SCD patients. In Ghana, studies have revealed the prevalence of malnutrition, especially among children with SCD, and hence the need to develop an evidence-based, comprehensive nutritional therapy for SCD to improve their nutritional status. The aim of this study was to develop and assess the effect of nutrition education material on the nutritional status of SCD patients in Ghana. This was a pre-post interventional study. Patients between the ages of 2 and 60 years were recruited from the Tema General Hospital. Following a baseline assessment of nutrition knowledge (NK), beliefs, sanitary practices, and dietary consumption patterns, twice-monthly nutrition education was carried out for 3 months, followed by a post-intervention assessment. The nutritional status of SCD patients was assessed using a 3-day dietary recall and anthropometric measurements. Nutrition education (NE) was given to SCD adults and caregivers of SCD children. At baseline, the majority of caregivers (69%) and SCD adults (82%) had low NK. The level of NK improved significantly in SCD adults (4.18±1.83 vs. 10.00±1.00, p<0.001) and caregivers (5.58±2.25 vs. 10.44±0.846, p<0.001) after NE. The increase in NK improved the dietary intake and dietary consumption patterns of SCD patients. Significant increases in weight (23.2±11.6 vs. 25.9±12.1, p=0.036) and height (118.5±21.9 vs. 123.5±22.2, p=0.011) were observed in SCD children post-intervention. Stunting (10.5% vs. 8.6%, p=0.62) and wasting (22.1% vs. 14.4%, p=0.30) were reduced in SCD children after NE, although not statistically significantly. A reduction in underweight (18.2% vs. 9.1%) and an increase in overweight (18.2% vs. 27.3%) among SCD adults were recorded post-intervention. Fat mass remained the same, while high muscle mass increased (18.2% vs. 27.3%) post-intervention in SCD adults. The anaemic status of SCD patients improved post-intervention, and the improvement was statistically significant among SCD children. Nutrition education improved the NK of SCD caregivers and adults, hence improving the dietary consumption patterns and nutrient intake of SCD patients. Overall, NE improved the nutritional status of SCD patients. This study shows the potential of nutrition education in improving the nutritional knowledge, dietary consumption patterns, dietary intake, and nutritional status of SCD patients, and should be further explored.
Keywords: sickle cell disease, nutrition education, dietary intake, nutritional status
Procedia PDF Downloads 102
994 Cassava Plant Architecture: Insights from Genome-Wide Association Studies
Authors: Abiodun Olayinka, Daniel Dzidzienyo, Pangirayi Tongoona, Samuel Offei, Edwige Gaby Nkouaya Mbanjo, Chiedozie Egesi, Ismail Yusuf Rabbi
Abstract:
Cassava (Manihot esculenta Crantz) is a major source of starch for various industrial applications. However, the traditional cultivation and harvesting methods of cassava are labour-intensive and inefficient, limiting the supply of fresh cassava roots for industrial starch production. To achieve improved productivity and quality of fresh cassava roots through mechanized cultivation, cassava cultivars with compact plant architecture and moderate plant height are needed. Plant architecture-related traits, such as plant height, harvest index, stem diameter, branching angle, and lodging tolerance, are critical for crop productivity and suitability for mechanized cultivation. However, the genetics of cassava plant architecture remain poorly understood. This study aimed to identify the genetic bases of the relationships between plant architecture traits and productivity-related traits, particularly starch content. A panel of 453 clones developed at the International Institute of Tropical Agriculture, Nigeria, was genotyped and phenotyped for 18 plant architecture and productivity-related traits at four locations in Nigeria. A genome-wide association study (GWAS) was conducted using the phenotypic data from a panel of 453 clones and 61,238 high-quality Diversity Arrays Technology sequencing (DArTseq)-derived Single Nucleotide Polymorphism (SNP) markers evenly distributed across the cassava genome. Five significant associations between ten SNPs and three plant architecture component traits were identified through GWAS. We found five SNPs on chromosomes 6 and 16 that were significantly associated with shoot weight, harvest index, and total yield. We also discovered an essential candidate gene that is co-located with peak SNPs linked to these traits in M. esculenta. A review of the cassava reference genome v7.1 revealed that the SNP on chromosome 6 is in proximity to Manes.06G101600.1, a gene that regulates endodermal differentiation and root development in plants. The findings of this study provide insights into the genetic basis of plant architecture and yield in cassava. Cassava breeders could leverage this knowledge to optimize plant architecture and yield in cassava through marker-assisted selection and targeted manipulation of the candidate gene.
Keywords: Manihot esculenta Crantz, plant architecture, DArTseq, SNP markers, genome-wide association study
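As a rough illustration of the type of association scan used here, the Python sketch below regresses a phenotype on each SNP's allele dosage with a few genotype principal components as structure covariates, and flags SNPs passing a Bonferroni threshold; the file names, column layout, and threshold are illustrative assumptions, and dedicated mixed-model GWAS software would normally be used in practice.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

geno = pd.read_csv("dartseq_dosage.csv", index_col=0)        # hypothetical file: clones x SNPs, values 0/1/2
pheno = pd.read_csv("phenotypes.csv", index_col=0)["harvest_index"]
geno, pheno = geno.align(pheno, join="inner", axis=0)        # keep clones present in both tables

# a few genotype principal components as population-structure covariates
centered = geno.values - geno.values.mean(axis=0)
pcs = np.linalg.svd(centered, full_matrices=False)[0][:, :3]

pvals = {}
for snp in geno.columns:
    X = sm.add_constant(np.column_stack([geno[snp].values, pcs]))
    pvals[snp] = sm.OLS(pheno.values, X).fit().pvalues[1]     # p-value for the SNP term

threshold = 0.05 / len(geno.columns)                          # Bonferroni-corrected genome-wide threshold
hits = {s: p for s, p in pvals.items() if p < threshold}
print(f"{len(hits)} SNPs pass the threshold of {threshold:.2e}")
```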
Procedia PDF Downloads 69