Search results for: density measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5918


1448 Temperature and Substrate Orientation Effects on the Thermal Stability of Graphene Sheet Attached on the Si Surface

Authors: Wen-Jay Lee, Kuo-Ning Chiang

Abstract:

Graphene bound to a silicon substrate exhibits Schottky-barrier behavior, which can be exploited in solar cells and light sources. Because graphene is only one atom thick, the atomistic structure of the graphene-silicon interface plays an important role in determining the properties of the graphene. In this work, the effect of temperature on the morphology of graphene sheets attached to different crystal planes of silicon substrates is investigated by molecular dynamics (MD) simulation using LAMMPS, developed by Sandia National Laboratories. The results show that an attached graphene sheet deforms the surface Si atoms of the substrate: to reach a stable state during binding, the surface Si atoms adjust their positions to fit the honeycomb structure of the graphene after it attaches to the Si surface. The height contour of graphene differs between silicon surface planes, producing local residual stress at the interface. Owing to the high density of dangling bonds on the Si(111)7x7 surface, graphene matches Si(111)7x7 less well than Si(100)2x1 and Si(111)2x1. On Si(111)7x7, only some of the silicon adatoms rearrange on the surface after attachment when the temperature is below 200 K. As the temperature gradually increases, the deformation of the surface structure becomes more significant, as does the residual stress. When the temperature reaches about 815 K, the graphene sheet begins to break down and mix with the silicon atoms. For Si(100)2x1 and Si(111)2x1, the silicon surface retains its structural arrangement up to higher temperatures. With increasing temperature, the residual stress gradually decreases until a critical temperature; above this critical temperature, the residual stress gradually increases and structural deformation appears on the surface of the Si substrates.

Keywords: molecular dynamics, graphene, silicon, Schottky barriers, interface

Procedia PDF Downloads 311
1447 Investigation of Electrospun Composites Nanofiber of Poly (Lactic Acid)/Hazelnut Shell Powder/Zinc Oxide

Authors: Ibrahim Sengor, Sumeyye Cesur, Ilyas Kartal, Faik Nuzhet Oktar, Nazmi Ekren, Ahmet Talat Inan, Oguzhan Gunduz

Abstract:

In recent years, many researchers have focused on nano-scale fiber production. Nanofibers have been studied for their distinctive and superior physical, chemical, and mechanical properties. Poly(lactic acid) (PLA) is a biodegradable thermoplastic polyester derived from renewable sources and used in biomedical applications owing to its biocompatibility and biodegradability. In addition, zinc oxide (ZnO) is an antibacterial material, and hazelnut shell powder is a filler. In this study, nanofibers were obtained by adding ZnO and hazelnut shell powder at different concentrations to PLA using electrospinning, the most common method for producing nanofibers. Granulated PLA was dissolved in chloroform at 1%, 2%, 3%, and 4%, homogenized with Tween and hazelnut shell powder at different ratios, and then electrospun into nanofibers. Scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), physical analyses such as density, electrical conductivity, surface tension, and viscosity measurements, and an antimicrobial test were carried out after production. The resulting nanofiber structures possess antimicrobial, antiseptic, non-toxic, self-cleaning, and rigid properties, which are attractive for biomedical applications.

Keywords: electrospinning, hazelnut shell powder, nanofibers, poly (lactic acid), zinc oxide

Procedia PDF Downloads 155
1446 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept

Authors: Ahmed El Naggar, Homyan Saleh

Abstract:

Dams are crucial for flood control, water supply, and hydroelectric power generation. Every dam has a water conveyance system, such as a spillway, providing safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory studies owing to the absence of suitable full-scale flow monitoring equipment and to safety concerns. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a stepped spillway was undertaken at laboratory scale (fixed camera installation) and at prototype scale (drone footage). The drone videos were generated from citizen-science data. The analyses permitted remote measurement of the free-surface aeration inception point, air-water surface velocities and their fluctuations, and residual energy at the chute's downstream end. The prototype observations offered full-scale proof of concept, while laboratory results were efficiently validated against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways. It highlights how citizen-science data may help researchers better understand real-world air-water flow dynamics, and it offers a framework for collecting a small body of long-missing prototype data.

Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy

Procedia PDF Downloads 76
1445 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring

Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover

Abstract:

Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate, and identify confined nuclear tests under the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During atmospheric transport, radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical activity concentrations measured are near 1 mBq m⁻³. To avoid the constraints imposed by atmospheric dilution, a mobile detection system is under development; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique to drastically reduce the environmental background that masks such low activities. The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². To minimize leakage current, each wafer has been segmented into four independent silicon pixels. The cell is sandwiched between two low-background NaI(Tl) detectors (70x70x40 mm³ crystals). The expected minimum detectable concentration (MDC) for each radioxenon isotope is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) process all the signals. Time synchronization is ensured by a dedicated PTP network using the IEEE 1588 Precision Time Protocol. We present this system from its simulation to its laboratory tests.
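The beta/gamma coincidence idea described above can be illustrated with a minimal sketch: events from the silicon pixels (beta) and the NaI(Tl) detectors (gamma) are matched when their timestamps fall within a coincidence window. The event times, window width, and function are hypothetical illustrations, not part of the SPALAX acquisition software.

```python
def coincidences(beta_times, gamma_times, window):
    """Return (beta, gamma) timestamp pairs closer than `window` seconds.

    Both input lists must be sorted in ascending order; a two-pointer
    sweep keeps the matching linear in the number of events.
    """
    pairs = []
    j = 0
    for b in beta_times:
        # skip gamma events that arrived too early to match this beta event
        while j < len(gamma_times) and gamma_times[j] < b - window:
            j += 1
        k = j
        while k < len(gamma_times) and gamma_times[k] <= b + window:
            pairs.append((b, gamma_times[k]))
            k += 1
    return pairs

beta = [1.0e-6, 5.0e-6, 9.0e-6]      # illustrative timestamps, seconds
gamma = [1.1e-6, 3.0e-6, 9.05e-6]
print(coincidences(beta, gamma, window=2e-7))
```

Uncorrelated events (such as the second beta and second gamma above) fall outside the window and are rejected, which is how the technique suppresses the ambient background.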

Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels

Procedia PDF Downloads 118
1444 Analysis of Intra-Varietal Diversity for Some Lebanese Grapevine Cultivars

Authors: Stephanie Khater, Ali Chehade, Lamis Chalak

Abstract:

The progressive replacement of Lebanese autochthonous grapevine cultivars by imported foreign varieties during the last decade has nearly resulted in genetic erosion of the local germplasm and in confusion over cultivar names. Hence, there is a need to characterize these local cultivars and to assess the variability that may exist at the cultivar level. This work was conducted to evaluate the intra-varietal diversity within the Lebanese traditional cultivars 'Aswad', 'Maghdoushe', 'Maryame', 'Merweh', 'Meksese', and 'Obeide'. A total of 50 accessions distributed over five main geographical areas in Lebanon were collected and subjected to both ampelographic description and ISSR DNA analysis. A set of 35 ampelographic descriptors previously established by the International Organisation of Vine and Wine, relating to leaf, bunch, berry, and phenological stages, was examined. Variability was observed between accessions within cultivars for blade shape, density of prostrate and erect hairs, teeth shape, berry shape, size, and color, cluster shape and size, and flesh juiciness. At the molecular level, nine ISSR (inter-simple sequence repeat) primers previously developed for grapevine were used in this study. These primers generated a total of 35 bands, of which 30 (85.7%) were polymorphic. In total, 29 genetic profiles were differentiated: 9 within 'Obeide', 6 within 'Maghdoushe', 5 within 'Merweh', 4 within 'Maryame', 3 within 'Aswad', and 2 within 'Meksese'. The findings of this study indicate the existence of several genotypes underlying the main indigenous cultivars grown in Lebanon, which should be further considered in the establishment of new vineyards and in selection programs.

Keywords: ampelography, autochthonous cultivars, ISSR markers, Lebanon, Vitis vinifera L.

Procedia PDF Downloads 132
1443 Exploring 1,2,4-Triazine-3(2H)-One Derivatives as Anticancer Agents for Breast Cancer: A QSAR, Molecular Docking, ADMET, and Molecular Dynamics

Authors: Said Belaaouad

Abstract:

This study explored the quantitative structure-activity relationship (QSAR) of 1,2,4-triazine-3(2H)-one derivatives as potential anticancer agents against breast cancer. Electronic descriptors were obtained using the density functional theory (DFT) method, and a multiple linear regression technique was employed to construct the QSAR model. The model exhibited favorable statistical parameters, including R²=0.849, R²adj=0.656, MSE=0.056, R²test=0.710, and Q²cv=0.542, indicating its reliability. Among the descriptors analyzed, absolute electronegativity (χ), total energy (TE), number of hydrogen bond donors (NHD), water solubility (LogS), and shape coefficient (I) were identified as influential factors. Leveraging the validated QSAR model, new derivatives of 1,2,4-triazine-3(2H)-one were designed, and their activity and pharmacokinetic properties were estimated. Subsequently, molecular docking and molecular dynamics (MD) simulations were employed to assess the binding affinity of the designed molecules. The tubulin colchicine binding site, which plays a crucial role in cancer treatment, was chosen as the target protein. Over a 100 ns simulation trajectory, the binding affinity was calculated using the MM-PBSA script. As a result, fourteen novel tubulin-colchicine inhibitors with promising pharmacokinetic characteristics were identified. Overall, this study provides valuable insights into the QSAR of 1,2,4-triazine-3(2H)-one derivatives as potential anticancer agents, along with the design of new compounds and their assessment through molecular docking and dynamics simulations targeting the tubulin-colchicine binding site.
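The statistics quoted above (R² and the cross-validated Q²cv) can be sketched for a one-descriptor linear model; the study itself used five DFT-derived descriptors, and the activity/descriptor values below are invented purely for illustration.

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def r2(x, y, a, b):
    """Coefficient of determination of the fitted model."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def q2_loo(x, y):
    """Leave-one-out cross-validated Q^2 (1 - PRESS/SS_tot)."""
    my = sum(y) / len(y)
    press = 0.0
    for i in range(len(x)):
        a, b = fit_line(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
        press += (y[i] - (a + b * x[i])) ** 2
    return 1 - press / sum((yi - my) ** 2 for yi in y)

x = [1.2, 2.0, 2.9, 4.1, 5.0, 6.2]   # hypothetical descriptor values
y = [0.9, 2.1, 3.2, 3.9, 5.2, 6.0]   # hypothetical activities
a, b = fit_line(x, y)
print(round(r2(x, y, a, b), 3), round(q2_loo(x, y), 3))
```

As in the abstract, Q²cv comes out lower than R² because each point is predicted by a model that never saw it, which is why both are reported when judging a QSAR model's reliability.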

Keywords: QSAR, molecular docking, ADMET, 1,2,4-triazin-3(2H)-ones, breast cancer, anticancer, molecular dynamic simulations, MMPBSA calculation

Procedia PDF Downloads 78
1442 Stable Isotope Ratios Data for Tracing the Origin of Greek Olive Oils and Table Olives

Authors: Efthimios Kokkotos, Kostakis Marios, Beis Alexandros, Angelos Patakas, Antonios Avgeris, Vassilios Triantafyllidis

Abstract:

H, C, and O stable isotope ratios were measured in olive oils and table olives originating from different regions of Greece. In particular, the stable isotope ratios of olive oils produced in the Lakonia region (Peloponnese, southern Greece) from the varieties cvs 'Athinolia' and 'Koroneiki' were determined. Additionally, stable isotope ratios were measured in table olives (cvs 'Koroneiki' and 'Kalamon') produced in the Messinia region. The aim of this study was to provide sufficient isotope-ratio data for each variety and region of origin that could be used in studies discriminating olive oils and table olives produced from different varieties in other regions. In total, 97 samples of olive oil (cvs 'Athinolia' and 'Koroneiki') and 67 samples of table olives (cvs 'Kalamon' and 'Koroneiki') collected during two consecutive sampling periods (2021-2022 and 2022-2023) were measured. The C, H, and O isotope ratios were measured using isotope ratio mass spectrometry (IRMS), and the results were analyzed using chemometric techniques. The isotope ratios were expressed in permille (‰) using the delta (δ) notation (δ = Rsample/Rstandard − 1, where Rsample and Rstandard represent the isotope ratios of the sample and the standard). Results indicate that the stable isotope ratios of C, H, and O were -28.5±0.45‰, -142.83±2.82‰, and 25.86±0.56‰ for olive oils produced in the Lakonia region from 'Athinolia' and -29.78±0.71‰, -143.62±1.4‰, and 26.32±0.55‰ for 'Koroneiki', respectively. The C, H, and O values for table olives from the Messinia region were -28.58±0.63‰, -138.09±3.27‰, and 25.45±0.62‰ for 'Kalamon' and -29.41±0.59‰, -137.67±1.15‰, and 24.37±0.6‰ for 'Koroneiki' olives, respectively.
Acknowledgments: This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH—CREATE—INNOVATE (Project code: T2EDK-02637; MIS 5075094, Title: ‘Innovative Methodological Tools for Traceability, Certification and Authenticity Assessment of Olive Oil and Olives’).
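The δ notation defined in the abstract above can be illustrated with a short numeric sketch. The VPDB ¹³C/¹²C ratio below is the commonly cited reference value; the sample ratio is invented for illustration and is not a measurement from the study.

```python
def delta_permille(r_sample, r_standard):
    """δ value in permille relative to a standard: δ = R_sample/R_standard − 1."""
    return (r_sample / r_standard - 1.0) * 1000.0

# 13C/12C of the VPDB standard (commonly cited reference value)
R_VPDB = 0.0111802
# a hypothetical olive-oil sample constructed to have δ13C = −28.5 ‰
R_sample = R_VPDB * (1 - 28.5e-3)
print(round(delta_permille(R_sample, R_VPDB), 1))   # → -28.5
```

Negative δ values, as reported for the olive oils above, simply mean the sample is depleted in the heavy isotope relative to the standard.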

Keywords: olive oil, table olives, isotope ratio, IRMS, geographical origin

Procedia PDF Downloads 42
1441 Computer Simulation of Hydrogen Superfluidity through Binary Mixing

Authors: Sea Hoon Lim

Abstract:

A superfluid is a fluid of bosons that flows without resistance. To become a superfluid, a substance's particles must obey Bose statistics yet remain mobile enough to flow. Bosons are particles that can occupy the same quantum state simultaneously: when a gas of bosons is cooled below its critical temperature, the particles accumulate in the lowest energy state, a phenomenon called Bose-Einstein condensation. For example, when helium is cooled below its critical temperature of 2.17 K, the liquid density drops and it becomes a superfluid with zero viscosity. However, most materials solidify, and thus do not remain fluids, at temperatures well above the temperature at which they would otherwise become superfluid. Only a few known substances are capable of at once remaining a fluid and manifesting Bose statistics; the best known are helium and its isotopes. Because hydrogen is lighter than helium, and thus expected to manifest Bose statistics at higher temperatures than helium, one might expect hydrogen also to be a superfluid. To date, however, no one has been able to produce a bulk hydrogen superfluid. The reason hydrogen has not formed a superfluid is its intermolecular interactions, which make hydrogen molecules much more likely to crystallize than their helium counterparts. The key to creating a hydrogen superfluid is therefore to reduce the effect of the interactions among hydrogen molecules, postponing solidification to lower temperatures. In this work, we attempt, via computer simulation, to produce bulk superfluid hydrogen through binary mixing: a technique of mixing two pure substances to frustrate crystallization and enhance superfluidity. Our mixture here is KALJ H2. We sample the partition function using Path Integral Monte Carlo (PIMC), which is well suited to the equilibrium properties of low-temperature bosons and captures not only the statistics but also the dynamics of hydrogen. Via this sampling, we produce a time evolution of the substance and examine whether it exhibits superfluid properties.
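The argument that hydrogen's lightness favors Bose statistics at higher temperature can be made quantitative with the ideal-gas Bose-Einstein condensation temperature, T_c = (2πħ²/(m k_B)) (n/ζ(3/2))^{2/3}. This is only a back-of-envelope sketch that ignores interactions (which is precisely why real hydrogen crystallizes first); the liquid densities used are rough literature values.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg
ZETA_3_2 = 2.6123753486854883  # Riemann zeta(3/2)

def bec_tc(mass_kg, number_density):
    """Ideal-gas Bose-Einstein condensation temperature in kelvin."""
    return (2 * math.pi * HBAR**2 / (mass_kg * K_B)) * \
           (number_density / ZETA_3_2) ** (2 / 3)

for name, m_amu, rho in [("He-4", 4.0026, 145.0), ("H2", 2.0159, 70.8)]:
    m = m_amu * AMU
    n = rho / m                  # molecules per m^3 from mass density
    print(f"{name}: T_c ≈ {bec_tc(m, n):.1f} K")
```

The estimate comes out around 3 K for liquid helium and roughly twice that for liquid hydrogen, consistent with the expectation stated above that hydrogen would, absent crystallization, condense at a higher temperature than helium.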

Keywords: superfluidity, hydrogen, binary mixture, physics

Procedia PDF Downloads 311
1440 Structural Properties of Surface Modified PVA: Zn97Pr3O Polymer Nanocomposite Free Standing Films

Authors: Pandiyarajan Thangaraj, Mangalaraja Ramalinga Viswanathan, Karthikeyan Balasubramanian, Héctor D. Mansilla, José Ruiz

Abstract:

Rare-earth-doped semiconductor nanostructures have gained much attention due to their novel physical and chemical properties, which lead to potential applications in laser technology as inexpensive luminescent materials. Doping rare earth ions into ZnO alters its electronic structure and emission properties. Surface modification (polymer covering) is one of the simplest techniques for modifying the emission characteristics of host materials. The present work reports the synthesis and structural properties of PVA:Zn97Pr3O polymer nanocomposite free-standing films. The Pr3+ doped ZnO nanostructures and the PVA:Zn97Pr3O free-standing films were prepared by colloidal chemical and solution casting techniques, respectively. The formation of the PVA:Zn97Pr3O films was confirmed through X-ray diffraction (XRD), absorption, and Fourier transform infrared (FTIR) spectroscopy analyses. XRD measurements confirm that the prepared materials are crystalline with a hexagonal wurtzite structure; the polymer composite film exhibits the diffraction peaks of both PVA and ZnO. TEM images reveal that the pure and Pr3+ doped ZnO nanostructures exhibit sheet-like morphology. Optical absorption spectra show the free-excitonic absorption band of ZnO at 370 nm, while the PVA:Zn97Pr3O polymer film shows absorption bands at ~282 and 368 nm, arising respectively from carbonyl-containing structures attached to the PVA polymer chains, mainly at the chain ends, and from the free-excitonic absorption of the ZnO nanostructures. The transmission spectrum of the as-prepared film shows 57-69% transparency in the visible and near-IR regions. FTIR spectral studies confirm the presence of the A1 (TO) and E1 (TO) modes of Zn-O bond vibration and the formation of the polymer composite material.

Keywords: rare earth doped ZnO, polymer composites, structural characterization, surface modification

Procedia PDF Downloads 353
1439 Evaluating the Perception of Roma in Europe through Social Network Analysis

Authors: Giulia I. Pintea

Abstract:

The Roma people are a nomadic ethnic group native to India and one of the most prevalent minorities in Europe. In the past, Roma were enslaved, and during the Holocaust they were imprisoned in concentration camps; today, Roma are subject to hate crimes and are denied access to healthcare, education, and proper housing. The aim of this project is to analyze how the public perception of the Roma people may be influenced by antiziganist and pro-Roma institutions in Europe. To carry out this project, we used social network analysis to build two large social networks: the antiziganist network, composed of institutions that oppress and racialize Roma, and the pro-Roma network, composed of institutions that advocate for and protect Roma rights. Measures of centrality, density, and modularity were obtained to determine which of the two networks exerts the greater influence on the public's perception of Roma in European societies. Furthermore, data on hate crimes against Roma were gathered from the Organization for Security and Co-operation in Europe (OSCE). We analyzed trends in hate crimes against Roma in several European countries for 2009-2015 to see whether the public's perception of Roma has changed, helping us evaluate which of the two networks has been more influential. Overall, the results suggest a greater and faster exchange of information in the pro-Roma network. However, when the hate crimes are taken into account, the impact of the pro-Roma institutions is ambiguous, owing to differing patterns among European countries, suggesting that its impact is inconsistent. 
Despite the antiziganist network's slower flow of information, the hate-crime patterns also suggest that it has a higher impact in certain countries, which may be due to institutions outside the political sphere boosting the spread of antiziganist ideas and information to the European public.
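Two of the network measures mentioned above, density and degree centrality, can be computed directly from an edge list. The toy graph below uses hypothetical institution names and links, not the study's data.

```python
def density(n_nodes, edges):
    """Fraction of all possible undirected edges that are present."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

def degree_centrality(nodes, edges):
    """Degree of each node normalised by (n - 1), the maximum possible degree."""
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {v: d / (len(nodes) - 1) for v, d in deg.items()}

# hypothetical advocacy network: NGO_A links to the three others
nodes = ["NGO_A", "NGO_B", "NGO_C", "NGO_D"]
edges = [("NGO_A", "NGO_B"), ("NGO_A", "NGO_C"), ("NGO_A", "NGO_D")]
print(density(len(nodes), edges))       # → 0.5 (3 of 6 possible edges)
print(degree_centrality(nodes, edges))  # NGO_A has centrality 1.0
```

A denser network with high-centrality hubs is exactly the structure the abstract associates with a faster exchange of information.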

Keywords: applied mathematics, oppression, Roma people, social network analysis

Procedia PDF Downloads 261
1438 The Effect of Nano-Silver Packaging on Quality Maintenance of Fresh Strawberry

Authors: Naser Valipour Motlagh, Majid Aliabadi, Elnaz Rahmani, Samira Ghorbanpour

Abstract:

Strawberry is one of the most favored fruits around the world. But due to its vulnerability to microbial contamination and its short storage life, there are many problems in the industrial production and transportation of this fruit. Many approaches have therefore been tried to increase the storage life of strawberries, especially through proper packaging; this paper likewise works on efficient packaging. The primary material was produced by simple mixing of low-density polyethylene (LDPE) and silver nanoparticles at weight fractions of 0.5 and 1% in the presence of dicumyl peroxide as a cross-linking agent. The final packages were made in a twin-screw extruder, and their effect on the quality maintenance of strawberries was evaluated. SEM images of the nano-silver packages show the distribution of silver nanoparticles in the packages. Total bacteria count, mold, yeast, and E. coli were measured for the microbial evaluation of all samples. Texture, color, appearance, odor, taste, and overall acceptance of the samples were evaluated by trained panelists using the 9-point hedonic scale method. The results show a decrease in total bacteria count and mold in nano-silver packages compared to samples packed in plain polyethylene for the same storage time. The optimum concentration of silver nanoparticles for the lowest bacteria count and mold is around 0.5%, which also attained the highest acceptance from the panelists. Moreover, the organoleptic properties of strawberries are preserved for a longer period in nano-silver packages. It can be concluded that using silver nanoparticles in strawberry packaging improves the storage life and quality maintenance of the fruit.

Keywords: antimicrobial properties, polyethylene, silver nanoparticles, strawberry

Procedia PDF Downloads 146
1437 Impact of Air Flow Structure on Distinct Shape of Differential Pressure Devices

Authors: A. Bertašienė

Abstract:

Energy harvesting from any structure poses a challenge. The varied structure of air/wind flows in industrial, environmental, and residential applications calls for detailed investigation of the real flows, and many application fields are hard to describe in detail owing to the lack of up-to-date statistical data. In situ measurements require substantial investment, so simulation methods are used to carry out structural analysis of the flows. Different configurations of the testing environment give an overview of how strongly the structure of the flow field in a limited area affects the efficiency of system operation and the energy output. Several configurations of modeled working sections in an air flow test facility were implemented in the CFD ANSYS environment to compare, experimentally and numerically, the stages and forms of air flow development that affect the efficiency of devices and processes. The effective form and magnitude of these flows under different geometries determine the choice of instruments that measure fluid flow parameters for effective system operation and for quantifying emission flows. Different fluid flow regimes were examined to show the impact of fluctuations on the development of the whole flow volume in a specific environment. The obtained results raise the question of how similar these simulated flow fields are to those of real applications. The experimental results show some discrepancies from the simulations, owing to models being applied to flows in the initial, not yet developed, stage and to the difficulty such models have in covering transitional regimes. Recommendations are essential for energy harvesting systems in both indoor and outdoor cases. Further investigations will shift to experimental analysis of flows under laboratory conditions using state-of-the-art techniques such as flow visualization, and later to in situ situations, which are complicated, costly, and time-consuming to study.

Keywords: fluid flow, initial region, tube coefficient, distinct shape

Procedia PDF Downloads 331
1436 Structural Evolution of Electrodeposited Ni Coating on Ti-6Al-4V Alloy during Heat Treatment

Authors: M. Abdoos, A. Amadeh, M. Adabi

Abstract:

In recent decades, the use of titanium and its alloys has increased in military and industrial applications due to their high mechanical properties, light weight, and corrosion resistance. However, poor surface properties can limit their wide usage, and much research has been carried out to improve them. The most effective technique is based on solid-state diffusion of elements that can form intermetallic compounds with the substrate. In the present work, the inter-diffusion of nickel and titanium and the formation of Ni-Ti intermetallic compounds in nickel-coated Ti-6Al-4V alloy have been studied. Initially, nickel was electrodeposited on the alloy from a Watts bath at a current density of 20 mA/cm² for 1 hour. The coated specimens were then heat treated in a tubular furnace under an argon atmosphere at different temperatures near the Ti β-transus, for various durations, to maximize the diffusion rate and thereby improve the surface properties of the Ti-6Al-4V alloy. The effect of temperature and time on the thickness of the diffusion layer and the characteristics of the intermetallic phases was studied by scanning electron microscopy (SEM) equipped with energy-dispersive X-ray spectroscopy (EDS) and by microhardness testing. The results showed that a multilayer structure formed after heat treatment: an outer layer of remaining nickel, a region of intermetallic layers of different compositions, and a Ni-Ti solid solution. Three intermetallic layers were detected by EDS analysis: an outer layer with about 75 at.% Ni (Ni3Ti), an intermediate layer with 50 at.% Ni (NiTi), and an inner layer with 36 at.% Ni (NiTi2). It was also observed that increasing time or temperature led to the formation of thicker intermetallic layers. Meanwhile, the microhardness of the heat-treated samples increased with the formation of Ni-Ti intermetallics, with values depending on the heat treatment parameters.
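The observation that the intermetallic layers thicken with time and temperature is consistent with diffusion-controlled (parabolic) growth, x ≈ √(Dt), with an Arrhenius diffusivity D = D₀ exp(−Q/RT). The sketch below uses illustrative placeholder values for D₀ and Q, not quantities measured for the Ni/Ti couple in this work.

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def layer_thickness(d0, q, temp_k, time_s):
    """Parabolic estimate of diffusion-layer thickness in metres."""
    d = d0 * math.exp(-q / (R_GAS * temp_k))  # Arrhenius diffusivity, m^2/s
    return math.sqrt(d * time_s)

# illustrative prefactor and activation energy (placeholders)
D0, Q = 1.0e-4, 2.5e5   # m^2/s, J/mol
for T in (1150.0, 1250.0):          # kelvin, near the Ti beta-transus
    for t in (3600.0, 14400.0):     # 1 h and 4 h anneals
        print(f"T={T:.0f} K, t={t:.0f} s: "
              f"{layer_thickness(D0, Q, T, t) * 1e6:.2f} um")
```

The parabolic law doubles the thickness when the time is quadrupled, while a modest temperature increase grows the layer much faster through the exponential factor, matching the trend reported above.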

Keywords: heat treatment, microhardness, Ni coating, Ti-6Al-4V

Procedia PDF Downloads 430
1435 Study of Hypertension at Sohag City: Upper Egypt Experience

Authors: Aly Kassem, Eman Sapet, Eman Abdelbaset, Hosam Mahmoud

Abstract:

Objective: Hypertension is an important public health challenge, being one of the most common diseases affecting humans worldwide. Our aim was to study the clinical characteristics, therapeutic regimens, treatment compliance, and risk factors in a sector of hypertensive patients in Sohag City. Subjects and Methods: A cross-sectional study conducted in Sohag City involving 520 patients, males (45.7%) and females (54.3%), aged 35-85 years. BP measurements, BMI, blood glucose, serum creatinine, urine analysis, serum lipids, blood picture, and ECG were performed for all studied patients. Results: Hypertension was more prevalent among non-smokers (72.55%), females (54.3%), educated patients (50.99%), and patients of low socioeconomic status (54.9%). CAD was present in 51.63% of patients, while laboratory investigations showed hyperglycemia in 28.7%, anemia in 18.3%, elevated serum creatinine in 8.49%, and proteinuria in 10.45% of patients. Adequate BP control was achieved in 49.67%; older patients had lower adequacy of BP control despite the extensive use of multiple-drug therapy. Most hypertensive patients had more than one coexistent cardiovascular risk factor. Aging, female sex (54.3%), DM (32.3%), family history of hypertension (28.7%), family history of CAD (25.4%), and obesity (10%) were the common contributing risk factors. ACE inhibitors were prescribed to 58.16% and beta-blockers to 34.64% of the patients. Monotherapy was prescribed for 41.17% of the patients, and 75.81% of patients used their drug regimens regularly. Only 49.67% of patients had their condition under control; the number of drugs was inversely related to BP control. Conclusion: Hypertensive patients in Sohag City had a profile of high cardiovascular risk and poor blood pressure control, particularly in the elderly. 
A multidisciplinary approach involving routine clinical check-ups, follow-up, training of physicians and patients, prescription of simple once-daily regimens, and encouragement of lifestyle modifications is recommended.

Keywords: anti hypertensives, hypertension, elderly patients, risk factors, treatment compliance

Procedia PDF Downloads 291
1434 Numerical Investigation on Anchored Sheet Pile Quay Wall with Separated Relieving Platform

Authors: Mahmoud Roushdy, Mohamed El Naggar, Ahmed Yehia Abdelaziz

Abstract:

Anchored sheet pile walls have been used worldwide as front quay walls for decades. With the increase in vessel drafts and weights, these sheet pile walls need to be upgraded by increasing the depth of the dredging line in front of the wall. A system has recently been used to increase the depth in front of the wall by installing a separated platform supported on deep foundations (a so-called relieving platform) behind the sheet pile wall; the platform is structurally separated from the front wall. This paper presents a numerical investigation, utilizing finite element analysis, of the behavior of separated relieving platforms installed within existing anchored sheet pile quay walls. The investigation proceeded in two steps: a verification step followed by a parametric study. In the verification step, the numerical model was validated against field measurements performed by others. The validated model was then extended, within the parametric study, to a series of models with different backfill soils, separation gap widths, and numbers of pile rows supporting the platform. The results of the numerical investigation show that using stiff clay as backfill soil (neglecting consolidation) gives better performance for the front wall and the first pile row than sandy backfills. The degree of compaction of a sandy backfill slightly increases lateral deformations but reduces the bending moment acting on the pile rows, while its effect on the front wall is minor. In addition, increasing the separation gap width gradually increases the bending moment on the front wall regardless of the backfill soil type, while the effect on the pile rows is reversed (a gradual decrease). Finally, the paper studies the possibility of reducing the number of pile rows, taking advantage of the positive effect of the separation on the piles.

Keywords: anchored sheet pile, relieving platform, separation gap, upgrade quay wall

Procedia PDF Downloads 78
1433 Economic Expansion and Land Use Change in Thailand: An Environmental Impact Analysis Using Computable General Equilibrium Model

Authors: Supakij Saisopon

Abstract:

The process of economic development incurs spatial transformation. This spatial alteration also causes environmental impacts, leading to higher pollution. In the case of Thailand, there is still a lack of price-endogenous quantitative analysis incorporating the relationships among economic growth, land-use change, and environmental impact. Therefore, this paper aimed at developing a Computable General Equilibrium (CGE) model capable of simulating such mutual effects. The developed CGE model incorporates a nested constant elasticity of transformation (CET) structure that describes the spatial redistribution mechanism between agricultural land and urban areas. The simulation results showed that a 1% decrease in the availability of agricultural land lowers the value-added of agriculture by 0.036%. Similarly, a 1% reduction in the availability of urban areas decreases the value-added of the manufacturing and service sectors by 0.05% and 0.047%, respectively. Moreover, the outcomes indicate that expanding farming and urban areas induces higher volumes of solid waste, wastewater, and air pollution. Specifically, a 1% increase in the urban area increases pollution as follows: (1) solid waste increases by 0.049%, (2) water pollution, indicated by the biochemical oxygen demand (BOD) value, increases by 0.051%, and (3) air pollution, indicated by the volumes of CO₂, N₂O, NOₓ, CH₄, and SO₂, increases within the range of 0.045%–0.051%. In the simulation exploring a sustainable development path, a 1% increase in agricultural land-use efficiency leads to shrinking demand for agricultural land. This does not hold for urban land: a 1% increase in urban land-use efficiency still results in increasing demand for land. Therefore, advanced clean production technology is necessary to align increasing land-use efficiency with lower pollution density.
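The reported elasticities can be read as first-order multipliers. A minimal sketch (illustrative only, since the actual CGE model solves these effects endogenously rather than applying fixed coefficients) treats them as linear responses:

```python
# Hypothetical linearization of the abstract's reported elasticities:
# percent change in sector value-added per 1% change in land availability.
# The real CGE model computes these endogenously; this is only a reading aid.
ELASTICITIES = {
    "agriculture": 0.036,    # vs. agricultural land availability
    "manufacturing": 0.050,  # vs. urban land availability
    "services": 0.047,       # vs. urban land availability
}

def value_added_change(sector: str, land_change_pct: float) -> float:
    """First-order estimate of % value-added change for a % land change."""
    return ELASTICITIES[sector] * land_change_pct

# A 1% loss of agricultural land lowers agricultural value-added by ~0.036%
print(value_added_change("agriculture", -1.0))  # -0.036
```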

Keywords: CGE model, CET structure, environmental impact, land use

Procedia PDF Downloads 220
1432 Correlates of Modes of Transportation to Work among Working Adults in Ernakulam District, Kerala

Authors: Anjaly Joseph, Elezebeth Mathews

Abstract:

Transportation and urban planning is the least recognised area for physical activity promotion in India, unlike in developed regions. Identifying the preferred transportation modalities and the factors associated with them is essential to address this lacuna. The objective of the study was to assess the prevalence of modes of transportation to work, and their correlates, among working adults in Ernakulam District, Kerala. A cross-sectional study was conducted among 350 working individuals aged 18-60 years, selected through multi-stage stratified random sampling in Ernakulam district of Kerala. The inclusion criteria were: age 18-60 years, a workplace more than 1 km from home, and working five or more days a week. Pregnant women/women on maternity leave and professional drivers (taxi, autorickshaw, and lorry drivers) were excluded. An interview schedule was used to capture the modes of transportation, namely public, private and active transportation, along with socio-demographic details, travel behaviour, anthropometric measurements and health status. Nearly two-thirds (64 percent) used private transportation to work, while active commuters were only 6.6 percent. The correlates identified for active commuting compared to other modes were low socio-economic status (OR=0.22, CI=0.5-0.85) and possession of a driving license (OR=4.95, CI=1.59-15.45). The correlates identified for public transportation compared to private transportation were female gender (OR=17.79, CI=6.26-50.31), low income (OR=0.33, CI=0.11-0.93), being unmarried (OR=5.19, CI=1.46-8.37), having no more than one private vehicle in the household (OR=4.23, CI=1.24-20.54) and convenient public transportation to the workplace (OR=3.97, CI=1.66-9.47). The association between body mass index (BMI) and public transportation was also explored: public transport users had lower BMI than private commuters (OR=2.30, CI=1.23-4.29).
Policies that encourage active and public transportation need to be introduced, such as discouraging private vehicles through taxes, and introducing convenient and safe public transportation, walking/cycling paths, and paid parking.
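The correlates above are reported as odds ratios with confidence intervals. A minimal sketch of how such an OR and its 95% CI are computed from a 2×2 exposure table (the counts below are hypothetical, not the study's data):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    exposed cases=a, exposed non-cases=b, unexposed cases=c, unexposed non-cases=d.
    Uses the standard log-OR normal approximation (Woolf method)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 10/20 active commuters vs. 5/40 others with some exposure
print(odds_ratio_ci(10, 20, 5, 40))
```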

Keywords: active transportation, correlates, India, public transportation, transportation modes

Procedia PDF Downloads 156
1431 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth's ionosphere is located at altitudes from about 70 km to several hundred km above the ground, and it is composed of ions and electrons called plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere have long been used to investigate ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are primarily built for land survey, has been conducted in several countries. However, these stations install multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation, such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC. The observation site was Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location, under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements from the Japanese GNSS observation network GEONET.
The performance of the single-frequency GPS measurements was compared with the results of dual-frequency measurement.
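The polynomial least-squares step can be sketched as below, under simplifying assumptions: synthetic vertical-TEC samples and a plain polynomial surface, whereas the paper fits pseudorange-derived TEC through a thin-shell mapping function.

```python
import numpy as np

def fit_tec_surface(lat, lon, vtec, order=2):
    """Fit VTEC(lat, lon) = sum c_ij * lat^i * lon^j for i + j <= order,
    solving the coefficients by linear least squares."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([lat**i * lon**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, vtec, rcond=None)
    return terms, coeffs

def eval_tec(terms, coeffs, lat, lon):
    """Evaluate the fitted polynomial surface at one point."""
    return sum(c * lat**i * lon**j for (i, j), c in zip(terms, coeffs))

# Synthetic check: recover a known surface over a rough lat/lon box for Myanmar.
rng = np.random.default_rng(0)
lat = rng.uniform(15, 25, 200)
lon = rng.uniform(92, 101, 200)
truth = 20 + 0.5 * (lat - 20) - 0.3 * (lon - 96)  # TECU, illustrative
terms, coeffs = fit_tec_surface(lat, lon, truth)
```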

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 124
1430 The Study of Platelet-Rich Plasma (PRP) on Wounds of OLETF Rats Using Expression of MMP-2, MMP-9 mRNA

Authors: Ho Seong Shin

Abstract:

Introduction: Research on wound healing has shown that platelet-rich plasma (PRP) is effective for normal tissue regeneration. Nonetheless, there is no evidence that, when applied to diabetic wounds, PRP normalizes the diabetic wound healing process. In this study, we analyzed matrix metalloproteinase-2 (MMP-2) and matrix metalloproteinase-9 (MMP-9) expression to assess the effect of PRP on diabetic wounds, using reverse transcription-polymerase chain reaction (RT-PCR) of MMP-2 and MMP-9 mRNA. Materials and Methods: Platelet-rich plasma (PRP) was prepared from the blood of 6 rats. The whole blood (120 mL) was added immediately to an anticoagulant, citrate phosphate dextrose (CPD) buffer (0.15 mg CPD/mL), in a ratio of 1 mL of CPD buffer to 5 mL of blood. The blood was then centrifuged at 220 g for 20 minutes. The supernatant was saved to produce fibrin glue. The precipitate containing PRP was used for a second centrifugation at 480 g for 20 minutes. The pellet from the second centrifugation was saved and diluted with supernatant until the platelet concentration reached 900,000/μL. Twenty male, 4-week-old OLETF rats underwent surgery; each rat had two wounds created, one on the left and one on the right side. Each left-side wound was treated with PRP gel, and each right-side wound was treated with physiologic saline gauze. Results: RT-PCR analysis: the levels of MMP-2 mRNA in PRP-applied tissues increased with postwounding days, whereas MMP-2 mRNA expression in saline-applied tissues remained unchanged at day 5 after treatment. MMP-9 mRNA was undetectable in saline-applied tissues except at day 3 after treatment. In PRP-applied tissues, MMP-9 mRNA expression was detected, with maximal expression seen on the third day. The levels of MMP-9 mRNA in PRP-applied tissues showed higher optical density than those in saline-applied tissues.

Keywords: diabetes, MMP-2, MMP-9, OLETF, PRP, wound healing

Procedia PDF Downloads 264
1429 Investigation of the Growth Kinetics of Phases in Ni–Sn System

Authors: Varun A Baheti, Sanjay Kashyap, Kamanio Chattopadhyay, Praveen Kumar, Aloke Paul

Abstract:

The Ni–Sn system finds applications in the microelectronics industry, especially with respect to flip-chip or direct chip attach technology. The region of interest is the interface between the under bump metallization (UBM) and the solder bump (Sn), due to the formation of brittle intermetallic phases there. Understanding the growth of these phases at the UBM/Sn interface is important, as in many cases it controls the electro-mechanical properties of the product. Cu and Ni are the commonly used UBM materials. Cu is used for good bonding because of its fast reaction with solder, and Ni often acts as a diffusion barrier layer due to its inherently slower reaction kinetics with Sn-based solders. This study investigates the growth kinetics of phases in the Ni–Sn system. For simplicity, Sn, the major solder constituent, is chosen. Ni–Sn electroplated diffusion couples were prepared by electroplating pure Sn on a Ni substrate. Bulk diffusion couples prepared by the conventional method were also studied along with the Ni–Sn electroplated diffusion couples. Diffusion couples were annealed for 25–1000 h at 50–215°C to study the phase evolution and growth kinetics of the various phases. The interdiffusion zone was imaged using a field-emission-gun scanning electron microscope (FE-SEM). Indexing of selected area diffraction (SAD) patterns obtained by transmission electron microscopy (TEM), together with composition measurements in a field-emission electron probe micro-analyser (FE-EPMA), confirmed the product phases grown across the interdiffusion zone. Time-dependent experiments indicate diffusion-controlled growth of the product phase. The activation energy estimated in the temperature range 125–215°C for the parabolic growth constants (and hence the integrated interdiffusion coefficients) of the Ni₃Sn₄ phase sheds light on the growth mechanism of the phase, i.e., whether it is controlled by grain-boundary or lattice diffusion.
The location of the Kirkendall marker plane indicates that the Ni₃Sn₄ phase grows mainly by diffusion of Sn in the binary Ni–Sn system.
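The kinetics analysis described above can be sketched as follows, assuming the standard parabolic law x² = 2kt for a diffusion-controlled layer and an Arrhenius dependence k = k₀·exp(−Q/RT); the numbers are illustrative, not the paper's measurements.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def parabolic_k(thickness_m: float, time_s: float) -> float:
    """Parabolic growth constant k (m^2/s) from one (thickness, time) pair,
    assuming x^2 = 2*k*t for diffusion-controlled layer growth."""
    return thickness_m**2 / (2.0 * time_s)

def activation_energy(T1: float, k1: float, T2: float, k2: float) -> float:
    """Q (J/mol) from growth constants at two temperatures via the
    two-point Arrhenius form: ln(k1/k2) = (Q/R) * (1/T2 - 1/T1)."""
    return R * math.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)

# e.g. a 2 µm layer after 25 h anneal (hypothetical values)
k_example = parabolic_k(2e-6, 25 * 3600)
```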

Keywords: diffusion, equilibrium phase, metastable phase, the Ni-Sn system

Procedia PDF Downloads 296
1428 Coupling Static Multiple Light Scattering Technique With the Hansen Approach to Optimize Dispersibility and Stability of Particle Dispersions

Authors: Guillaume Lemahieu, Matthias Sentis, Giovanni Brambilla, Gérard Meunier

Abstract:

Static Multiple Light Scattering (SMLS) has been shown to be a straightforward technique for the characterization of colloidal dispersions without dilution, as multiply scattered light in backscattered and transmitted modes is directly related to the concentration and size of the scatterers present in the sample. Accordingly, the use of SMLS for stability measurement of various dispersion types has already been widely described in the literature. Indeed, starting from a homogeneous dispersion, the variation of backscattered or transmitted light can be attributed to destabilization phenomena, such as migration (sedimentation, creaming) or particle size variation (flocculation, aggregation). To further investigate the dispersibility of colloidal suspensions, an experimental set-up for at-line SMLS measurement has been developed to understand the impact of formulation parameters on particle size and dispersibility. The SMLS experiment is performed at a high acquisition rate (up to 10 measurements per second), without dilution, and under direct agitation. Using this experimental device, SMLS detection can be combined with the Hansen approach to optimize the dispersing and stabilizing properties of TiO₂ particles. It appears that the dispersibility and stability spheres generated are clearly separated, indicating that lower stability is not necessarily a consequence of poor dispersibility. Beyond this clarification, the combined SMLS-Hansen approach is a major step toward optimizing the dispersibility and stability of colloidal formulations by finding solvents with the best compromise between dispersing and stabilizing properties.
Such a study can help identify better dispersion media and greener, cheaper solvents to optimize particle suspensions, reduce the content of costly stabilizing additives, or satisfy evolving product regulatory requirements in the various industrial fields using suspensions (paints and inks, coatings, cosmetics, energy).
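For reference, the Hansen bookkeeping behind such spheres can be sketched as below; the solubility-parameter values used are illustrative, not measurements from the study.

```python
import math

def hansen_distance(d1, p1, h1, d2, p2, h2):
    """Hansen distance Ra between two (dispersive, polar, hydrogen-bonding)
    parameter sets, in MPa^0.5: Ra^2 = 4(dD)^2 + (dP)^2 + (dH)^2."""
    return math.sqrt(4 * (d1 - d2) ** 2 + (p1 - p2) ** 2 + (h1 - h2) ** 2)

def red(ra, r0):
    """Relative energy difference: RED < 1 means the solvent lies inside
    the interaction sphere of radius r0 (a 'good' solvent)."""
    return ra / r0

# compare a hypothetical particle-surface sphere center against a candidate
# solvent (all parameter values here are made up for illustration)
ra = hansen_distance(18.0, 9.0, 7.0, 16.8, 5.7, 8.0)
print(red(ra, 8.0))  # < 1 would indicate a good solvent for this sphere
```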

Keywords: dispersibility, stability, Hansen parameters, particles, solvents

Procedia PDF Downloads 89
1427 Effects of in Ovo Injection of Royal Jelly on Hatchability, One-Day Old Chickens Quality, Total Antioxidant Status and Blood Lipoproteins

Authors: Amin Adeli, Maryam Zarei

Abstract:

Background and purpose: Royal jelly (RJ) is a natural product with anti-hyperlipidemic and antioxidant properties. In ovo administration of RJ may improve the lipid profile and antioxidant status. This study was conducted to evaluate, for the first time, the effects of in ovo injection of RJ on hatchability, one-day-old chick quality, total antioxidant status and blood lipoproteins. Methods: 400 incubating eggs produced by the Ross 308 strain (52 weeks of age, in the first stage of production) were prepared and assigned to 4 groups (n=100) with 4 replications per group (n=25). The 4 groups were injected as follows: 1) 0.1 ml normal saline (control), 2) 0.1 mg RJ + 0.1 ml normal saline, 3) 0.2 mg RJ + 0.1 ml normal saline, and 4) 0.3 mg RJ + 0.1 ml normal saline. Injections were performed using a laminar flow system. Lipid profile, antioxidant properties, hatchability, and one-day-old chick quality were assessed. Results: Administration of RJ at 0.1 mg increased the percentage of hatchability compared to 0.2 mg and the control; no significant differences were observed among groups for quality scores (P>0.05). In ovo injection of RJ did not have any significant effect on the lipid profile, except that RJ administration decreased high-density lipoprotein cholesterol (HDL-C) (P<0.05). Injection of RJ at 0.3 mg increased total antioxidant capacity (TAC) compared to the control group (p<0.05). Injection of RJ progressively increased glutathione peroxidase (GPx) activity (p<0.05), while it decreased superoxide dismutase (SOD) compared to the control group (p<0.05). Conclusion: In ovo injection of RJ at the highest concentration increased TAC and GPx, but it had no significant effect on the lipid profile. Future studies are needed to investigate the effects of RJ on the above-mentioned mechanisms.

Keywords: antioxidant enzymes, chicken quality, hatchability, royal jelly

Procedia PDF Downloads 80
1426 Guidelines for Sustainable Urban Mobility in Historic Districts from International Experiences

Authors: Tamer ElSerafi

Abstract:

In recent approaches to heritage conservation, the whole context of a historic area becomes as important as the single historic building. This makes the provision of infrastructure and a mobility network an effective element in urban conservation. Sustainable urban conservation projects consider the high density of activities, the need for good-quality access to the transit system, and the importance of the configuration of the mobility network, identifying the best way to connect the different districts of the urban area through a coherent system that supports synergic development towards a sustainable mobility system. Sustainable urban mobility is a key factor in maintaining the integrity between socio-cultural and functional aspects. The first part of this paper illustrates the mobility aspects, mobility problems in historic districts, and the needs of mobility systems. The second part is a practical analysis of different mobility plans. It is challenging to find innovative and creative conservation solutions fitting modern uses and needs without risking the loss of inherited built resources. Urban mobility management is becoming an essential and challenging issue in urban conservation projects. Based on a literature review and practical analysis, this paper defines and clarifies guidelines for mobility management in historic districts as a key element in the sustainability of urban conservation and development projects. Such rules and principles could control the conflict between socio-cultural and economic activities and the different needs for mobility in these districts in a sustainable way. The practical analysis includes a comparison between mobility plans implemented in three cities: Freiburg in Germany, Zurich in Switzerland and Bray in Ireland.
The paper concludes with a matrix of guidelines that considers both principles of sustainability and livability factors in urban historic districts.

Keywords: sustainable mobility, urban mobility, mobility management, historic districts

Procedia PDF Downloads 149
1425 Mechanical Tests and Analyses of the Behavior of High-Performance Polyester Resins Reinforced With Unifilo Fiberglass

Authors: Băilă Diana Irinel, Păcurar Răzvan, Păcurar Ancuța

Abstract:

In recent years, composite materials have been increasingly used in automotive, aeronautic, aerospace, and construction applications. In aerospace, composite materials have been used in applications such as engine blades, brackets, interiors, nacelles, propellers/rotors, single-aisle wings, and wide-body wings. The fields of use of composite materials have multiplied with the improvement of material properties, such as stability and adaptation to the environment, mechanical strength, wear resistance, and moisture resistance. Composite materials are classified by matrix material as metallic, polymeric and ceramic-based composites, and grouped by reinforcement type into fibre, particulate and laminate composites. A better material is more likely obtained by combining two or more materials with complementary properties. The best combination of strength and ductility may be accomplished in solids that consist of fibres embedded in a host material. Polyester is a suitable matrix for composite materials, as it adheres readily to the particles, sheets, or fibres of the other components. The important properties of the reinforcing fibres are their high strength and high modulus of elasticity. For applications, as in the automotive or aeronautical domains, in which a high strength-to-weight ratio is important, non-metallic fibres such as fiberglass have a distinct advantage because of their low density. In general, the glass fibre content varied between 9 and 33 wt.% in the composites. In this article, high-performance glass-epoxy and glass-polyester composite materials used in the automotive domain are analyzed by performing tensile and flexural tests and SEM analyses.

Keywords: glass-polyester composite, glass fibre, tensile and flexural tests, SEM analyses

Procedia PDF Downloads 147
1424 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
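The ensembling step described in the abstract, averaging the outputs of the three top-performing models, can be sketched as follows; the probability arrays below are stand-ins for the actual logistic regression, random forest, and neural network models.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class-probability arrays of shape
    (n_samples, n_classes) and return the arg-max class per sample."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=1)

# three mock models' probabilities for 2 samples over 3 air-quality classes
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p3 = np.array([[0.5, 0.4, 0.1], [0.1, 0.2, 0.7]])
print(ensemble_predict([p1, p2, p3]))  # [0 2]
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh two uncertain ones, which is one way an ensemble can beat each individual member.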

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 108
1423 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units

Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro

Abstract:

In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days. If a relocation or a change of period is required, the consumer must be notified in writing, in advance of a billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating such a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee's workload and the geographical position of the consumer units. This process is currently done manually by experts with experience in the geographic formation of the region; it takes many days to complete the final plan, and, being a human activity, there is no guarantee of finding the optimal planning. In this paper, the GBKMeans method combines K-Means and genetic algorithms to create capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of workload and 11.97% in the compactness of the groups.
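A toy illustration of the GBKMeans idea, a genetic search over centroid seedings whose fitness rewards both compactness and balanced workload, might look like this (1-D consumer positions and unit workloads; the real method handles geography and reading-time constraints):

```python
import random

def assign(points, centroids):
    """Nearest-centroid assignment (k-means step on 1-D positions)."""
    return [min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
            for p in points]

def fitness(points, centroids):
    """Lower is better: within-cluster spread plus a workload-balance penalty."""
    labels = assign(points, centroids)
    compact = sum(abs(p - centroids[l]) for p, l in zip(points, labels))
    loads = [labels.count(c) for c in range(len(centroids))]
    mean = len(points) / len(centroids)
    balance = sum((l - mean) ** 2 for l in loads)  # workload deviation proxy
    return compact + balance

def gbkmeans(points, k, pop=20, gens=40, seed=1):
    """Evolve centroid seedings: truncation selection, averaging crossover,
    and small random mutation."""
    rng = random.Random(seed)
    popn = [sorted(rng.sample(points, k)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: fitness(points, c))
        parents = popn[:pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            child = sorted((x + y) / 2 for x, y in zip(a, b))  # crossover
            if rng.random() < 0.3:                             # mutation
                child[rng.randrange(k)] += rng.uniform(-1, 1)
            children.append(child)
        popn = parents + children
    return min(popn, key=lambda c: fitness(points, c))
```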

Keywords: capacitated clustering, k-means, genetic algorithm, districting problems

Procedia PDF Downloads 184
1422 Plasma Engineered Nanorough Substrates for Stem Cells in vitro Culture

Authors: Melanie Macgregor-Ramiasa, Isabel Hopp, Patricia Murray, Krasimir Vasilev

Abstract:

Stem cell based therapies are one of the greatest promises of new-age medicine due to their potential to help cure dreaded conditions such as cancer, diabetes and even autoimmune disease. However, establishing suitable in vitro culture materials that allow control of the fate of stem cells remains a challenge. Amongst the factors influencing stem cell behavior, substrate chemistry and nanotopography are particularly critical. In this work, we used plasma-assisted surface modification methods to produce model substrates with tailored nanotopography and controlled chemistry. Three different sizes of gold nanoparticles were bound to amine-rich plasma polymer layers to produce homogeneous and gradient surface nanotopographies. The outer chemistry of the substrate was kept constant for all substrates by depositing a thin layer of our patented biocompatible polyoxazoline plasma polymer on top of the nanofeatures. For the first time, protein adsorption and stem cell behaviour (mouse kidney stem cells and mesenchymal stem cells) were evaluated on nanorough plasma-deposited polyoxazoline thin films. Compared to other nitrogen-rich coatings, polyoxazoline plasma polymer supports the covalent binding of proteins. Moderate surface nanoroughness, in both feature size and density, triggers cell proliferation. In association with the polyoxazoline coating, cell proliferation is further enhanced on nanorough substrates. The results are discussed in terms of the substrates' wetting properties. These findings provide valuable insights into the mechanisms governing the interactions between stem cells and their growth support.

Keywords: nanotopography, stem cells, differentiation, plasma polymer, oxazoline, gold nanoparticles

Procedia PDF Downloads 270
1421 A Low-Cost of Foot Plantar Shoes for Gait Analysis

Authors: Zulkifli Ahmad, Mohd Razlan Azizan, Nasrul Hadi Johari

Abstract:

This paper presents the development and testing of a wearable sensor system for gait analysis. For validation, plantar surface measurement by force plate was used as the reference method. In conventional gait analysis, force plates typically capture barefoot measurements of single steps and do not allow analysis of repeated steps during normal walking and running. Such measurements do not represent everyday plantar pressures inside the shoe insole and obtain only the ground reaction force. Force plate measurement is usually limited to a few steps, is performed indoors, and does not easily provide coupled information from both feet during walking. To measure pressure over a large number of steps and obtain the pressure at each part of the insole, sensors can instead be placed within the insole itself. This approach makes it possible to determine the plantar pressures of a shoe-wearing subject while standing, walking, or running. Placing pressure sensors in the insole provides location-specific information, so the choice of sensor placement determines which critical regions under the insole are captured. In this project, the device consists of left and right shoe insoles, each instrumented with ten force-sensitive resistors (FSRs). An Arduino Mega microcontroller reads the analog inputs from the FSRs and transmits them via Bluetooth, providing force data in real time on a smartphone. Blueterm, an Android application, was used as the interface to read the FSR values from the shoe-wearing subject. The subjects were two healthy men of different ages and weights, tested while standing, walking (1.5 m/s), jogging (5 m/s), and running (9 m/s) on a treadmill. The data were saved on the Android device for analysis and comparison graphs.
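The FSR acquisition chain described above can be sketched as follows. This is a minimal host-side sketch, not the authors' firmware: it assumes a standard voltage-divider wiring on the Arduino Mega's 10-bit ADC, a hypothetical conductance-to-force calibration constant `k`, and a frame format of ten comma-separated ADC values per Bluetooth line; real FSRs require per-unit calibration against a force plate.

```python
def adc_to_force(adc, vcc=5.0, r_fixed=10_000.0, k=1.5e6):
    """Convert a 10-bit ADC reading from an FSR voltage divider into an
    approximate force in newtons. k is a hypothetical calibration factor
    (FSR force is roughly proportional to conductance)."""
    if adc <= 0:
        return 0.0
    v_out = adc * vcc / 1023.0                 # Arduino Mega 10-bit ADC
    r_fsr = r_fixed * (vcc - v_out) / v_out    # FSR on top, fixed resistor below
    return k / r_fsr                           # force ~ conductance

def parse_frame(line):
    """Parse one Bluetooth frame of ten comma-separated ADC readings and
    return the ten estimated forces for the insole channels."""
    vals = [int(x) for x in line.strip().split(",")]
    if len(vals) != 10:
        raise ValueError("expected 10 FSR channels")
    return [adc_to_force(v) for v in vals]
```

On the smartphone side, successive frames would then be timestamped and logged for the stance-phase and pressure-distribution analysis described above.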

Keywords: gait analysis, plantar pressure, force plate, wearable sensor

Procedia PDF Downloads 440
1420 The Impact of Nutrition Education Intervention in Improving the Nutritional Status of Sickle Cell Patients

Authors: Lindy Adoma Dampare, Marina Aferiba Tandoh

Abstract:

Sickle cell disease (SCD) is an inherited blood disorder that mostly affects individuals in sub-Saharan Africa. Nutritional deficiencies have been well established in SCD patients. In Ghana, studies have revealed the prevalence of malnutrition, especially amongst children with SCD, hence the need to develop an evidence-based comprehensive nutritional therapy for SCD to improve their nutritional status. The aim of this study was to develop a nutrition education material and assess its effect on the nutritional status of SCD patients in Ghana. This was a pre-post interventional study. Patients between the ages of 2 and 60 years were recruited from the Tema General Hospital. Following a baseline assessment of nutrition knowledge (NK), beliefs, sanitary practices, and dietary consumption patterns, twice-monthly nutrition education (NE) was carried out for 3 months, followed by a post-intervention assessment. Nutritional status was assessed using a 3-day dietary recall and anthropometric measurements. NE was given to SCD adults and to caregivers of SCD children. At baseline, the majority of caregivers (69%) and SCD adults (82%) had low NK. NK improved significantly in SCD adults (4.18±1.83 vs. 10.00±1.00, p<0.001) and caregivers (5.58±2.25 vs. 10.44±0.846, p<0.001) after NE. The increase in NK improved the dietary intake and dietary consumption pattern of SCD patients. Significant increases in weight (23.2±11.6 vs. 25.9±12.1, p=0.036) and height (118.5±21.9 vs. 123.5±22.2, p=0.011) were observed in SCD children post intervention. Stunting (10.5% vs. 8.6%, p=0.62) and wasting (22.1% vs. 14.4%, p=0.30) reduced in SCD children after NE, although not statistically significantly. A reduction in underweight (18.2% vs. 9.1%) and an increase in overweight (18.2% vs. 27.3%) among SCD adults were recorded post intervention. Fat mass remained the same, while high muscle mass increased (18.2% vs. 27.3%) post intervention in SCD adults. The anaemic status of SCD patients improved post intervention, and the improvement was statistically significant amongst SCD children. Nutrition education improved the NK of SCD caregivers and adults, thereby improving the dietary consumption pattern and nutrient intake of SCD patients. Overall, NE improved the nutritional status of SCD patients. This study shows the potential of nutrition education in improving the nutritional knowledge, dietary consumption pattern, dietary intake, and nutritional status of SCD patients, and should be further explored.
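The pre/post comparisons reported above (e.g., 4.18±1.83 vs. 10.00±1.00, p<0.001) are of the paired-sample kind. A minimal sketch of the underlying paired t statistic, using only the standard library (the study's actual statistical software and test choices are not stated, so this is illustrative only):

```python
import math

def paired_t(pre, post):
    """Paired-sample t statistic and degrees of freedom for pre/post
    scores, e.g. nutrition knowledge before and after education."""
    if len(pre) != len(post) or len(pre) < 2:
        raise ValueError("need equal-length samples with n >= 2")
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error of mean diff
    if se == 0:
        raise ValueError("zero variance in differences")
    return mean / se, n - 1
```

The resulting t is compared against the critical value for n-1 degrees of freedom (e.g., 2.776 at α=0.05, df=4) to judge significance.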

Keywords: sickle cell disease, nutrition education, dietary intake, nutritional status

Procedia PDF Downloads 86
1419 Assessment of Forest Above Ground Biomass Through Linear Modeling Technique Using SAR Data

Authors: Arjun G. Koppad

Abstract:

The study was conducted in Joida taluk of Uttara Kannada district, Karnataka, India, to assess land use land cover (LULC) and forest aboveground biomass (AGB) using L-band SAR data. The study area contains dense, moderately dense, and sparse forests. The sampled area was 0.01 percent of the forest area, with 30 sampling plots selected randomly. The point center quadrate (PCQ) method was used to select trees and collect tree growth parameters, viz., tree height, diameter at breast height (DBH), and diameter at the tree base. Tree crown density was measured with a densitometer. The biomass of each sample plot was estimated using the standard formula. In this study, LULC classification was done using the Freeman-Durden, Yamaguchi, and Pauli polarimetric decompositions. The Freeman-Durden decomposition showed the best LULC classification, with an accuracy of 88 percent. An attempt was then made to estimate aboveground biomass from SAR backscatter. Fully polarimetric quad-pol ALOS-2 PALSAR-2 L-band data (HH, HV, VH, and VV) were used, and a SAR backscatter-based regression model was implemented to retrieve forest aboveground biomass for the study area. Cross-polarization (HV) showed a good correlation with forest aboveground biomass. Multiple linear regression analysis was carried out to estimate the aboveground biomass of the natural forest areas of Joida taluk. Among the polarization combinations tested (HH & HV, HH & VV, HV & VH, VV & VH), the combination of HH and HV showed a good correlation with field-measured and predicted biomass. The RMSE and R² values for HH & HV and HH & VV were 78 t/ha and 0.861, and 81 t/ha and 0.853, respectively. Hence the model can be recommended for estimating AGB in dense, moderately dense, and sparse forests.
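The backscatter-based multiple linear regression described above can be sketched via the normal equations. This is an illustrative ordinary-least-squares fit of AGB on two polarization channels with made-up numbers, not the study's fitted model; backscatter would typically be in dB, and the study's actual coefficients are not reported here.

```python
import math

def fit_biomass_model(hh, hv, agb):
    """Fit AGB ~ b0 + b1*HH + b2*HV by OLS via the normal equations
    (X'X)b = X'y, solved with Gaussian elimination (partial pivoting)."""
    n = len(agb)
    X = [[1.0, hh[i], hv[i]] for i in range(n)]
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[r][i] * agb[r] for r in range(n)) for i in range(3)]
    A = [XtX[i][:] + [Xty[i]] for i in range(3)]    # augmented 3x4 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    b = [0.0] * 3
    for i in reversed(range(3)):                    # back substitution
        b[i] = (A[i][3] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
    return b

def rmse(b, hh, hv, agb):
    """Root-mean-square error of predictions, in AGB units (t/ha)."""
    errs = [agb[i] - (b[0] + b[1] * hh[i] + b[2] * hv[i])
            for i in range(len(agb))]
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

In practice, the fit would be run on the 30 field plots, and RMSE and R² reported against held-out or field-measured biomass, as in the abstract.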

Keywords: forest, biomass, LULC, backscatter, SAR, regression

Procedia PDF Downloads 16