Search results for: atomic layer deposition (ALD)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3775

325 Investigating the Atmospheric Phase Distribution of Inorganic Reactive Nitrogen Species along the Urban Transect of Indo Gangetic Plains

Authors: Reema Tiwari, U. C. Kulshrestha

Abstract:

As a key regulator of atmospheric oxidative capacity and secondary aerosol formation, the signatures of reactive nitrogen (Nr) emissions are becoming increasingly evident in the cascade of air pollution, acidification, and eutrophication of ecosystems. However, their accurate estimation in the N budget remains limited by photochemical conversion processes, where the differing atmospheric residence times of gaseous (NOₓ, HNO₃, NH₃) and particulate (NO₃⁻, NH₄⁺) Nr species become imperative to their spatio-temporal evolution on a synoptic scale. The present study attempts to quantify such interactions under tropical conditions, when low anticyclonic winds favor advection from the west during winter. For this purpose, diurnal sampling was conducted using a low-volume sampler assembly, and the ambient concentrations of Nr trace gases, along with their ionic fractions in the aerosol samples, were determined by UV spectrophotometry and ion chromatography, respectively. The results showed a spatial gradient of the gaseous precursors, with a much more pronounced inter-site variability (p < 0.05) than for their particulate fractions. These observations were consistent with their limited photochemical conversion, where day-to-night (D/N) ratios of less than 1 for the different Nr fractions suggested an influence of boundary layer dynamics at the background site. The phase conversion processes were further corroborated by the molar ratios of NOₓ/NOᵧ and NH₃/NHₓ, where incomplete titration of NOₓ and NH₃ emissions was observed irrespective of diurnal phase along the sampling transect. Calculations with equilibrium-based approaches for the NH₃-HNO₃-NH₄NO₃ system, on the other hand, were characterized by delays in equilibrium attainment, where plots of the below-deliquescence Kₘ and Kₚ values against 1000/T confirmed the role of lower temperature ranges in NH₄NO₃ aerosol formation. These results would help not only in resolving the changing atmospheric inputs of reduced (NH₃, NH₄⁺) and oxidized (NOₓ, HNO₃, NO₃⁻) Nr species but also in understanding the dependence of Nr mixing ratios on local meteorological conditions.
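The equilibrium reasoning above can be illustrated with a short Python sketch: it computes a day-to-night (D/N) ratio and the NH3-HNO3-NH4NO3 dissociation constant Kp as a function of temperature. The Kp parameterization is the widely cited Stelson-Seinfeld form and the input concentrations are hypothetical, so this is only a sketch of the approach, not the authors' calculation.

import numpy as np

# Minimal sketch (not the authors' code): D/N ratios and the NH3-HNO3-NH4NO3
# dissociation constant Kp(T). The Kp expression below is the commonly cited
# Stelson & Seinfeld (1982) parameterization (Kp in ppb^2) and is an assumption
# here; the study may have used a different formulation.
def day_night_ratio(day_conc, night_conc):
    """D/N ratio of a measured Nr fraction; values < 1 suggest night-time
    build-up, e.g. under a shallow nocturnal boundary layer."""
    return day_conc / night_conc

def kp_nh4no3(T_kelvin):
    """Dissociation constant Kp = p(NH3) * p(HNO3) in ppb^2 (assumed form)."""
    return np.exp(84.6 - 24220.0 / T_kelvin - 6.1 * np.log(T_kelvin / 298.0))

print(day_night_ratio(day_conc=4.2, night_conc=6.1))   # hypothetical ug/m3 values
for T in (278.0, 288.0, 298.0):
    # lower T -> smaller Kp -> equilibrium shifted towards particulate NH4NO3
    print(T, round(1000.0 / T, 2), kp_nh4no3(T))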

Keywords: diurnal ratios, gas-aerosol interactions, spatial gradient, thermodynamic equilibrium

Procedia PDF Downloads 128
324 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies

Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong

Abstract:

To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes, and travel characteristics on the choice of travel mode, and identify the significant factors. Suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. This paper selects Futian Street in Futian District, Shenzhen City, for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance, and number of trips all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions are put forward, both aimed at reducing travel times by public transportation and non-motorized modes, and the policy effect is evaluated. Before the policy evaluation, the prediction performance of the MNL, SVM, and MLP models was compared. After parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, with the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results showed that after implementation of the policy, the proportion of public transportation in plan 1 and plan 2 increased by 14.04% and 9.86%, respectively, while the proportion of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased markedly, while the proportion of public transport trips increased. The measures can therefore be considered to have a positive effect on promoting green travel and improving the satisfaction of urban residents, and can provide a reference for relevant departments when formulating transportation policies.
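A minimal sketch of the model comparison described above (multinomial logit vs. SVM vs. MLP) is given below. It runs on synthetic, numerically coded data with hypothetical feature names, so it only illustrates the workflow, not the survey data or the reported accuracies.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# synthetic, numerically coded attributes: age, income, cars owned, journey time, distance
X = np.column_stack([
    rng.integers(18, 70, n), rng.normal(8000, 2500, n),
    rng.integers(0, 3, n), rng.normal(40, 15, n), rng.normal(8, 4, n),
])
y = rng.integers(0, 4, n)   # placeholder travel-mode labels (e.g. walk/bike, bus, metro, car)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "MNL": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16),
                                                         max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)   # fit each candidate model on the same training split
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))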

Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation

Procedia PDF Downloads 138
323 A Sustainable and Low-Cost Filter to Treat Pesticides in Water

Authors: T. Abbas, J. McEvoy, E. Khan

Abstract:

Pesticide contamination of water supplies is a common environmental problem in rural agricultural communities. Advanced water treatment processes such as membrane filtration and adsorption on activated carbon only remove pesticides from water without degrading them into less toxic or more easily degradable compounds, leaving behind contaminated brine and spent activated carbon that must be managed. Rural communities, which normally cannot afford expensive water treatment technologies, need an economical and sustainable filter that not only removes pesticides from water but also degrades them into benign products. In this study, iron turning waste was tested as a potential point-of-use filtration medium for the removal/degradation of a mixture of six chlorinated pesticides (lindane, heptachlor, endosulfan, dieldrin, endrin, and DDT) in water. As a common and traditional medium for water filtration, sand was also tested along with the iron turning waste. The iron turning waste was characterized using scanning electron microscopy and energy-dispersive X-ray analysis. Four glass columns with different filter media configurations were set up: (1) sand only, (2) iron turnings only, (3) sand and iron turnings (two separate layers), and (4) sand, iron turnings, and sand (three separate layers). The initial pesticide concentration and flow rate were 2 μg/L and 10 mL/min, respectively. Results indicate that sand filtration was effective only for the removal of DDT (100%) and endosulfan (94-96%). The iron turning column effectively removed endosulfan, endrin, and dieldrin (85-95%), whereas lindane and DDT removals were 79-85% and 39-56%, respectively. The removal efficiencies for heptachlor, endosulfan, endrin, dieldrin, and DDT were 90-100% when sand and iron turning waste (two separate layers) were used. However, better removal efficiencies (93-100%) for five out of the six pesticides were achieved when sand, iron turnings, and sand (three separate layers) were used as the filtration media. Moreover, the effects of water pH, amount of media, and minerals present in water, such as magnesium, sodium, calcium, and nitrate, on the removal of pesticides were examined. Results demonstrate that iron turning waste efficiently removed all the pesticides under the studied parameters. It also completely dechlorinated all six pesticides, and degradation mechanisms were proposed based on the by-products detected.
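As a quick aid to reading the percentages above, removal efficiency is simply the relative drop from influent to effluent concentration; the sketch below uses the study's 2 μg/L influent with a hypothetical effluent value.

def removal_efficiency(c_in_ug_per_l, c_out_ug_per_l):
    # percent removal = (C_in - C_out) / C_in * 100
    return (c_in_ug_per_l - c_out_ug_per_l) / c_in_ug_per_l * 100.0

# influent of 2 ug/L as in the study; the 0.1 ug/L effluent is illustrative only
print(removal_efficiency(2.0, 0.1))   # -> 95.0 %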

Keywords: pesticide contamination, rural communities, iron turning waste, filtration

Procedia PDF Downloads 255
322 Investigation of the Technological Demonstrator 14-X B at Different Angles of Attack at Hypersonic Velocity

Authors: Victor Alves Barros Galvão, Israel Da Silveira Rego, Antonio Carlos Oliveira, Paulo Gilberto De Paula Toro

Abstract:

The Brazilian hypersonic aerospace vehicle 14-X B (VHA 14-X B) is a vehicle integrated with a hypersonic airbreathing propulsion system based on supersonic combustion (scramjet), under development at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics, to demonstrate atmospheric flight at the speed corresponding to Mach number 7 at an altitude of 30 km. In the experimental procedure, the hypersonic shock tunnel T3 installed in that laboratory was used. This device simulates the flow over a model fixed in the test section and can also simulate different atmospheric conditions. Scramjet technology offers substantial advantages for improving the performance of aerospace vehicles flying at hypersonic speed through the Earth's atmosphere, by reducing fuel consumption on board. Basically, the scramjet is a fully integrated airbreathing engine that uses the oblique/conical shock waves generated during hypersonic flight to decelerate and compress atmospheric air at the scramjet inlet. During hypersonic flight, the vehicle VHA 14-X will be subject to atmospheric influences that change the vehicle's angle of attack (the angle that the mean line of the vehicle makes with respect to the direction of the flow). Based on this, a study is conducted to analyze the influence of changes in the vehicle's angle of attack during atmospheric flight. Analytical theoretical analysis, computational fluid dynamics simulation, and experimental investigation are the methodologies used to design a technological demonstrator prior to flight in the atmosphere. This paper considers the analysis of the thermodynamic properties (pressure, temperature, density, sound velocity) on the lower surface of the VHA 14-X B. It considers air as an ideal gas and in chemical equilibrium, with and without the boundary layer, for changes in the vehicle's angle of attack (positive and negative relative to the flow), applying two-dimensional expansion-wave (Prandtl-Meyer) theory at the expansion section.
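For reference, the two-dimensional Prandtl-Meyer relations mentioned above can be sketched as follows. This is a generic ideal-gas (gamma = 1.4) illustration with a hypothetical 5-degree expansion angle, not the study's actual analysis.

import math

GAMMA = 1.4  # ideal-gas assumption, consistent with the ideal-gas case above

def prandtl_meyer(M):
    """Prandtl-Meyer function nu(M) in radians, valid for M >= 1."""
    a = math.sqrt((GAMMA + 1.0) / (GAMMA - 1.0))
    b = math.sqrt((GAMMA - 1.0) / (GAMMA + 1.0) * (M**2 - 1.0))
    return a * math.atan(b) - math.atan(math.sqrt(M**2 - 1.0))

def mach_after_expansion(M1, theta_deg):
    """Invert nu(M2) = nu(M1) + theta by bisection."""
    target = prandtl_meyer(M1) + math.radians(theta_deg)
    lo, hi = M1, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M1 = 7.0                                        # freestream Mach number of the VHA 14-X B
M2 = mach_after_expansion(M1, theta_deg=5.0)    # hypothetical 5 deg flow deflection
f = lambda M: 1.0 + 0.5 * (GAMMA - 1.0) * M**2  # isentropic total/static temperature ratio
print(M2, (f(M1) / f(M2)) ** (GAMMA / (GAMMA - 1.0)))  # downstream Mach number and p2/p1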

Keywords: angle of attack, experimental hypersonic, hypersonic airbreathing propulsion, Scramjet

Procedia PDF Downloads 408
321 Practice and Understanding of Fracturing Renovation for Risk Exploration Wells in Xujiahe Formation Tight Sandstone Gas Reservoir

Authors: Fengxia Li, Lufeng Zhang, Haibo Wang

Abstract:

The tight sandstone gas reservoir in the Xujiahe Formation of the Sichuan Basin holds huge reserves, but its utilization rate is low. Fracturing stimulation is an indispensable technology for unlocking its potential and achieving commercial exploitation. Slickwater is the most widely used fracturing fluid system in the stimulation of tight reservoirs. However, its viscosity is low, its sand-carrying performance is poor, and the risk of sand blockage is high. Increasing the sand-carrying capacity by increasing the pumping rate (displacement) increases the frictional resistance of the pipe string, compromising drag reduction performance. Variable-viscosity slickwater can switch flexibly between different viscosities in real time, effectively reconciling the competing demands of sand carrying and drag reduction. Based on a self-developed laboratory loop friction testing system, a visualization device for proppant transport, and a HAAKE MARS III rheometer, a comprehensive evaluation was conducted of the drag reduction, rheology, and sand-carrying performance of the variable-viscosity slickwater. The laboratory results show that: (1) by changing the concentration of the drag-reducing agent, the viscosity of the slickwater can be adjusted between 2 and 30 mPa·s; (2) the drag reduction rate of the variable-viscosity slickwater is above 80%, and shear does not reduce the drag reduction rate of the fluid; and (3) under laboratory conditions, a viscosity of 15 mPa·s basically achieves effective carrying and uniform placement of the proppant. The layered fracturing results of the JiangX well in the tight sandstone of the Xujiahe Formation show that the drag reduction rate of the variable-viscosity slickwater was 80.42%, and the post-fracturing daily production of a single layer exceeded 50,000 cubic meters. This study provides theoretical support and field experience for promoting the application of variable-viscosity slickwater in tight sandstone gas reservoirs.
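The drag reduction rate quoted above is a normalized comparison of pressure drops measured at the same flow rate; a minimal sketch of that calculation (with illustrative pressure drops, not the measured loop data) follows.

def drag_reduction_rate(dp_water, dp_slickwater):
    # DR% = (dP_water - dP_slickwater) / dP_water * 100, at the same flow rate
    return (dp_water - dp_slickwater) / dp_water * 100.0

print(drag_reduction_rate(dp_water=120.0, dp_slickwater=23.5))  # hypothetical kPa values, ~80 %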

Keywords: slickwater, hydraulic fracturing, dynamic sand laying, drag reduction rate, rheological properties

Procedia PDF Downloads 75
320 Superchaotropicity: Grafted Surface to Probe the Adsorption of Nano-Ions

Authors: Raimoana Frogier, Luc Girard, Pierre Bauduin, Diane Rebiscoul, Olivier Diat

Abstract:

Nano-ions (NIs) are ionic species or clusters of nanometric size. Their low charge density and the delocalization of their charge give special properties to some NIs belonging to the chemical classes of polyoxometalates (POMs) and boron clusters. They have the particularity of interacting non-covalently with neutral hydrated surfaces or interfaces such as assemblies of surface-active molecules (micelles, vesicles, lyotropic liquid crystals), foam bubbles, or emulsion droplets. This makes it possible to classify these NIs in the Hofmeister series as superchaotropic ions. The adsorption mechanism is complex, linked to the simultaneous dehydration of the ion and of the molecule or supramolecular assembly with which it interacts, with an enthalpic gain in the free energy of the system. This interaction process is reversible and is sufficiently pronounced to induce changes in molecular and supramolecular shape or conformation and phase transitions in the liquid phase, all at sub-millimolar ionic concentrations. This new property of some NIs opens up new possibilities for applications in fields as varied as biochemistry (solubilization) and the recovery of metals of interest by foams in the form of NIs. In order to better understand the physico-chemical mechanisms at the origin of this interaction, we use silicon wafers functionalized with non-ionic oligomers (polyethylene glycol, or PEG, chains) to study in situ, by X-ray reflectivity, the interaction of NIs with the grafted chains. This study, carried out at the ESRF (European Synchrotron Radiation Facility), has shown that the adsorption of NIs such as POMs has very fast kinetics. Moreover, the distribution of the NIs in the grafted PEG chain layer was quantified. These results are very encouraging and confirm what has been observed at soft interfaces such as micelles or foams. The possibility of varying the density, length, and chemical nature of the grafted chains makes this system an ideal tool for providing the kinetic and thermodynamic information needed to decipher the complex mechanisms at the origin of this adsorption.

Keywords: adsorption, nano-ions, solid-liquid interface, superchaotropicity

Procedia PDF Downloads 67
319 Application of Multilayer Perceptron and Markov Chain Analysis Based Hybrid-Approach for Predicting and Monitoring the Pattern of LULC Using Random Forest Classification in Jhelum District, Punjab, Pakistan

Authors: Basit Aftab, Zhichao Wang, Feng Zhongke

Abstract:

Land Use and Land Cover Change (LULCC) is a critical environmental issue with significant effects on biodiversity, ecosystem services, and climate change. This study examines the spatiotemporal dynamics of land use and land cover (LULC) across a three-decade period (1992-2022) in the district. The goal is to support sustainable land management and urban planning by combining remote sensing, GIS data, and observations from Landsat satellites 5 and 8 to provide precise predictions of the trajectory of urban sprawl. To forecast LULCC patterns, the study proposes a hybrid strategy that combines Random Forest classification with Multilayer Perceptron and Markov Chain analysis (MLP-MCA), which was employed to predict the dynamics of LULC change for the year 2035. The area of developed land has increased significantly, while bare land, vegetation, and forest cover have all decreased, because the principal land types have changed with population growth and economic expansion. The study also found that between 1998 and 2023, the built-up area increased by 468 km² as a result of the replacement of natural land cover. The urbanized share of the study area is estimated to increase to 25.04% by 2035. The performance of the model was confirmed with an overall accuracy of 90% and a kappa coefficient of around 0.89. The use of advanced predictive models is important for guiding sustainable urban development strategies. The study provides valuable insights for policymakers, land managers, and researchers to support sustainable land-use planning, conservation efforts, and climate change mitigation strategies.
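The Markov-chain step of such a hybrid projection amounts to propagating class proportions with a transition probability matrix. The sketch below uses entirely hypothetical classes, shares, and probabilities, so it only illustrates the mechanics, not the study's calibrated MLP-MCA model.

import numpy as np

classes = ["built-up", "vegetation", "forest", "bare land", "water"]
# P[i, j] = probability that class i at time t becomes class j at t+1 (rows sum to 1)
P = np.array([
    [0.95, 0.02, 0.01, 0.01, 0.01],
    [0.10, 0.80, 0.05, 0.04, 0.01],
    [0.05, 0.10, 0.80, 0.04, 0.01],
    [0.20, 0.10, 0.02, 0.67, 0.01],
    [0.01, 0.01, 0.01, 0.01, 0.96],
])
state_2022 = np.array([0.15, 0.35, 0.25, 0.20, 0.05])   # hypothetical class shares in 2022

n_steps = 13                                            # annual transitions, 2022 -> 2035
state_2035 = state_2022 @ np.linalg.matrix_power(P, n_steps)
for name, share in zip(classes, state_2035):
    print(f"{name}: {share:.3f}")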

Keywords: land use land cover, Markov chain model, multi-layer perceptron, random forest, sustainable land, remote sensing

Procedia PDF Downloads 33
318 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan

Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad

Abstract:

Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50-100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron-beam sensitive, which means there is a limit on the maximum electron dose that can be used to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based, fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively high, and commonly specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, are another challenge of the 3DED technique. Traditionally, this has all been done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher-quality 3DED data enable structure determination with higher confidence, while automated workflows allow these to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 Å to better than 1 Å) for protein and small molecule structure determination. We will also show how Latitude D software facilitates collecting such data in an integrated and fully automated user interface.

Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, Digitalmicrograph, proteins, small molecules

Procedia PDF Downloads 107
317 Organic Rejection and Membrane Fouling with Inorganic Alumina Membrane for Industrial Wastewater Treatment

Authors: Rizwan Ahmad, Soomin Chang, Daeun Kwon, Jeonghwan Kim

Abstract:

Interest in inorganic membranes is growing rapidly for industrial wastewater treatment because of their excellent chemical and thermal stability compared with polymeric membranes. Nevertheless, understanding of membrane rejection and of the fouling rate caused by the deposition of contaminants on the membrane surface and within the membrane pores of inorganic porous membranes still requires much attention. Microfiltration alumina membranes were developed and applied to industrial wastewater treatment to investigate the rejection efficiency of organic contaminants and membrane fouling under various operational conditions. In this study, organic rejection and membrane fouling were investigated using flat-tubular alumina membranes developed for the treatment of industrial wastewaters. The flat-tubular alumina membranes were immersed in a fluidized membrane reactor to which granular activated carbon (GAC) particles were added. Fluidization was driven by recirculating the bulk industrial wastewater along the membrane surface through the reactor. In the absence of GAC particles, for hazardous anionic dye contaminants, the functional groups of the organic contaminant were found to be one of the main factors affecting both membrane rejection and fouling rate. Greater fouling on the membrane surface was associated with the dipolar character of the contaminant and was more pronounced at lower solution pH, thereby improving membrane rejection accordingly. A similar result was observed with a real metal-plating wastewater. A strong correlation was found: higher fouling rates resulted in higher organic rejection efficiencies. The hydrophilicity of the alumina membrane improved its organic rejection efficiency owing to the hydrophilic fouling layer deposited on it. In addition, lower surface roughness of the alumina membrane resulted in a lower fouling rate. Regardless of the operational conditions applied in this study, fluidizing the GAC particles along the surface of the alumina membrane was very effective in enhancing organic removal efficiency to above 95% and provided an excellent means of reducing membrane fouling. A suction pressure of less than 0.1 bar was maintained with the alumina membrane at a permeate set-point flux of 25 L/m²·h throughout the operational periods, without any backwashing or chemically enhanced cleaning of the membrane.
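Two of the quantities reported above, observed rejection and permeate flux, are simple ratios; the sketch below shows the arithmetic with illustrative numbers rather than the measured data.

def rejection_percent(c_feed, c_permeate):
    # observed rejection R = (1 - Cp/Cf) * 100
    return (1.0 - c_permeate / c_feed) * 100.0

def flux_lmh(permeate_volume_l, membrane_area_m2, time_h):
    # permeate flux in L/m^2/h
    return permeate_volume_l / (membrane_area_m2 * time_h)

print(rejection_percent(c_feed=100.0, c_permeate=4.0))                    # -> 96.0 %
print(flux_lmh(permeate_volume_l=2.5, membrane_area_m2=0.1, time_h=1.0))  # -> 25.0 L/m^2 h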

Keywords: alumina membrane, fluidized membrane reactor, industrial wastewater, membrane fouling, rejection

Procedia PDF Downloads 167
316 Reconnaissance Investigation of Thermal Springs in the Middle Benue Trough, Nigeria by Remote Sensing

Authors: N. Tochukwu, M. Mukhopadhyay, A. Mohamed

Abstract:

It is not news that Nigeria faces a continual power shortage problem due to the power demand of its vast population and a heavy reliance on nonrenewable forms of energy such as fossil-fuelled thermal power. Many researchers have recommended using geothermal energy as an alternative; however, past studies focused on geophysical and geochemical investigations of this energy in the sedimentary and basement complexes, and only a few studies incorporated remote sensing methods. Therefore, in this study, a preliminary examination of geothermal resources in the Middle Benue Trough was carried out using satellite imagery in ArcMap. A Landsat 8 scene (TIR, NIR, and red spectral bands) was used to estimate the Land Surface Temperature (LST). The Maximum Likelihood Classification (MLC) technique was used to classify sites with very low, low, moderate, and high LST. The moderate and high classes correspond to possible geothermal zones, and they occupy 49% of the study area (38,077 km²). River networks were superimposed on the LST layer, and the identification tool was used to locate high-temperature sites. Streams that overlap the selected sites were regarded as geothermal springs. Surprisingly, the LST results show lower temperatures (<36°C) at the famous thermal springs (Awe and Wukari) than at some unknown rivers/streams found in Kwande (38°C), Ussa (38°C), Gwer East (37°C), and Yola Cross and Ogoja (36°C). Studies have revealed that temperature increases with depth. This result therefore indicates excellent geothermal resource potential, as it is expected to exceed the minimum geothermal gradient of 25.47 with an increase in depth. Therefore, further investigation is required to estimate the depth of the causative body, the geothermal gradients, and the sustainability of the reservoirs through geophysical and field exploration. This method has proven to be cost-effective in locating geothermal resources in the study area. Consequently, the same procedure is recommended for other regions of the Precambrian basement complex and the sedimentary basins in Nigeria to save preliminary field survey costs.
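A common way to derive LST from a Landsat 8 scene, in the same spirit as the workflow above, is the single-band (Band 10) chain from digital numbers to radiance, brightness temperature, NDVI-based emissivity, and finally LST. The calibration constants below are the usual Band 10 values from a scene's MTL metadata and the arrays are tiny stand-ins, so this is a hedged sketch rather than the study's exact processing.

import numpy as np

ML, AL = 3.342e-4, 0.1            # Band 10 radiance rescaling factors (typical MTL values)
K1, K2 = 774.8853, 1321.0789      # Band 10 thermal conversion constants (typical MTL values)

def land_surface_temperature(b10_dn, red, nir):
    radiance = ML * b10_dn + AL
    bt_k = K2 / np.log(K1 / radiance + 1.0)                    # brightness temperature, K
    ndvi = (nir - red) / (nir + red + 1e-9)
    pv = ((ndvi - ndvi.min()) / (ndvi.max() - ndvi.min() + 1e-9)) ** 2
    emissivity = 0.004 * pv + 0.986                            # common empirical estimate
    lam, rho = 10.895e-6, 1.438e-2                             # band wavelength (m), h*c/k (m K)
    lst_k = bt_k / (1.0 + (lam * bt_k / rho) * np.log(emissivity))
    return lst_k - 273.15                                      # LST in deg C

b10 = np.array([[23000.0, 24500.0], [25000.0, 26500.0]])       # hypothetical TIR digital numbers
red = np.array([[0.12, 0.15], [0.10, 0.20]])                   # hypothetical red reflectance
nir = np.array([[0.30, 0.28], [0.40, 0.25]])                   # hypothetical NIR reflectance
print(land_surface_temperature(b10, red, nir))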

Keywords: ArcMap, geothermal resources, Landsat 8, LST, thermal springs, MLC

Procedia PDF Downloads 188
315 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

A brain-computer interface (BCI) system converts the movement intentions of a person into commands for action using brain signals such as electroencephalogram (EEG) signals. When left- or right-hand movements are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve BCI systems, effective and accurate techniques for increasing the classification accuracy of motor imagery (MI) based on EEG are greatly needed. Subject dependency and non-stationarity are two features of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising, and then an analysis-of-variance method is used to select the more appropriate and informative channels from the large set of available channels. After ordering the channels by their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrate that the SVM classifier achieved the highest classification accuracy of 97% compared with the other approaches. The overall findings confirm that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses the existing methods.
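A compact, synthetic-data sketch of the channel-selection idea (ANOVA F-score ranking followed by greedy forward selection with an SVM scored by 10-fold cross-validation) is given below; it uses random placeholder features and is a simplified stand-in for the full pipeline described above.

import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 22
X = rng.normal(size=(n_trials, n_channels))   # one band-power feature per channel (synthetic)
y = rng.integers(0, 2, size=n_trials)         # left- vs right-hand motor imagery labels

f_scores, _ = f_classif(X, y)                 # ANOVA F-score of each channel
ranked = np.argsort(f_scores)[::-1]           # channels ordered by informativeness

best_subset, best_acc = [], 0.0
for ch in ranked:                             # sequential forward channel selection
    candidate = best_subset + [int(ch)]
    acc = cross_val_score(SVC(kernel="rbf"), X[:, candidate], y, cv=10).mean()
    if acc > best_acc:
        best_subset, best_acc = candidate, acc

print("selected channels:", best_subset, "cv accuracy:", round(best_acc, 3))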

Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine

Procedia PDF Downloads 50
314 Plasma Levels of Collagen Triple Helix Repeat Containing 1 (CTHRC1) as a Potential Biomarker in Interstitial Lung Disease

Authors: Rijnbout-St.James Willem, Lindner Volkhard, Scholand Mary Beth, Ashton M. Tillett, Di Gennaro Michael Jude, Smith Silvia Enrica

Abstract:

Introduction: Fibrosing lung diseases are characterized by changes in the lung interstitium and are classified based on etiology: 1) environmental/exposure-related, 2) autoimmune-related, 3) sarcoidosis, 4) interstitial pneumonia, and 5) idiopathic. Among the idiopathic forms of interstitial lung disease (ILD), idiopathic pulmonary fibrosis (IPF) is the most severe. The pathogenesis of IPF is characterized by an increased presence of proinflammatory mediators resulting in alveolar injury, where injury to the alveolar epithelium precipitates an increase in collagen deposition, subsequently thickening the alveolar septum and decreasing gas exchange. Identifying biomarkers implicated in the pathogenesis of lung fibrosis is key to developing new therapies and improving the efficacy of existing therapies. Transforming growth factor-beta (TGF-β1), a mediator of tissue repair associated with WNT5A signaling, is partially responsible for fibroblast proliferation in ILD and is the target of pirfenidone, one of the antifibrotic therapies used for patients with IPF. Canonical TGF-β signaling is mediated by the proteins SMAD 2/3, which are, in turn, indirectly regulated by Collagen Triple Helix Repeat Containing 1 (CTHRC1). In this study, we tested the following hypotheses: 1) plasma CTHRC1 is more elevated in the ILD cohort than in unaffected controls, and 2) CTHRC1 is differentially expressed among ILD types. Material and Methods: CTHRC1 levels were measured by ELISA in 171 plasma samples from the deidentified University of Utah ILD cohort. The data represent 131 ILD-affected participants and 40 unaffected controls. Samples were categorized by a pulmonologist based on affectation status and disease subtype: IPF (n = 45), sarcoidosis (n = 4), nonspecific interstitial pneumonia (n = 16), hypersensitivity pneumonitis (n = 7), interstitial pneumonia (n = 13), autoimmune (n = 15), other ILD, a category that includes undifferentiated ILD diagnoses (n = 31), and unaffected controls (n = 40). We conducted a single-factor ANOVA of plasma CTHRC1 levels to test whether CTHRC1 differs significantly between affected and non-affected participants. In-silico analysis was performed with Ingenuity Pathway Analysis® to characterize the role of CTHRC1 in the pathway of lung fibrosis. Results: Statistical analyses of the plasma samples indicate that the average CTHRC1 level is significantly higher in ILD-affected participants than in controls, with autoimmune ILD being higher than the other ILD types, thus supporting our hypotheses. In-silico analyses show that CTHRC1 indirectly activates and phosphorylates SMAD3, which in turn cross-regulates TGF-β1. CTHRC1 may also regulate the expression and transcription of TGF-β1 via WNT5A and its regulatory relationship with CTNNB1. Conclusion: In-silico pathway analyses demonstrate that CTHRC1 may be an important biomarker in ILD. Analysis of plasma samples indicates that CTHRC1 expression is positively associated with ILD affectation, with autoimmune ILD having the highest average CTHRC1 values. While characterizing CTHRC1 levels in plasma can help differentiate among ILD types and predict response to pirfenidone, the extent to which the plasma CTHRC1 level is a function of ILD severity or chronicity is unknown.
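For readers unfamiliar with the test, the single-factor ANOVA step can be reproduced in a few lines; the values below are hypothetical plasma concentrations, not the cohort data.

from scipy import stats

# hypothetical plasma CTHRC1 values (arbitrary units) for three of the groups
ipf        = [48.1, 52.3, 45.7, 60.2, 55.0]
autoimmune = [70.4, 66.9, 73.2, 68.5]
controls   = [30.2, 28.7, 35.1, 31.4, 29.9]

f_stat, p_value = stats.f_oneway(ipf, autoimmune, controls)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> group means differ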

Keywords: interstitial lung disease, CTHRC1, idiopathic pulmonary fibrosis, pathway analyses

Procedia PDF Downloads 191
313 A Study on Conventional and Improved Tillage Practices for Sowing Paddy in Wheat Harvested Field

Authors: R. N. Pateriya, T. K. Bhattacharya

Abstract:

In India, the rice-wheat cropping system occupies the major area and contributes about 40% of the country's total food grain production. It is necessary that the production of rice and wheat keep pace with the growing population. However, various factors such as the degradation of natural resources, shifts in cropping pattern, and energy constraints are reducing the productivity of these crops. The seedbed for rice after wheat is difficult to prepare due to the presence of straw and stubble and requires excessive tillage operations to achieve optimum tilth. In addition, delayed sowing and transplanting of rice are mainly due to poor crop residue management, the multiplicity of tillage operations, and the non-availability of the power source. With increasing concern for fuel conservation and energy management, farmers might wish to identify the best cultivation system for higher productivity. The most widespread method of tilling land is ploughing with the mould-board plough. However, with the mould-board plough, the upper layer of soil is not always loosened to the desired extent, nor is proper mixing of the different layers achieved. Therefore, additional operations are carried out to improve tilth. Farmers are becoming increasingly aware of the need for minimum tillage, minimizing the use of machines. Improved soil management can be achieved by using combined active-passive tillage machines. A study was therefore undertaken in a wheat-harvested field to assess the impact of conventional and modified tillage practices on paddy cultivation. Tillage treatments with a tractor as the power source were selected for the experiment: T1, direct sowing of rice; T2, two to three harrowings with no puddling and manual transplanting; T3, two to three harrowings with puddling by paddy harrow and manual transplanting; T4, two to three harrowings with puddling by rotavator and manual transplanting. The maximum output was obtained with treatment T1 (7.85 t/ha), followed by T4 (6.4 t/ha), T3 (6.25 t/ha), and T2 (6.0 t/ha).

Keywords: crop residues, cropping system, minimum tillage, yield

Procedia PDF Downloads 208
312 Fatty Acids in Female Gonads of the Red Sea Fish Rhabdosargus sarba During the Spawning Season

Authors: Suhaila Qari, Samia Moharram, Safaa Alowaidi

Abstract:

Objectives: To determine the fatty acid profiles of female R. sarba from the Red Sea during the spawning season. Methods: Individual Rhabdosargus sarba were obtained monthly from the Bangalah market in Jeddah, Red Sea, and transported to the laboratory on ice. The total length, standard length, and weight were measured, and the fish were dissected. The ovaries were removed and weighed, and 10 ml of concentrated hydrochloric acid was added to 10 g of ovary in a conical flask, which was immersed in boiling water until the sample was dissolved and the fat was seen to collect on the surface. The flask was cooled, and the fat was extracted by shaking with 30 ml of diethyl ether. After allowing the layers to separate, the extract was transferred to a weighed flask. The extraction was repeated three more times, the solvent was distilled off, and the fat was dried at 100 °C, cooled, and weighed. Then 50 mg of lipid was placed in a tube, 5 ml of methanolic sulphuric acid and 2 ml of benzene were added, and the tube was closed tightly and placed in a water bath at 90 °C for an hour and a half. After cooling, 8 ml of water and 5 ml of petroleum ether were added, the mixture was shaken strongly, and the ethereal layer was separated into a dry tube and evaporated to dryness. The fatty acid methyl esters were analyzed using a Hewlett-Packard (HP 6890) gas chromatograph with a split/splitless injector and a flame ionization detector (FID). Results: In female Rhabdosargus sarba, a total of 29 fatty acids were detected in the ovaries throughout the spawning season. The main fatty acid group in the total lipid was saturated fatty acids (SFA, 28.9%), followed by polyunsaturated fatty acids (PUFA, 23.5%) and monounsaturated fatty acids (MUFA, 12.9%). The dominant SFA were palmitic and stearic acids, the major MUFA were palmitoleic and oleic acids, and the major PUFA were C18:2 and C22:2. Across the spawning stages there were no significant differences in total SFA, MUFA, and PUFA; the highest value of SFA was at late spawning (36.78%), whereas the highest values of MUFA and PUFA were at spawning (16.70% and 24.96%, respectively). During the spawning season, there was a significant difference in total SFA between March (late spawning stage) and December (nearly ripe stage) (P < 0.05).

Keywords: sparidae, Rhabdosargus sarba, fish, fatty acids, spawning, gonads, red sea

Procedia PDF Downloads 802
311 Information Pollution: Exploratory Analysis of Sub-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation

Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi

Abstract:

The role of information in societal development and growth cannot be over-emphasized. Harnessing the flow of information has long been a strategy for building an egalitarian society, yet the same flow has become a tool for throwing society into chaos and anarchy. It has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information can be deployed as a weapon to wreak "Mass Destruction" or promote "Mass Development". When used as a tool of destruction, its effect on society is like that of an atomic bomb, which, once released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem overwhelmed by its negative effects. While information remains one of the bedrocks of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat its menace, the more difficult and complex it proves to curb. In a region like Africa, where fragile democracies are entangled with the complexities of multiple religions and cultures, inter-tribal relations, and ongoing issues that are yet to be resolved, it is important to pay critical attention to information disorder and to find appropriate ways to curb or mitigate its effects. The media, as the middleman in the distribution of information, need to build the capacities and capabilities to separate the chaff of misinformation and disinformation from the grains of truthful data. From quasi-statistical senses, it has been observed that efforts aimed at fighting information pollution have not considered the resilience that media organisations have built against this disorder. Apparently, the efforts, resources, and technologies adopted for the conception, production, and spread of information pollution are much more sophisticated than the approaches used to suppress or even reduce its effects on society. Thus, this study interrogates the phenomenon of information pollution and the capabilities of select media organisations in Sub-Saharan Africa. In doing this, the following questions are probed: What are the media's actions to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why? Adopting quantitative and qualitative approaches and anchored on Dynamic Capability Theory, the study aims to generate insights to further understand the complexities of information pollution, media capabilities, and the strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and the use of questionnaires to obtain data from journalists on their understanding of misinformation/disinformation and their capabilities to gate-keep. Case analysis of select media and content analysis of their strategic resources for managing misinformation and disinformation are also adopted in the study, while the qualitative approach involves in-depth interviews for a more robust analysis. The study is critical to the fight against information pollution for a number of reasons. One, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder. Two, the study will enable the region to have a clear understanding of the capabilities of existing media organisations to combat misinformation and disinformation in the countries that make up the region. Recommendations emanating from the study could be used to initiate, intensify, or review existing approaches to combat the menace of information pollution in the region.

Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa

Procedia PDF Downloads 161
310 Numerical Study of Natural Convection in Isothermal Open Cavities

Authors: Gaurav Prabhudesai, Gaetan Brill

Abstract:

The sun's energy comes from a hydrogen-to-helium thermonuclear reaction, generating a temperature of about 5760 K at its outer layer. On account of this high temperature, energy is radiated by the sun, a part of which reaches the earth. This sunlight, even after losing part of its energy en route to scattering and absorption, provides a time- and space-averaged solar flux of 174.7 W/m² striking the earth's surface. According to one study, the solar energy striking the earth's surface in one and a half hours is more than the energy consumption recorded in the year 2001 from all sources combined. Thus, technology for the extraction of solar energy holds much promise for solving the energy crisis. Of the many technologies developed in this regard, Concentrating Solar Power (CSP) plants with a central solar tower and receiver system are very attractive because of their capability to provide renewable energy that can be stored in the form of heat. One design of central receiver is an open cavity into which sunlight is concentrated using mirrors (also called heliostats). This concentrated solar flux produces a high temperature inside the cavity, which can be utilized in an energy conversion process. The amount of energy captured is reduced by losses occurring at the cavity through all three modes: radiation to the atmosphere, conduction to the adjoining structure, and convection. This study investigates the natural convection losses from the receiver to the environment. Computational fluid dynamics was used to simulate the fluid flow and heat transfer of the receiver, since no analytical solution can be obtained and no empirical correlations exist for the given geometry. The results provide guidelines for predicting natural convection losses for hexagonal and circular open cavities. Additionally, correlations are given for various inclination angles and aspect ratios. These results provide methods to minimize natural convection losses through careful design of the receiver geometry and modification of the inclination angle and aspect ratio of the cavity.
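Correlations of this kind are usually applied as a dimensionless power law; the sketch below evaluates the generic form Nu = a·Ra^b with placeholder coefficients and approximate air properties, not the correlations actually derived in this study.

import math

def convective_loss(T_cavity, T_ambient, aperture_m, a=0.1, b=1.0 / 3.0, tilt_factor=1.0):
    # rough air properties near ~600 K: expansion coefficient, kinematic viscosity,
    # thermal diffusivity, conductivity (order-of-magnitude placeholders)
    g, beta, nu, alpha, k = 9.81, 1.0 / 600.0, 5e-5, 7e-5, 0.045
    dT = T_cavity - T_ambient
    Ra = g * beta * dT * aperture_m**3 / (nu * alpha)   # Rayleigh number
    Nu = a * Ra**b * tilt_factor                        # placeholder correlation
    h = Nu * k / aperture_m                             # convective heat transfer coefficient
    area = math.pi * (aperture_m / 2.0) ** 2            # circular aperture
    return h * area * dT                                # convective loss, W

print(convective_loss(T_cavity=900.0, T_ambient=300.0, aperture_m=0.5))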

Keywords: concentrated solar power (CSP), central receivers, natural convection, CFD, open cavities

Procedia PDF Downloads 288
309 The Concentration of Selected Cosmogenic and Anthropogenic Radionuclides in the Ground Layer of the Atmosphere (Polar and Mid-Latitudes Regions)

Authors: A. Burakowska, M. Piotrowski, M. Kubicki, H. Trzaskowska, R. Sosnowiec, B. Myslek-Laurikainen

Abstract:

The most important source of atmospheric radioactivity is the radionuclides generated by the interaction of primary and secondary cosmic radiation with nuclei of nitrogen, oxygen, and carbon in the upper troposphere and lower stratosphere. This creates about thirty radioisotopes of more than twenty elements. For organisms, four of them are the most important: ³H, ⁷Be, ²²Na, and ¹⁴C. The natural radionuclides present in the Earth's crust also settle on dust and water vapor particles. By this means, the derivatives of uranium and thorium, and long-lived ⁴⁰K, get into the air. ¹³⁷Cs is the most widespread isotope introduced into the environment by humans. To determine the concentration of radionuclides in the atmosphere, high-volume air samplers were used, with aerosol collection on a special filter fabric (Petrianov filter tissue FPP-15-1.5). In 2002, the high-volume air sampler AZA-1000, designed to operate in all weather conditions of the cold polar region, was installed at the Polish Polar Observatory of the Polish Academy of Sciences in Hornsund, Spitsbergen (77°00’N, 15°33’E). Since 1991 (with short breaks), the ASS-500 air sampler has been operating in Swider at the Kalinowski Geophysical Observatory of the Institute of Geophysics of the Polish Academy of Sciences (52°07’N, 21°15’E). Concentrations of the following radionuclides were obtained from both stations using gamma spectroscopy analysis: ⁷Be, ¹³⁷Cs, ¹³⁴Cs, ²¹⁰Pb, and ⁴⁰K. For the gamma spectroscopy analysis, HPGe (high-purity germanium) detectors were used, and the data from the two stations were compared. The preliminary results gave evidence that the radioactivity measured in aerosols is not proportional to the amount of dust in either of the studied regions. Furthermore, the results indicate annual variability (seasonal fluctuations) as well as a decrease in the average activity of ⁷Be with increasing latitude. The content of ⁷Be in surface air also shows a relationship with solar activity cycles.

Keywords: aerosols, air filters, atmospheric beryllium, environmental radionuclides, gamma spectroscopy, mid-latitude regions radionuclides, polar regions radionuclides, solar cycles

Procedia PDF Downloads 140
308 Structure and Mechanics Patterns in the Assembly of Type V Intermediate-Filament Protein-Based Fibers

Authors: Mark Bezner, Shani Deri, Tom Trigano, Kfir Ben-Harush

Abstract:

Intermediate filament (IF) protein-based fibers are among the toughest fibers in nature, as has been shown for native hagfish slime threads and for synthetic fibers based on type V IF proteins, the nuclear lamins. It is assumed that their mechanical performance stems from two major factors: (1) the transition from elastic α-helices to stiff β-sheets under tensile load; and (2) the specific organization of the coiled-coil proteins into a hierarchical network of nano-filaments. Here, we investigated the interrelationship between these two factors using wet-spun fibers based on C. elegans (Ce) lamin. We found that Ce-lamin fibers, whether assembled in aqueous or alcoholic solutions, had the same nonlinear mechanical behavior, with the elastic region ending at ~5% strain. The pattern of the transition was, however, different: the ratio between α-helices and β-sheets/random coils was relatively constant up to a 20% strain for fibers assembled in an aqueous solution, whereas for fibers assembled in 70% ethanol, the transition ended at a 6% strain. This structural phenomenon in alcoholic solution probably occurred through a transition between compacted and extended conformations of the random coil, and not between α-helix and β-sheets, as cycle analyses suggested. The different transition pattern can also be explained by the different higher-order organization of Ce-lamins in aqueous or alcoholic solutions, as demonstrated by introducing a point mutation in a conserved residue of the Ce-lamin gene that alters the structure of the Ce-lamin nano-fibrils. In addition, biomimicking the layered structure of silk and hair fibers by coating the Ce-lamin fiber with a hydrophobic layer enhanced fiber toughness and led to a reversible transition between the α-helix and the extended conformation. This work suggests that different hierarchical structures, which are formed under specific assembly conditions, lead to diverse secondary-structure transition patterns, which in turn affect the fibers' mechanical properties.

Keywords: protein-based fibers, intermediate filaments (IF) assembly, toughness, structure-property relationships

Procedia PDF Downloads 110
307 Preparation and Evaluation of Gelatin-Hyaluronic Acid-Polycaprolactone Membrane Containing 0.5 % Atorvastatin Loaded Nanostructured Lipid Carriers as a Nanocomposite Scaffold for Skin Tissue Engineering

Authors: Mahsa Ahmadi, Mehdi Mehdikhani-Nahrkhalaji, Jaleh Varshosaz, Shadi Farsaei

Abstract:

Gelatin and hyaluronic acid are commonly used in skin tissue engineering scaffolds, but because of their low mechanical properties and high biodegradation rate, adding a synthetic polymer such as polycaprolactone can improve the scaffold properties. Therefore, we developed a gelatin-hyaluronic acid-polycaprolactone scaffold containing 0.5% atorvastatin-loaded nanostructured lipid carriers (NLCs) for skin tissue engineering. The atorvastatin-loaded NLC solution was prepared by the solvent evaporation method followed by freeze-drying. The synthesized atorvastatin-loaded NLCs were added to the gelatin and hyaluronic acid solution, and a membrane was fabricated by the solvent evaporation method. Thereafter, it was coated with a thin layer of polycaprolactone using a spin-coating setup. The resulting scaffolds were characterized by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analyses. Moreover, the mechanical properties, in vitro degradation over a 7-day period, and in vitro drug release of the scaffolds were evaluated. SEM images showed uniformly distributed NLCs with an average size of 100 nm in the scaffold structure. The mechanical test indicated that the scaffold had a tensile modulus of 70.08 MPa, which is twice the tensile modulus of normal human skin. A Franz-cell diffusion test was performed to investigate the drug release from the scaffold in a phosphate-buffered saline (pH 7.4) medium. Results showed that 72% of the atorvastatin was released over 5 days. The in vitro degradation test demonstrated that the membrane degraded by approximately 97%. In conclusion, the suitable physicochemical and biological properties of the membrane indicate that the developed gelatin-hyaluronic acid-polycaprolactone nanocomposite scaffold containing 0.5% atorvastatin-loaded NLCs is a good candidate for skin tissue engineering applications.

Keywords: atorvastatin, gelatin, hyaluronic acid, nano lipid carriers (NLCs), polycaprolactone, skin tissue engineering, solvent casting, solvent evaporation

Procedia PDF Downloads 252
306 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure

Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff

Abstract:

Alzheimer's disease (AD) is the most common form of dementia, and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck have been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF), in the basement membranes in the walls of cerebral arteries, provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net reverse flow of ISF inside the blood vessel wall relative to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism producing a net reverse flow has not been identified yet. Here, we aim to address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow, and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics. Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.
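To give a feel for the analytical thin-layer component, flow in a thin basement-membrane layer is often approximated with lubrication (plane Poiseuille) theory; the sketch below uses that generic approximation with order-of-magnitude placeholder values, not the parameters or formulation of the published model.

def thin_layer_flux(h_m, mu_pa_s, dp_dx_pa_per_m):
    # plane-Poiseuille volumetric flux per unit width, q = -(h^3 / (12 mu)) dp/dx (m^2/s)
    return -(h_m**3) / (12.0 * mu_pa_s) * dp_dx_pa_per_m

h = 100e-9     # basement-membrane thickness ~100 nm (placeholder)
mu = 1e-3      # ISF viscosity taken as that of water, Pa s
dp_dx = -1e3   # axial pressure gradient, Pa/m (placeholder)
print(thin_layer_flux(h, mu, dp_dx))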

Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics

Procedia PDF Downloads 526
305 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces

Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek

Abstract:

Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may pose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. From real-life observations, in cases with suppressed daily variations of the boundary conditions, e.g., in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of the boundary conditions, e.g., in the ventilated cavity of a cold flat roof, mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main results include the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing for the simulation of changing environmental conditions in buildings. It consists of a square-section closed circuit with overall dimensions of roughly 200 × 180 cm and a cross section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control the air flow inside the tunnel and a heat and humidity exchange unit to control the internal RH and variations in temperature. Several measuring points, including an anemometer, temperature and humidity sensors, and a load cell in the test section for recording mass changes, are provided to monitor the variations of these parameters during the experiments. The research is ongoing and is expected to provide the final results of the experimental investigation at the end of 2022.

Keywords: moisture, mold growth, testing, wood

Procedia PDF Downloads 133
304 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, and security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample; it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique, and the effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieving low-cost production.

Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy

Procedia PDF Downloads 120
303 Application of Groundwater Level Data Mining in Aquifer Identification

Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen

Abstract:

Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response to the combination of hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification for two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to the groundwater-level time series and analyzes the amplitude of the daily frequency component of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behavior. The decision tree combines the information obtained from these analytical tools to produce the best estimate of the hydrogeological structure. The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an effective tool for identifying hydrogeological structures.
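As a rough illustration of the first two analytical tools described above, the following Python sketch (an illustrative assumption, not the authors' code) estimates the amplitude of the 1-cycle-per-day component of an hourly groundwater-level series with a Fourier transform, and the rainfall-to-peak-level lag with a simple cross-correlation; the hourly sampling interval and variable names are assumptions.

import numpy as np

def daily_frequency_amplitude(level, samples_per_day=24):
    """Amplitude of the 1-cycle-per-day component of a groundwater-level series."""
    level = np.asarray(level, dtype=float)
    level = level - level.mean()                      # remove the mean (zero-frequency term)
    spectrum = np.fft.rfft(level)
    freqs = np.fft.rfftfreq(level.size, d=1.0 / samples_per_day)  # cycles per day
    idx = np.argmin(np.abs(freqs - 1.0))              # bin closest to 1 cycle/day
    return 2.0 * np.abs(spectrum[idx]) / level.size   # single-sided amplitude

def recharge_lag(rainfall, level, max_lag=90):
    """Lag (in samples) at which rainfall and groundwater level correlate most strongly.
    Assumes both series have the same length and sampling interval."""
    r = (np.asarray(rainfall, float) - np.mean(rainfall)) / np.std(rainfall)
    h = (np.asarray(level, float) - np.mean(level)) / np.std(level)
    lags = np.arange(0, max_lag + 1)
    corr = [np.corrcoef(r[: r.size - k], h[k:])[0, 1] for k in lags]
    return lags[int(np.argmax(corr))], max(corr)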

Keywords: aquifer identification, decision tree, groundwater, Fourier transform

Procedia PDF Downloads 157
302 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Flood and drought are the two main features of hydrology that affect human life. Floods are natural disasters that cause millions of rupees' worth of damage each year in India and across the world; they cause destruction of life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in water demand due to population growth and industrial and agricultural development shows that, even though water is a renewable resource, it cannot be taken for granted; its use must be optimized according to the circumstances and conditions, and it must be harnessed, which can be done by constructing hydraulic structures. For the safe and proper functioning of hydraulic structures, the flood magnitude and its impact must be predicted. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximum use of the available water. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which floods can be estimated: generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, such as the Index Flood Method, the Natural Environment Research Council (NERC) methods, the Multiple Regression Method, etc.; however, none of them can be considered universal for every situation and location. The Narmada basin is located in Central India and is drained by numerous tributaries, most of which are ungauged. It is therefore very difficult to estimate floods on these tributaries and in the main river. In the present study, Artificial Neural Networks (ANN) and the Multiple Regression Method are used to determine the regional flood frequency. The annual peak flood data of 20 gauging sites in the Narmada Basin are used to derive the regional flood relationships. The homogeneity of the considered sites is determined using the Index Flood Method. The flood relationships obtained by both methods are compared with each other, and it is found that the ANN is more reliable than the Multiple Regression Method for the present study area.
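The following Python sketch illustrates, under stated assumptions, the kind of comparison described above between a multiple-regression model and a small artificial neural network (multilayer perceptron) for regional flood relationships; the catchment characteristics and synthetic data are placeholders, not the study's actual records for the 20 Narmada gauging sites.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_sites = 20
area = rng.uniform(100, 5000, n_sites)          # km^2 (synthetic)
rainfall = rng.uniform(800, 1600, n_sites)      # mm/year (synthetic)
peak_flood = 0.5 * area**0.7 * (rainfall / 1000) ** 1.2 * rng.lognormal(0, 0.1, n_sites)

# Regional regressions are commonly fitted in log space: log Q = a + b log A + c log P
X = np.column_stack([np.log(area), np.log(rainfall)])
y = np.log(peak_flood)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

reg = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

print("multiple regression R^2:", r2_score(y_te, reg.predict(X_te)))
print("ANN (MLP) R^2:", r2_score(y_te, ann.predict(X_te)))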

Keywords: artificial neural network, index flood method, multilayer perceptron, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 418
301 Affordable and Environmental Friendly Small Commuter Aircraft Improving European Mobility

Authors: Diego Giuseppe Romano, Gianvito Apuleo, Jiri Duda

Abstract:

Mobility is one of the most important societal needs, for leisure, business activities and health. Transport needs are therefore continuously increasing, with a consequent increase in traffic congestion and pollution. Aeronautical efforts aim at smarter use of infrastructure and at introducing greener concepts. A possible solution to address the above-mentioned topics is the development of a Small Air Transport (SAT) system able to guarantee operability from today's underused airfields in an affordable and green way, while also helping to reduce travel time. In the framework of Horizon 2020, the EU (European Union) has funded the Clean Sky 2 SAT TA (Transverse Activity) initiative to address market innovations able to reduce SAT operational cost and environmental impact while ensuring good levels of operational safety. Nowadays, most of the key technologies to improve passenger comfort and to reduce community noise, DOC (Direct Operating Costs) and pilot workload for SAT have reached an intermediate maturity level, TRL (Technology Readiness Level) 3/4. Thus, the key technologies must be developed, validated and integrated on dedicated ground and flying aircraft demonstrators to reach higher TRLs (5/6). In particular, SAT TA focuses on the integration at aircraft level of the following technologies [1]: 1) low-cost composite wing box and engine nacelle using OoA (Out of Autoclave) technology, LRI (Liquid Resin Infusion) and advanced automation processes; 2) innovative high-lift devices, allowing aircraft operations from short airfields (< 800 m); 3) affordable small-aircraft manufacturing of metallic fuselages using FSW (Friction Stir Welding) and LMD (Laser Metal Deposition); 4) affordable fly-by-wire architecture for small aircraft (CS23 certification rules); 5) more electric systems replacing pneumatic and hydraulic systems, including a high-voltage EPGDS (Electrical Power Generation and Distribution System), a hybrid de-ice system, landing gear and brakes; 6) advanced avionics for small aircraft, reducing pilot workload; 7) advanced cabin comfort with new interior materials and more comfortable seats; 8) a new generation of turboprop engine with reduced fuel consumption, emissions, noise and maintenance costs for 19-seat aircraft; 9) an alternative diesel engine for 9-seat commuter aircraft. To address the above-mentioned market innovations, two different platforms have been designed: the Reference and the Green aircraft. The Reference aircraft is a virtual aircraft designed considering 2014 technologies with an existing engine assuring the required take-off power; the Green aircraft is designed by integrating the technologies addressed in Clean Sky 2. Preliminary integration of the proposed technologies shows an encouraging reduction of emissions and operational costs of small aircraft: about 20% CO2 reduction, about 24% NOx reduction, about 10 dB(A) noise reduction at the measurement point and about 25% DOC reduction. A detailed description of the studies, analyses and validations performed for each technology, as well as the expected benefit at aircraft level, is reported in the present paper.

Keywords: affordable, European, green, mobility, technologies development, travel time reduction

Procedia PDF Downloads 99
300 Mesoporous Na2Ti3O7 Nanotube-Constructed Materials with Hierarchical Architecture: Synthesis and Properties

Authors: Neumoin Anton Ivanovich, Opra Denis Pavlovich

Abstract:

Materials based on titanium oxide compounds are widely used in areas such as solar energy, photocatalysis, the food industry and hygiene products, biomedical technologies, etc. Demand for them has also formed in the battery industry (an example is the commercialization of Li4Ti5O12), where much attention has recently been paid to the development of next-generation systems and technologies, such as sodium-ion batteries. This dictates the need to search for new materials with improved characteristics, as well as for ways to obtain them that meet the requirements of scalability. One way to address these problems is the creation of nanomaterials, which often possess a set of physicochemical properties radically different from those of their micro- or macroscopic counterparts. At the same time, it is important to control the texture (specific surface area, porosity) of such materials. In view of the above, the hydrothermal technique, among other methods, appears suitable, as it allows a wide range of control over the synthesis conditions. In the present study, a method was developed for the preparation of mesoporous nanostructured sodium trititanate (Na2Ti3O7) with a hierarchical architecture. The materials were synthesized by hydrothermal processing and exhibit a complex, hierarchically organized two-level architecture: at the first level of the hierarchy, the materials consist of particles with a rough surface, and at the second level, of one-dimensional nanotubes. The products were found to have a high specific surface area and porosity with a narrow pore size distribution (about 6 nm). Specific surface area and porosity are important characteristics of functional materials, which largely determine the possibilities and directions of their practical application. Electrochemical impedance spectroscopy data show that the resulting sodium trititanate has a sufficiently high electrical conductivity. The synthesized hierarchically organized, porous sodium trititanate nanoarchitecture is therefore expected to be of practical interest, for example, in the field of next-generation electrochemical energy storage and conversion devices.

Keywords: sodium trititanate, hierarchical materials, mesoporosity, nanotubes, hydrothermal synthesis

Procedia PDF Downloads 107
299 Developing a SOA-Based E-Healthcare Systems

Authors: Hend Albassam, Nouf Alrumaih

Abstract:

Nowadays, we are in the age of technology and communication, and there is no doubt that technologies such as the Internet can offer many advantages to many business fields; the health field is no exception. In fact, using the Internet provides a new path to improving the quality of health care throughout the world. E-healthcare offers many advantages, such as efficiency (by reducing costs and avoiding duplicate diagnostics), empowerment of patients (by enabling them to access their medical records), enhanced quality of healthcare, and information exchange and communication between healthcare organizations. Many problems result from using paper as a means of communication; paper-based prescriptions are one example. Usually, the doctor writes a prescription and gives it to the patient, who in turn carries it to the pharmacy. The pharmacist then fills the prescription and gives the medication to the patient. Sometimes the pharmacist has difficulty reading the doctor's handwriting, and the patient could alter or counterfeit the prescription. These problems, and many others, heighten the need to improve the quality of healthcare. This project sets out to develop a distributed e-healthcare system that offers some e-health features and addresses some of the above-mentioned problems. The developed system provides an electronic health record (EHR) and enables communication between separate health care organizations such as the clinic, the pharmacy and the laboratory. To develop this system, the Service Oriented Architecture (SOA) is adopted as a design approach, which helps to design several independent modules that communicate using web services. The layering design pattern is used in designing each module, as it provides reusability by allowing the business-logic layer to be reused by different higher layers, such as the web service or the website in our system. The experimental analysis has shown that the project has successfully achieved its aims of solving the problems related to paper-based healthcare systems and that it enables different health organizations to communicate effectively. It implements four independent modules: healthcare provider, pharmacy, laboratory, and medication information provider. Each module provides different functionality and is used by a different type of user. These modules interoperate with each other using a set of web services.
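The following Python sketch gives a minimal, hypothetical illustration of the layering pattern described above for one module: a data-access layer, a reusable business-logic layer, and a thin web-service facade exposing the same logic as JSON so that other modules (e.g. the pharmacy or laboratory) can call it. All class and method names are invented for illustration and are not the project's actual code.

import json

class PrescriptionRepository:                 # data-access layer
    def __init__(self):
        self._store = {}

    def save(self, prescription_id, record):
        self._store[prescription_id] = record

    def find(self, prescription_id):
        return self._store.get(prescription_id)

class PrescriptionService:                    # business-logic layer (reusable)
    def __init__(self, repository):
        self._repository = repository

    def issue(self, prescription_id, patient, drug, dose):
        record = {"patient": patient, "drug": drug, "dose": dose, "dispensed": False}
        self._repository.save(prescription_id, record)
        return record

    def dispense(self, prescription_id):
        record = self._repository.find(prescription_id)
        if record is None or record["dispensed"]:
            raise ValueError("prescription unknown or already dispensed")
        record["dispensed"] = True
        return record

class PrescriptionWebService:                 # service facade behind a web-service endpoint
    def __init__(self, service):
        self._service = service

    def handle_dispense(self, request_json):
        payload = json.loads(request_json)
        record = self._service.dispense(payload["prescription_id"])
        return json.dumps({"status": "ok", "record": record})

# The same PrescriptionService could also back a website layer,
# which is the reusability the layering pattern provides.
service = PrescriptionService(PrescriptionRepository())
service.issue("RX-1", "patient-42", "amoxicillin", "500 mg")
ws = PrescriptionWebService(service)
print(ws.handle_dispense('{"prescription_id": "RX-1"}'))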

Keywords: e-health, services oriented architecture (SOA), web services, interoperability

Procedia PDF Downloads 304
298 A Facile One Step Modification of Poly(dimethylsiloxane) via Smart Polymers for Biomicrofluidics

Authors: A. Aslihan Gokaltun, Martin L. Yarmush, Ayse Asatekin, O. Berk Usta

Abstract:

Poly(dimethylsiloxane) (PDMS) is one of the most widely used materials in the fabrication of microfluidic devices. It is easily patterned and can replicate features down to nanometers. Its flexibility, gas permeability that allows oxygenation, and low cost also drive its wide adoption. However, a major drawback of PDMS is its hydrophobicity and fast hydrophobic recovery after surface hydrophilization. This results in significant non-specific adsorption of proteins as well as of small hydrophobic molecules such as therapeutic drugs, limiting the utility of PDMS in biomedical microfluidic circuitry. While silicon, glass, and thermoplastics have been used, they come with problems of their own, such as rigidity, high cost, and special tooling needs, which limit their use to a smaller user base. Many strategies to alleviate these common problems with PDMS lack general practical applicability or have limited shelf lives in terms of the modifications they achieve. This restricts large-scale implementation and adoption by industrial and research communities. Accordingly, we aim to tailor biocompatible PDMS surfaces by developing a simple, one-step bulk modification approach with novel smart materials to reduce non-specific molecular adsorption and to stabilize long-term cell analysis with PDMS substrates. Smart polymers blended with PDMS during device manufacture spontaneously segregate to the surface when in contact with aqueous solutions and create a < 1 nm layer that reduces non-specific adsorption of organic molecules and biomolecules. Our methods are fully compatible with existing PDMS device manufacture protocols without any additional processing steps. We have demonstrated that our modified PDMS microfluidic system is effective at blocking the adsorption of proteins while retaining the viability of primary rat hepatocytes and preserving the biocompatibility, oxygen permeability, and transparency of the material. We expect this work will enable the development of fouling-resistant biomedical materials from microfluidics to hospital surfaces and tubing.

Keywords: cell culture, microfluidics, non-specific protein adsorption, PDMS, smart polymers

Procedia PDF Downloads 294
297 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech

Authors: Monica Gonzalez Machorro

Abstract:

Dementia is hard to diagnose because of the lack of early physical symptoms, and early recognition is key to improving patients' living conditions. Speech is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech, and BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer's disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized; the subset is balanced for class, age, and gender, and data processing involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest on 20 acoustic features extracted with the librosa library in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and the short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations of the audio into 2D representations, and a linear layer is added. The pre-trained model used is 'hubert-large-ls960-ft'. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that the proposed method reaches a 69% balanced accuracy. This suggests that the speech representations encoded in the self-supervised ASR-based model capture acoustic cues of AD and MCI.
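A minimal sketch of the classification head described above is given below, assuming the Hugging Face transformers implementation of HuBERT; the checkpoint identifier, training loop, and data pipeline are assumptions and may differ from the study's setup. Frame-level HuBERT representations are average-pooled over time and passed through a single linear layer.

import torch
import torch.nn as nn
from transformers import HubertModel

class HubertClassifier(nn.Module):
    def __init__(self, checkpoint="facebook/hubert-large-ls960-ft", num_classes=2):
        super().__init__()
        # assumed Hub identifier for the 'hubert-large-ls960-ft' checkpoint
        self.encoder = HubertModel.from_pretrained(checkpoint)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_classes)

    def forward(self, input_values):
        # (batch, frames, hidden) -> mean over the time axis -> (batch, hidden)
        hidden = self.encoder(input_values).last_hidden_state
        pooled = hidden.mean(dim=1)
        return self.head(pooled)

# 10-second segments at 16 kHz, as described above (random waveform for illustration)
model = HubertClassifier()
waveform = torch.randn(1, 16000 * 10)
logits = model(waveform)
print(logits.shape)   # torch.Size([1, 2])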

Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment

Procedia PDF Downloads 127
296 Experimental and Theoretical Methods to Increase Core Damping for a Sandwich Cantilever Beam

Authors: Iyd Eqqab Maree, Moouyad Ibrahim Abbood

Abstract:

The purpose of this study is to predict the damping effect for a steel cantilever beam using two methods of passive viscoelastic constrained-layer damping. The first method is a MATLAB program based on the Ross, Kerwin and Unger (RKU) model for passive viscoelastic damping. The second is an experimental (frequency domain) method, in which the half-power bandwidth method is used to determine the system loss factors of the damped steel cantilever beam. The RKU method has been applied to a cantilever beam because the beam is a major structural element, and the prediction may further be utilized for different kinds of structural applications according to design requirements in many industries. In this damping method, a simple cantilever beam is turned into a sandwich structure, usually by using a viscoelastic material as the core, to ensure the damping effect. The use of viscoelastic layers constrained between elastic layers is known to be effective for damping flexural vibrations of structures over a wide range of frequencies. The energy dissipated in these arrangements is due to shear deformation in the viscoelastic layers, which occurs as a result of the flexural vibration of the structure. The theory of dynamic stability of elastic systems deals with the study of vibrations induced by pulsating loads that are parametric with respect to certain forms of deformation. There is very good agreement between the experimental results and the theoretical findings. The main aims of this study are to find the transition region for the damped steel cantilever beam (4 mm and 8 mm thickness) from the experimental work and from the theoretical prediction (MATLAB R2011a). It is shown both experimentally and theoretically that the transition region for the two specimens occurs at a modal frequency between mode 1 and mode 2, which gives the best damping, the maximum loss factor and the maximum damping ratio; thus this type of viscoelastic core material (3M468) is very appropriate for use in the automotive industry and in any mechanical application whose modal frequencies fall between mode 1 and mode 2.
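A minimal Python sketch of the half-power bandwidth calculation referred to above is given below (illustrative only, not the study's MATLAB code): the loss factor is estimated as the bandwidth between the two points where the frequency-response magnitude falls to 1/sqrt(2) of its peak, divided by the resonant frequency (eta = (f2 - f1) / fn).

import numpy as np

def half_power_loss_factor(freq, magnitude):
    """Loss factor from a measured FRF magnitude around a single, clean resonance.
    Assumes the magnitude rises monotonically up to the peak and falls after it."""
    freq = np.asarray(freq, float)
    magnitude = np.asarray(magnitude, float)
    i_peak = int(np.argmax(magnitude))
    fn = freq[i_peak]
    half_power = magnitude[i_peak] / np.sqrt(2.0)     # the -3 dB level

    # Interpolate the half-power crossing on each side of the resonance peak
    left = np.interp(half_power, magnitude[: i_peak + 1], freq[: i_peak + 1])
    right = np.interp(half_power, magnitude[i_peak:][::-1], freq[i_peak:][::-1])
    return (right - left) / fn

# Synthetic single-degree-of-freedom resonance as a quick check (true eta = 0.05)
f = np.linspace(10, 60, 2000)
fn_true, eta_true = 35.0, 0.05
H = 1.0 / np.sqrt((1 - (f / fn_true) ** 2) ** 2 + (eta_true * f / fn_true) ** 2)
print(half_power_loss_factor(f, H))   # close to 0.05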

Keywords: 3M-468 material core, loss factor, frequency domain method, MATLAB

Procedia PDF Downloads 271