Search results for: evaluation questionnaire for parents
522 A Clinical Cutoff to Identify Metabolically Unhealthy Obese and Normal-Weight Phenotype in Young Adults
Authors: Lívia Pinheiro Carvalho, Luciana Di Thommazo-Luporini, Rafael Luís Luporini, José Carlos Bonjorno Junior, Renata Pedrolongo Basso Vanelli, Manoel Carneiro de Oliveira Junior, Rodolfo de Paula Vieira, Renata Trimer, Renata G. Mendes, Mylène Aubertin-Leheudre, Audrey Borghi-Silva
Abstract:
Rationale: Cardiorespiratory fitness (CRF) and functional capacity in young obese and normal-weight people are associated with metabolic and cardiovascular diseases and mortality. However, it remains unclear whether the metabolically healthy (MH) or at-risk (AR) phenotype influences cardiorespiratory fitness, not only in vulnerable populations such as obese adults but also in normal-weight people. The HOMA insulin resistance index (HI) and the leptin-adiponectin ratio (LA) are strong markers for characterizing these phenotypes, and we hypothesized that they are associated with physical fitness. We also hypothesized that an easy and feasible exercise test could identify a subpopulation at risk of developing metabolic and related disorders. Methods: Thirty-nine sedentary men and women (20-45 y; 18.5
Keywords: aerobic capacity, exercise, fitness, metabolism, obesity, 6MST
Procedia PDF Downloads 353
521 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients
Authors: Abhijit Trailokya
Abstract:
Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. The guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status of high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India and enrolled male and female patients with high-risk dyslipidemia, aged 18 to 65 years, who had visited their respective physician at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs were analysed. Results: 3089 patients were enrolled in the study, of which 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to an inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician. The physician's lack of awareness of recent treatment guidelines might also contribute to patients' poor adherence, as might inadequate explanation of the benefits and risks of a medication and insufficient consideration of the patient's lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had comorbid CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the target LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on suboptimal dosages of statins, so more aggressive, higher-dose statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.
Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins
Procedia PDF Downloads 201
520 Ethical, Legal and Societal Aspects of Unmanned Aircraft in Defence
Authors: Henning Lahmann, Benjamyn I. Scott, Bart Custers
Abstract:
Suboptimal adoption of AI in defence organisations carries risks for the protection of the freedom, safety, and security of society. Despite the vast opportunities that defence AI technology presents, there are also a variety of ethical, legal, and societal concerns. To ensure the successful use of AI technology by the military, ethical, legal, and societal aspects (ELSA) need to be considered, and the concerns they raise continuously addressed at all levels. This includes ELSA considerations during the design, manufacturing and maintenance of AI-based systems, as well as their utilisation via appropriate military doctrine and training. This raises the question of how defence organisations can remain strategically competitive and at the edge of military innovation while respecting the values of their citizens. This paper will explain the set-up and share preliminary results of a 4-year research project commissioned by the National Research Council in the Netherlands on the ethical, legal, and societal aspects of AI in defence. The project plans to develop a future-proof, independent, and consultative ecosystem for the responsible use of AI in the defence domain. In order to achieve this, the Lab shall devise a context-dependent methodology that focuses on the ‘analysis’, ‘design’ and ‘evaluation’ of the ELSA of AI-based applications within the military context, which include inter alia unmanned aircraft. This is bolstered as the Lab also recognises and complements the existing methods with regard to human-machine teaming, explainable algorithms, and value-sensitive design. Such methods will be modified for the military context and applied to pertinent case studies. These case studies include, among others, the application of autonomous robots (incl. semi-autonomous) and AI-based methods against cognitive warfare. As the perception of the application of AI in the military context, by both society and defence personnel, is important, the Lab will study how these perceptions evolve and vary in different contexts. Furthermore, the Lab will monitor developments in the global technological, military and societal spheres, as they may influence people’s perception. Although the emphasis of the research project is on different forms of AI in defence, it is organised around several case studies. One of these case studies is on unmanned aircraft, which will also be the focus of the paper. Hence, ethical, legal, and societal aspects of unmanned aircraft in the defence domain will be discussed in detail, including but not limited to privacy issues. Typical other issues concern security (for people, objects, data or other aircraft), privacy (sensitive data, hindrance, annoyance, data collection, function creep), chilling effects, the PlayStation mentality, and PTSD.
Keywords: autonomous weapon systems, unmanned aircraft, human-machine teaming, meaningful human control, value-sensitive design
Procedia PDF Downloads 93
519 Human-Carnivore Interaction: Patterns, Causes and Perceptions of Local Herders of Hoper Valley in Central Karakoram National Park, Pakistan
Authors: Saeed Abbas, Rahilla Tabassum, Haider Abbas, Babar Khan, Shahid Hussain, Muhammad Zafar Khan, Fazal Karim, Yawar Abbas, Rizwan Karim
Abstract:
Human–carnivore conflict is considered a major conservation and rural livelihood concern because many carnivore species have been heavily victimized due to elevated levels of conflict with communities. As in other snow leopard range countries, this situation prevails in Pakistan, where WWF is currently working under the Asia High Mountain Project (AHMP) in Gilgit-Baltistan. Mitigating such conflicts requires a firm understanding of grazing and predation patterns, including human-carnivore interaction. For this purpose we conducted a survey in Hoper valley (one of the AHMP project sites in Pakistan) in August 2013 through a questionnaire-based survey and unstructured interviews covering 647 of the 900 households permanently residing in the project area. The valley, spread over 409 km² between 36°7'46"N and 74°49'2"E at 2,900 m asl in the Karakoram range, is considered an important habitat of the snow leopard and associated prey species such as the Himalayan ibex. The valley is home to 8,100 Brusho people (an ancient tribe of northern Pakistan) dependent on agro-pastoral livelihoods, including farming and livestock rearing. The total number of livestock reported was 15,481, of which 8,346 (53.91%) were sheep, 3,546 (22.91%) goats, 2,193 (14.16%) cows, 903 (5.83%) yaks, 508 (3.28%) bulls, 28 (0.18%) donkeys, 27 (0.17%) zo/zomo (a cross-breed of yak and cow), and 4 (0.03%) horses. Eighty-three percent of respondents (n=542 households) confirmed the loss of livestock during the last year, July 2012 to June 2013, accounting for 2,246 (14.51%) animals. The major reasons for livestock loss were predation by large carnivores such as snow leopards and wolves (1,710; 76.14%), followed by disease (536; 23.86%). Of the total predation cases, the snow leopard is suspected to have killed 1,478 animals (86.43%). Among livestock, sheep were the major prey of the snow leopard (810; 55%), followed by goats (484; 32.7%), cows (151; 10.21%), yaks (15; 1.015%), zo/zomo (7; 0.5%) and donkeys (1; 0.07%). The reason for the mass depredation of sheep and goats is that they tend to browse on twigs of bushes and graze on soft grass near cliffs. They are also considered very active compared to other species, moving quickly and covering a larger grazing area. This makes them more vulnerable to snow leopard attack. The majority (1,283; 75%) of livestock killings by predators occurred during the warm season (May-September) in alpine and sub-alpine pastures, and the remainder (427; 25%) occurred in the winter season near settlements in the valley. It was evident from this study that snow leopard kills outside the pen (1,351; 79.76%) far outnumbered those inside the pen (359; 20.24%). Assessing the economic loss from livestock predation, we found that the total loss in the study area equals PKR 11,230,000 (USD 105,797), which is about PKR 17,357 (USD 163.51) per household per year. The economic loss incurred by the locals due to predation is quite significant, given that the average cash income per household per year is PKR 85,000 (USD 800.75).
Keywords: carnivores, conflict, predation, livelihood, conservation, rural, snow leopard, livestock
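As a quick check, the per-household figures reported above can be reproduced directly from the totals in the abstract (a minimal sketch; the derived share of annual cash income is an illustration, not a figure from the study):

```python
# Reproduce the reported per-household predation loss from the abstract's totals.
surveyed_households = 647
total_loss_pkr = 11_230_000
total_loss_usd = 105_797
avg_cash_income_pkr = 85_000

loss_per_household_pkr = total_loss_pkr / surveyed_households   # ~17,357 PKR, as reported
loss_per_household_usd = total_loss_usd / surveyed_households   # ~163.5 USD, as reported
income_share = loss_per_household_pkr / avg_cash_income_pkr     # ~0.20, i.e. roughly 20% of cash income

print(f"{loss_per_household_pkr:,.0f} PKR (~{loss_per_household_usd:.2f} USD) per household per year,"
      f" about {income_share:.0%} of average annual cash income")
```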
Procedia PDF Downloads 347
518 Electron Bernstein Wave Heating in the Toroidally Magnetized System
Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten
Abstract:
The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH is a way to heat the electrons in a plasma by resonant absorption of electromagnetic waves. The energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). Electron Cyclotron Resonance Heating (ECRH) at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher-harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization. The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii’s continuity and heat balance equations. This code was initially benchmarked with experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles, and the modeling is compared with the data from the experiments.
Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS
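For reference, the cold-plasma relations behind the cut-off and UHR discussion above can be summarized as follows (a standard textbook sketch, not taken from the abstract; n_e is the electron density and B the magnetic field). X-mode radiation launched from the low-field side is evanescent between the right-hand cut-off and the UHR, which is why tunneling, UHR coupling, and conversion to the EBW become the relevant deposition channels above the cut-off density.

```latex
% Standard cold-plasma expressions (textbook relations, not results from the abstract)
\[
\omega_{pe}^{2} = \frac{n_e e^{2}}{\varepsilon_0 m_e}, \qquad
\omega_{ce} = \frac{eB}{m_e}, \qquad
\omega_{UH}^{2} = \omega_{pe}^{2} + \omega_{ce}^{2},
\]
\[
\omega_{R} = \tfrac{1}{2}\!\left(\omega_{ce} + \sqrt{\omega_{ce}^{2} + 4\,\omega_{pe}^{2}}\right)
\quad \text{(right-hand X-mode cut-off)} .
\]
```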
Procedia PDF Downloads 95
517 Neurotoxic Effects Assessment of Metformin in Danio rerio
Authors: Gustavo Axel Elizalde-Velázquez
Abstract:
Metformin is the first line of oral therapy for type II diabetes and is also employed as a treatment for other indications, such as polycystic ovary syndrome, cancer, and COVID-19. Recent data suggest it is the aspirin of the 21st century due to its antioxidant and anti-aging effects. However, an increasing number of recent articles indicate that its long-term consumption generates mitochondrial impairment. To date, it is known that metformin increases the biogenesis of Alzheimer's amyloid peptides via up-regulation of BACE1 transcription, but further information related to brain damage after its consumption is missing. Bearing in mind the above, this work aimed to establish whether or not chronic exposure to metformin alters swimming behavior and induces neurotoxicity in Danio rerio adults. For this purpose, 250 adult Danio rerio were assigned to six tanks of 50 L capacity. Four of the six systems contained 50 fish, while the remaining two had 25 fish (≈1 male:1 female ratio). Each system with 50 fish was allocated one of three metformin treatment concentrations (1, 20, and 40 μg/L), with one system as the control treatment. The systems with 25 fish, on the other hand, were used as positive controls for acetylcholinesterase (10 μg/L of atrazine) and oxidative stress (3 μg/L of atrazine). After four months of exposure, a mean of 32 fish (S.D. ± 2) per MET treatment group survived, and these were used for the evaluation of behavior with the novel tank test. After the behavioral assessment, we collected the blood and brains of all fish from all treatment groups. For blood collection, fish were anesthetized with an MS-222 solution (150 mg/L), while for brain collection, fish were euthanized using the hypothermic shock method (2–4 °C). Blood was used to determine CASP3 activity and the percentage of apoptotic cells with the TUNEL assay, and brains were used to evaluate acetylcholinesterase activity, oxidative damage, and gene expression. After chronic exposure, MET-exposed fish exhibited less swimming activity than control fish. Moreover, compared with the control group, MET significantly inhibited the activity of AChE and induced oxidative damage in the brain of fish. Concerning gene expression, MET significantly upregulated the expression of Nrf1, Nrf2, BAX, p53, BACE1, APP, and PSEN1, and downregulated CASP3 and CASP9. Although MET did not overexpress the CASP3 gene, we observed a marked rise in the activity of this enzyme in the blood of fish exposed to MET compared to the control group, which was then confirmed by a high number of apoptotic cells in the TUNEL assay. To the best of our knowledge, this is the first study that delivers evidence of oxidative impairment, apoptosis, AChE alteration, and overexpression of B-amyloid-related genes in the brain of fish exposed to metformin.
Keywords: AChE inhibition, CASP3 activity, novel tank test, oxidative damage, TUNEL assay
Procedia PDF Downloads 86
516 Comparison of Two Methods of Cryopreservation of Testicular Tissue from Prepubertal Lambs
Authors: Rensson Homero Celiz Ygnacio, Marco Aurélio Schiavo Novaes, Lucy Vanessa Sulca Ñaupas, Ana Paula Ribeiro Rodrigues
Abstract:
The cryopreservation of testicular tissue emerges as an alternative for preserving the reproductive potential of individuals who cannot yet produce sperm but who will undergo treatments that may affect their fertility (e.g., chemotherapy). Therefore, the present work aims to compare two cryopreservation methods (slow freezing and vitrification) for testicular tissue of prepubertal lambs. To obtain the testicular tissue, the animals were castrated and the testicles were collected immediately into a physiological solution supplemented with antibiotics. In the laboratory, the testes were split into small pieces. Each testicular fragment measured 3×3×1 mm³ and was placed in a dish containing Minimum Essential Medium (MEM-HEPES). The fragments were distributed randomly into non-cryopreserved (fresh control), slow freezing (SF), and vitrification groups. For the SF procedure, two fragments from a given male were placed in a 2.0 mL cryogenic vial containing 1.0 mL MEM-HEPES supplemented with 20% fetal bovine serum (FBS) and 20% dimethylsulfoxide (DMSO). Tubes were placed into a Mr. Frosty™ freezing container with isopropyl alcohol and transferred to a -80 °C freezer for overnight storage. On the next day, each tube was plunged into liquid nitrogen (LN). For vitrification, the ovarian tissue cryosystem (OTC) device was used. Testicular fragments were placed in the OTC device and exposed for four minutes to the first vitrification solution (VS1), composed of MEM-HEPES supplemented with 10 mg/mL bovine serum albumin (BSA), 0.25 M sucrose, 10% ethylene glycol (EG), 10% DMSO and 150 μM alpha-lipoic acid. The VS1 was discarded, and the fragments were then submerged in a second vitrification solution (VS2) of the same composition as VS1 but with 20% EG and 20% DMSO. VS2 was then discarded, and each OTC device containing up to four testicular fragments was closed and immersed in LN. After the storage period, the fragments were removed from the LN, kept at room temperature for one minute, and then immersed in a 37 °C water bath for 30 s. Samples were warmed by sequential immersion in solutions of MEM-HEPES supplemented with 3 mg/mL BSA and decreasing concentrations of sucrose. Hematoxylin-eosin staining was used to analyze the tissue architecture. The scoring scale ranged from 0 to 3, with a score of 0 representing normal morphology and 3 representing severe alteration. The histomorphological evaluation of the testicular tissue shows that, with respect to nuclear alteration (distinction of nucleoli and condensation of nuclei), there are no differences between slow freezing and the control, whereas vitrification presents greater damage (p < 0.05). On the other hand, when evaluating epithelial alteration, we observed that freezing showed scores statistically equal to the control for variables such as retraction of the basement membrane, formation of gaps, and organization of the peritubular cells. The results of the study demonstrate that cryopreservation using the slow freezing method is an excellent tool for the preservation of prepubertal testicular tissue.
Keywords: cryopreservation, slow freezing, vitrification, testicular tissue, lambs
Procedia PDF Downloads 174
515 Evaluation of Tensile Strength of Natural Fibres Reinforced Epoxy Composites Using Fly Ash as Filler Material
Authors: Balwinder Singh, Veerpaul Kaur Mann
Abstract:
A composite material is formed by the combination of two or more phases or materials. Basalt fiber, derived from natural minerals, is being introduced into the polymer composite industry due to its good mechanical properties, similar to those of synthetic fibers, together with its low cost and environmental friendliness. There is also a rising trend towards the use of industrial wastes as fillers in polymer composites, with the aim of improving the properties of the composites. The mechanical properties of fiber-reinforced polymer composites are influenced by various factors such as fiber length, fiber weight %, filler weight %, and filler size. Thus, a detailed study has been done on the characterization of short-chopped basalt fiber-reinforced polymer matrix composites using fly ash as filler. Taguchi’s L9 orthogonal array has been used to develop the composites, considering fiber length (6, 9 and 12 mm), fiber weight % (25, 30 and 35%) and filler weight % (0, 5 and 10%) as input parameters with their respective levels, and a thorough analysis of the mechanical characteristics (tensile strength and impact strength) has been done using ANOVA with the help of MINITAB 14 software. The investigation revealed that fiber weight % is the most significant parameter affecting tensile strength, followed by fiber length and filler weight %, while the impact characterization showed that fiber length is the most significant factor, followed by fly ash weight. The introduction of fly ash proved beneficial in both characterizations, with enhanced values up to 5% fly ash weight. The present study of natural fibre reinforced epoxy composites using fly ash as filler material examines the effect of the input parameters on tensile strength in order to maximize the tensile strength of the composites. Composites were fabricated based on a Taguchi L9 orthogonal array design of experiments using three factors (fibre type, fibre weight % and fly ash %), each at three levels. Optimization of the composition of the natural fibre reinforced composites using ANOVA to obtain maximum tensile strength revealed that natural fibres, along with fly ash, can be successfully used with epoxy resin to prepare polymer matrix composites with good mechanical properties. Paddy: paddy fibre gives high elasticity to the fibre composite due to the approximately hexagonal structure of the cellulose present in paddy fibre. Coir: coir fibre gives lower tensile strength than paddy fibre, as coir fibre is brittle in nature and breaks when pulled, showing lower tensile strength. Banana: banana fibre has the lowest tensile strength in comparison to paddy and coir fibre due to its lower cellulose content. Higher fibre weight leads to a reduction in tensile strength due to an increased number of air-pocket nuclei. Increasing fly ash content also reduces tensile strength, due to non-bonding of fly ash particles with the natural fibre; fly ash is also not as strong as the epoxy resin, leading to a further reduction in tensile strength.
Keywords: tensile strength, epoxy resin, basalt fiber, Taguchi, polymer matrix, natural fiber
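A minimal sketch of the kind of L9/ANOVA analysis described above, assuming hypothetical tensile-strength responses for the nine runs (the factor levels are those listed in the abstract; the response values and the use of Python/statsmodels instead of MINITAB 14 are illustrative assumptions):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Taguchi L9 orthogonal array for three 3-level factors (levels from the abstract);
# the tensile-strength responses are hypothetical placeholders.
l9 = pd.DataFrame({
    "fiber_len_mm":  [6, 6, 6, 9, 9, 9, 12, 12, 12],
    "fiber_wt_pct":  [25, 30, 35, 25, 30, 35, 25, 30, 35],
    "filler_wt_pct": [0, 5, 10, 5, 10, 0, 10, 0, 5],
    "tensile_mpa":   [41.2, 45.8, 39.5, 44.1, 48.3, 40.2, 42.7, 43.9, 38.8],
})

# Treat each factor as categorical and run a type-II ANOVA on the L9 design.
model = ols("tensile_mpa ~ C(fiber_len_mm) + C(fiber_wt_pct) + C(filler_wt_pct)", data=l9).fit()
print(sm.stats.anova_lm(model, typ=2))

# Main-effect means per level (the usual Taguchi response table).
for factor in ["fiber_len_mm", "fiber_wt_pct", "filler_wt_pct"]:
    print(factor, l9.groupby(factor)["tensile_mpa"].mean().round(2).to_dict())
```

The factor with the largest spread between its level means (and the largest ANOVA sum of squares) is the one reported as most significant in the abstract.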
Procedia PDF Downloads 49
514 The MHz Frequency Range EM Induction Device Development and Experimental Study for Low Conductive Objects Detection
Authors: D. Kakulia, L. Shoshiashvili, G. Sapharishvili
Abstract:
The results of the study relate to plastic mine detection research using electromagnetic induction, the development of appropriate equipment, and the evaluation of expected results. Electromagnetic induction sensing is effectively used in the detection of metal objects in the soil and in the discrimination of unexploded ordnance. Metal objects interact well with a low-frequency alternating magnetic field. Their electromagnetic response can be detected at the low-frequency range even when they are placed in the ground. Detection of plastic objects such as plastic mines by electromagnetic induction is associated with difficulties. The interaction of non-conducting bodies or low-conductive objects with a low-frequency alternating magnetic field is very weak. At the high-frequency range, where wave processes already take place, the interaction increases. Interactions with other, distant objects also increase. A complex interference picture is formed, and extraction of useful information becomes difficult. Sensing by electromagnetic induction at the intermediate MHz frequency range is the subject of this research. The concept of detecting plastic mines in this range can be based on studying the electromagnetic response of a non-conductive cavity in a low-conductivity environment, or on detecting the small metal components in plastic mines, taking their constructive features into account. A detector node based on the amplitude and phase detector Analog Devices AD8302 has been developed for experimental studies. The node has two inputs. At one input, the node receives a sinusoidal signal from the generator, to which a transmitting coil is also connected. The receiver coil is attached to the second input of the node. An additional circuit provides an option to amplify the signal output from the receiver coil by 20 dB. The node has two outputs. The voltages obtained at the outputs reflect the ratio of the amplitudes and the phase difference of the input harmonic signals. Experimental measurements were performed at different positions of the transmitter and receiver coils in the frequency range 1-20 MHz. An Arbitrary/Function Generator Tektronix AFG3052C and the eight-channel high-resolution oscilloscope PicoScope 4824 were used in the experiments. Experimental measurements were also performed with a low-conductive test object. The results of the measurements and comparative analysis show the capabilities of the simple detector node and the prospects for its further development in this direction. The results of the experimental measurements are compared with and analyzed against the results of appropriate computer modeling based on the method of auxiliary sources (MAS). The experimental measurements are driven from the MATLAB environment. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation (SRNSF) (grant number NFR 17_523).
Keywords: EM induction sensing, detector, plastic mines, remote sensing
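A minimal sketch of how the two detector-node output voltages can be converted back into an amplitude ratio and a phase difference (the 30 mV/dB and 10 mV/degree slopes and the voltage end points are nominal AD8302 datasheet values, assumed here rather than taken from the abstract; the phase output is unsigned, so only the magnitude of the phase difference is recovered):

```python
def ad8302_decode(v_mag: float, v_phs: float) -> tuple[float, float]:
    """Convert AD8302 output voltages (in volts) to an amplitude ratio and phase difference.

    Assumes nominal datasheet scaling: 30 mV/dB centred on 0.9 V for magnitude,
    and 10 mV/degree with 1.8 V at 0 deg and 0 V at 180 deg for phase.
    """
    gain_db = (v_mag - 0.9) / 0.030    # receive/transmit amplitude ratio, in dB
    phase_deg = (1.8 - v_phs) / 0.010  # unsigned phase difference, 0..180 degrees
    return gain_db, phase_deg

# Example: 0.96 V on the magnitude output and 0.75 V on the phase output
print(ad8302_decode(0.96, 0.75))       # -> (2.0 dB, 105.0 deg)
```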
Procedia PDF Downloads 149
513 Post-Soviet LULC Analysis of Tbilisi, Batumi and Kutaisi Using Remote Sensing and Geo Information Systems
Authors: Lela Gadrani, Mariam Tsitsagi
Abstract:
Humans are part of the urban landscape and are responsible for it. Urbanization is the longest phase in the development of cities; thus, no environment undergoes anthropogenic impact comparable to that of large cities. The post-Soviet period is very interesting in terms of scientific research. The changes that have occurred in these cities since the collapse of the Soviet Union have, to the best of our knowledge, not yet been analyzed. In this context, the aim of this paper is to analyze the changes in land use of three large cities of Georgia (Tbilisi, Kutaisi, Batumi): Tbilisi as the capital city, Batumi as a port city, and Kutaisi as a former industrial center. Data used during the research process are conventionally divided into satellite imagery and supporting materials. For this purpose, large-scale topographic maps (1:10,000) of all three cities were analyzed, along with the Tbilisi General Plans (1896, 1924) and historical maps of Tbilisi and Kutaisi. The main emphasis was placed on the classification of Landsat images. We classified the LULC (Land Use/Land Cover) of all three cities from images taken in 1987 and 2016, using supervised and unsupervised methods. All procedures were performed in ArcGIS 10.3.1 and ENVI 5.0. In each classification we singled out the following classes: built-up area, water bodies, agricultural lands, green cover and bare soil, and calculated the areas occupied by them. In order to check the validity of the obtained results, we additionally used higher-resolution CORONA and Sentinel images. Ultimately, we identified the changes that took place in land use in the post-Soviet period in the above cities. According to the results, a large wave of changes touched Tbilisi and Batumi, though in different periods. It turned out that in the case of Tbilisi, the area of developed territory has increased by 13.9% compared to the 1987 data, which has certainly happened at the expense of agricultural land and green cover: in particular, the area of agricultural lands has decreased by 4.97% and the green cover by 5.67%. It should be noted that Batumi has obviously overtaken the country's capital in terms of development. With the unaided eye it is clear that, in comparison with other regions of Georgia, everything is different in Batumi. In fact, Batumi is the unofficial summer capital of Georgia. Undoubtedly, Batumi’s development is very important both in economic and social terms. However, there is a danger that, under such uneven conditions of urban development, we will eventually get a developed center (Batumi) and multiple underdeveloped peripheries around it. Analysis of the changes in land use is of utmost importance not only for quantitative evaluation of the changes already implemented, but also for future modeling and prognosis of urban development. Raster data containing the land use classes are an integral part of the city's prognostic models.
Keywords: analysis, geo information system, remote sensing, LULC
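A minimal sketch of the unsupervised part of the workflow described above, assuming a stacked Landsat GeoTIFF and using Python (rasterio + scikit-learn) instead of ArcGIS/ENVI; the file name, the number of classes, and the nodata handling are illustrative assumptions:

```python
import numpy as np
import rasterio
from sklearn.cluster import KMeans

# Hypothetical multi-band Landsat stack for one city and one year (e.g. 1987 or 2016).
with rasterio.open("tbilisi_landsat_1987_stack.tif") as src:
    stack = src.read()                                            # shape: (bands, rows, cols)
    pixel_area_ha = abs(src.transform.a * src.transform.e) / 10_000  # m^2 per pixel -> hectares

bands, rows, cols = stack.shape
pixels = stack.reshape(bands, -1).T          # one row per pixel, one column per band
valid = ~np.any(pixels == 0, axis=1)         # crude nodata mask

# Unsupervised classification into 5 spectral classes, to be labelled afterwards as
# built-up area, water bodies, agricultural lands, green cover, and bare soil.
labels = np.full(rows * cols, -1, dtype=int)
labels[valid] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels[valid])

# Area occupied by each class, in hectares.
for cls in range(5):
    print(f"class {cls}: {np.sum(labels == cls) * pixel_area_ha:,.1f} ha")
```

Comparing the per-class areas from the 1987 and 2016 rasters gives the kind of percentage changes quoted above for Tbilisi.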
Procedia PDF Downloads 451
512 Improving the Management Systems of the Ownership Risks in Conditions of Transformation of the Russian Economy
Authors: Mikhail V. Khachaturyan
Abstract:
The article analyzes problems of improving the management systems for ownership risks in the conditions of the transformation of the Russian economy. Among the main sources of threats that business owners should highlight are the inefficient implementation of business models and inefficient interaction with hired managers. In this context, it is particularly important to analyze the relationship between business models and ownership risks. The analysis of this problem appears to be relevant for a number of reasons: firstly, the increased risk appetite of the owner directly affects the business model and the composition of his holdings; secondly, owners with significant stakes in the company are factors in the formation of particular types of risks for owners, and these relations have a significant influence on a firm's competitiveness and ultimately determine its survival; and thirdly, an inefficient system of ownership risk management is one of the main causes of mass bankruptcies, which significantly affects the stable operation of the economy as a whole. The separation of the processes of possession, disposal and use in modern organizations is the cause not only of problems in the interaction between the owner and managers in managing the organization as a whole, but also of asymmetric information about the kinds and forms of the main risks. Managers tend to avoid risky projects and inhibit the diversification of the organization's assets, while owners may insist on the development of such projects, with the aim not only of creating new value for themselves and consumers, but also of increasing the value of the company as a result of increasing capital. Under the separation of ownership and management, evaluation of projects by their risk-yield ratio requires preserving the influence of the owner on the process of developing and making management decisions. It is obvious that without a clearly structured system of participation of the owner in managing the risks of their business, further development is hopeless. In modern conditions of forming a risk management system, owners are compelled to compromise between the desire to increase the organization's ability to produce new value, and consequently increase its value through the implementation of risky projects, and the need to tolerate the cost of lost opportunities for risk diversification. Improving the effectiveness of ownership risk management may also encourage creditors to pursue claims against inefficient owners, which will ultimately contribute to efficient models of ownership control that exclude insolvency scenarios. It is obvious that in modern conditions, the success of the ownership risk management and audit model is largely determined by the ability and willingness of the owner to find a compromise between the potential opportunities for expanding the firm's ability to create new value through risk, and maintaining the current level of new value creation at an acceptable level of risk through the use of diversification models.
Keywords: improving, ownership risks, problem, Russia
Procedia PDF Downloads 349
511 Investigation of Municipal Solid Waste Incineration Filter Cake as Minor Additional Constituent in Cement Production
Authors: Veronica Caprai, Katrin Schollbach, Miruna V. A. Florea, H. J. H. Brouwers
Abstract:
Nowadays, MSWI (Municipal Solid Waste Incineration) bottom ash (BA) produced by Waste-to-Energy (WtE) plants represents the majority of the solid residues derived from MSW incineration. Once processed, the BA is often landfilled, resulting in possible environmental problems, additional costs for the plant, and increasing occupation of public land. In order to limit this phenomenon, European countries such as the Netherlands support the utilization of MSWI BA in the construction field by providing standards for the leaching of contaminants into the environment (Dutch Soil Quality Decree). Commonly, BA has a particle size below 32 mm and a heterogeneous chemical composition, depending on its source. By washing coarser BA, an MSWI sludge is obtained. It is characterized by a high content of heavy metals, chlorides, and sulfates as well as a reduced particle size (below 0.25 mm). To lower its environmental impact, the MSWI sludge is filtered or centrifuged to remove easily soluble contaminants, such as chlorides. However, the content of heavy metals is not easily reduced, compromising its possible application. For lowering the leaching of those contaminants, the use of MSWI residues in combination with cement represents a valuable option, due to the known retention of those ions in the hydrated cement matrix. Among the applications, the European standard for common cement EN 197-1:1992 allows the incorporation into cement of up to 5% by mass of a minor additional constituent (MAC), such as fly ash or blast furnace slag, but also an unspecified filler. Although the resulting filter cake (FC) is widely available, has an appropriate particle size, and has a chemical composition similar to cement, to the best of the authors' knowledge it has not been investigated as a possible MAC in cement production. Therefore, this paper addresses the suitability of MSWI FC as a MAC for CEM I 52.5 R, within a maximum replacement of 5% by mass. After physical and chemical characterization of the raw materials, the crystal phases of the pastes are determined by XRD for three replacement levels (1%, 3%, and 5%) at different ages. Thereafter, the impact of FC on the mechanical and environmental performance of cement is assessed according to EN 196-1 and the Dutch Soil Quality Decree, respectively. The investigation of the reaction products reveals the formation of layered double hydroxides (LDH) in the early stage of the reaction. Mechanically, the presence of FC reduces the 28-day compressive strength by 8% at a replacement of 5 wt.%, compared with pure CEM I 52.5 R without any MAC. In contrast, the flexural strength is not affected by the presence of FC. Environmentally, the Dutch legislation for the leaching of contaminants from unshaped (granular) material is satisfied. Based on the collected results, FC represents a suitable candidate as a MAC in cement production.
Keywords: environmental impact evaluation, minor additional constituent, MSWI residues, X-ray diffraction crystallography
Procedia PDF Downloads 178
510 Rheological Evaluation of a Mucoadhesive Precursor Based on Poloxamer 407 or Polyethylenimine Liquid Crystal System for Buccal Administration
Authors: Jéssica Bernegossi, Lívia Nordi Dovigo, Marlus Chorilli
Abstract:
Mucoadhesive liquid crystalline systems are emerging as delivery systems for the oral cavity. These systems are interesting since they facilitate the targeting of medicines and modify drug release, enabling a reduction in the number of applications made by the patient. The buccal mucosa is permeable, has a rich blood supply, and avoids first-pass metabolism, making it a good route of administration. Two liquid crystal systems were developed using ethoxylated and propoxylated ethyl alcohol (30%) as surfactant, oleic acid (60%) as oil phase and, as the aqueous phase (10%), a dispersion of the polymer polyethylenimine (0.5%) or of the polymer poloxamer 407 (16%), with the intention of application to the buccal mucosa. Initially, the systems were characterized by polarized light microscopy and rheological analysis. For the preparation of the systems, the components described above were added to glass vials and shaken. Then, 30% and 100% artificial saliva were added to each prepared formulation so as to simulate the environment of the oral cavity. For the verification of the system structure, aliquots of the formulations were placed on glass slides, covered with a coverslip, and examined in a polarized light microscope (PLM, Zeiss® Axioskop) at 40x magnification. The formulations were also evaluated for their rheological profile in a TA Instruments® rheometer; rheograms of the selected systems were obtained in flow mode at 37 ºC (98.6 ºF). Under PLM, the formulations containing polyethylenimine and poloxamer 407 without the addition of artificial saliva showed a dark field, indicative of a microemulsion; this was also observed for the formulations to which 30% artificial saliva had been added. The formulation to which 100% artificial saliva had been added proved to be a structured system, since it presented anisotropy with the presence of striae, indicative of a hexagonal liquid crystalline mesophase. On observation of the rheograms, both systems without the addition of artificial saliva showed a Newtonian profile; after the addition of 30% artificial saliva they exhibited non-Newtonian behavior of the pseudoplastic-thixotropic type, and after adding 100% artificial saliva they proved plastic-thixotropic. Furthermore, it is clearly seen that the formulations containing poloxamer 407 have significantly larger shear stresses (15-800 Pa) compared to those containing polyethylenimine (5-50 Pa), indicating their greater plasticity. Thus, the addition of saliva promoted structuring of the system, from a microemulsion to a liquid crystal system, thereby also changing its rheological behavior. The systems have promising characteristics as controlled-release systems for the oral cavity, as they feature good fluidity during application and greater structuring when they come into contact with saliva.
Keywords: liquid crystal system, poloxamer 407, polyethylenimine, rheology
Procedia PDF Downloads 458
509 A Comparative Laboratory Evaluation of Efficacy of Two Fungi: Beauveria bassiana and Acremonium perscinum, on Dichomeris eridantis Meyrick (Lepidoptera: Gelechiidae) Larvae, an Important Pest of Dalbergia sissoo
Authors: Gunjan Srivastava, Shamila Kalia
Abstract:
Dalbergia sissoo Roxb. (Family: Leguminosae; Subfamily: Papilionoideae) is an economically and ecologically important tree species with medicinal value. Of its rich complex of insect fauna, ten species have been recognized as potential pests of nurseries and plantations. The present study was conducted to explore an effective, ecofriendly control of Dichomeris eridantis Meyrick, an important defoliator pest of D. sissoo. Health and environmental concerns demanded devising a bio-intensive pest management strategy and employing ecofriendly measures. In the present laboratory bioassay, two entomopathogenic fungi, Acremonium perscinum and Beauveria bassiana, were tested and compared by evaluating the efficacy of seven different concentrations of each (besides control) against the 3rd, 4th and 5th instar larvae of D. eridantis, on the basis of mean percent mortality data recorded and tabulated for seven days after treatment application. Analysis showed that the two treatments differ significantly from each other. Also, variations amongst instars and durations with respect to mortality were highly significant (p < .001). All their interactions were found to vary significantly. B. bassiana at a spore concentration of 0.25 × 10⁷ spores/ml caused maximum mean percent mortality (62.38%), followed by its 0.25 × 10⁶ spores/ml concentration (56.67%). The mean percent mortalities at the maximum spore concentration (0.054 × 10⁷ spores/ml) and the next highest spore concentration (0.054 × 10⁶ spores/ml) of the A. perscinum treatment were far lower (45.40% and 31.29%, respectively). At 168 hours, the mean percent mortality of larval instars due to both fungal treatments reached its maximum (52.99%), whereas at 24 hours the mean percent mortality remained least (5.70%). In both cases, the treatments were most effective against 3rd instar larvae and least effective against 5th instar larvae. A comparative account of the efficacy of B. bassiana and A. perscinum against the 3rd, 4th and 5th instar larvae of D. eridantis on the 5th, 6th and 7th post-treatment observation days, on the basis of their median lethal concentrations (LC50), proved B. bassiana to be the more potent microbial pathogen of the two for all three instars (3rd, 4th and 5th) of D. eridantis on all three days. Percent mortality of D. eridantis increased in a dose-dependent manner. Koch's postulates tested positive, thus confirming the pathogenicity of B. bassiana against the larval instars of D. eridantis. LC90 values of 0.280 × 10¹¹ spores/ml, 0.301 × 10⁸ spores/ml and 0.262 × 10⁸ spores/ml of B. bassiana were standardized, which can effectively cause mortality of all the larval instars of D. eridantis in the field after the 5th, 6th and 7th day of application, respectively. Therefore, these concentrations can be safely used in nurseries as well as plantations of D. sissoo for effective control of D. eridantis larvae.
Keywords: Acremonium perscinum, Beauveria bassiana, Dalbergia sissoo, Dichomeris eridantis
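A minimal sketch of how median and 90% lethal concentrations of the kind reported above can be estimated from concentration-mortality data by probit regression (the concentrations and mortality counts below are hypothetical placeholders, and Python/statsmodels is an assumed tool, not the one used in the study):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical bioassay data: spore concentration (spores/ml), larvae exposed, larvae dead.
conc   = np.array([2.5e5, 2.5e6, 2.5e7, 2.5e8, 2.5e9])
n_exp  = np.array([30, 30, 30, 30, 30])
n_dead = np.array([4, 9, 19, 25, 29])

x = sm.add_constant(np.log10(conc))
# Binomial GLM with a probit link: probit(p) = a + b * log10(concentration)
fit = sm.GLM(np.column_stack([n_dead, n_exp - n_dead]), x,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
a, b = fit.params

lc50 = 10 ** (-a / b)                      # mortality probability 0.5: probit(0.5) = 0
lc90 = 10 ** ((norm.ppf(0.9) - a) / b)     # probit(0.9) ~ 1.2816
print(f"LC50 ~ {lc50:.2e} spores/ml, LC90 ~ {lc90:.2e} spores/ml")
```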
Procedia PDF Downloads 225
508 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India
Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar
Abstract:
Introduction: About 2.4 million (1.93-3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (930,000) are among women. 5.4% of infections result from mother-to-child transmission (MTCT), and 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at higher risk of adverse pregnancy outcomes. The objectives of the study were to compare obstetric and neonatal outcomes of HIV-positive women with those of low-risk HIV-negative women, and to assess the effect of antiretroviral drugs on preterm birth and intrauterine growth restriction (IUGR). Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. Obstetric outcomes analyzed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes, and intrahepatic cholestasis of pregnancy (ICP). Neonatal outcomes analysed were birth weight, Apgar score, NICU admission, and perinatal transmission. Of the 212 HIV-positive women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 women received duovir, and 80 women received triple-drug therapy, depending upon the time of presentation. Results: The mean age of the 212 HIV-positive women was 25.72 ± 3.6 years; 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section, and 18 (8.5%) delivered vaginally. 178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.03%) received exclusive breastfeeding. When compared to low-risk HIV-negative women (n=238), HIV-positive women were more likely to deliver preterm (OR 1.27), have anemia (OR 1.39) and have intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus and ICP was not increased. Mean birth weight was significantly lower in HIV-positive women (2593.60 ± 499 g) when compared to HIV-negative women (2919 ± 459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Of these, 7 neonates were found to be HIV-positive. The risk of preterm birth (p = 0.039) and IUGR (p = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in women who received ART. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize the maternal and perinatal outcome.
Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth
Procedia PDF Downloads 260
507 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This was possible because of advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows an S-shaped curve; it has three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes its final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective: the dynamics of spreading ideas are different once they are stored in the occipital lobe. The human brain is incapable of further evaluation once it accepts ideas as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed in changing the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, one can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers, and the media.
Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam
Procedia PDF Downloads 48
506 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea
Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park
Abstract:
Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city, which is attributed to the limited number of surface water reservoirs and the high seasonal variation in precipitation. Population growth, in addition to the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches such as multivariate statistics, factor analysis, cluster analysis, and kriging in order to identify the hydrogeochemical processes, characterize the factors controlling the distribution of groundwater geochemistry, and develop risk maps, exploiting data obtained from chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed for major and trace elements using an atomic absorption spectrometer (AAS). Two-dimensional GIS chemical maps of the groundwater provided a powerful tool for detecting potential sites threatened by contamination. The GIS-based maps showed that higher contamination was observed in the central and southern areas, with relatively lower contamination in the northern and southwestern parts. This could be attributed to the effect of irrigation, residual saline water, municipal sewage, and livestock wastes. At well elevations above 85 m, the scatter diagram indicated that the groundwater of the research area was mainly influenced by saline water and NO3. The pH measurements revealed slightly acidic conditions due to dissolved atmospheric CO2 in the soil, while the saline water had a major impact on the higher values of TDS and EC. Based on the cluster analysis results, the groundwater was categorized into three groups: the Ca-HCO3 type of fresh water, the Na-HCO3 type slightly influenced by sea water, and the Ca-Cl and Na-Cl types, which are heavily affected by saline water. The most predominant water type in the study area was Ca-HCO3. Contamination sources and chemical characteristics were identified from the interrelationships revealed by factor analysis and cluster analysis. The chemical elements loading on factor 1 were related to the effect of sea water, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating the groundwater quality. GIS was also a creative tool to visualize and analyze the issues affecting water quality in Miryang city.
Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques
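A minimal sketch of the kriging step described above, assuming the 79 sampling points have been exported with coordinates and one hydrochemical parameter (e.g. NO3); PyKrige, the file name, the column names, and the grid spacing are illustrative assumptions, not the tools used in the study:

```python
import numpy as np
import pandas as pd
from pykrige.ok import OrdinaryKriging

# Hypothetical export of the 79 groundwater samples: x, y in metres, no3 in mg/L.
samples = pd.read_csv("miryang_groundwater_samples.csv")   # columns: x, y, no3

ok = OrdinaryKriging(
    samples["x"].values, samples["y"].values, samples["no3"].values,
    variogram_model="spherical",     # model fitted to the experimental variogram
    enable_plotting=False,
)

# Interpolate onto a regular grid covering the study area.
gridx = np.arange(samples["x"].min(), samples["x"].max(), 250.0)
gridy = np.arange(samples["y"].min(), samples["y"].max(), 250.0)
no3_grid, kriging_variance = ok.execute("grid", gridx, gridy)

# no3_grid can then be written to a raster and overlaid with the GIS chemical maps.
print(no3_grid.shape, float(no3_grid.max()))
```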
Procedia PDF Downloads 168
505 Bio-Medical Equipment Technicians: Crucial Workforce to Improve Quality of Health Services in Rural Remote Hospitals in Nepal
Authors: C. M. Sapkota, B. P. Sapkota
Abstract:
Background: Continuous developments in science and technology are increasing the availability of thousands of medical devices, all of which should be of good quality and used appropriately to address global health challenges. Biomedical devices are becoming ever more indispensable in health service delivery, and among the key workforce responsible for their design, development, regulation, evaluation, and training in their use, the biomedical equipment technician (BMET) is crucial. As pivotal members of the health workforce, biomedical technicians are an essential component of the quality health service delivery mechanism supporting the attainment of the Sustainable Development Goals. Methods: The study was based on a cross-sectional descriptive design. Indicators measuring the quality of health services were assessed in Mechi Zonal Hospital (MZH) and Sagarmatha Zonal Hospital (SZH). Indicators were calculated based on the 2018 hospital utilization and performance data available in the medical record sections of both hospitals. MZH employed a BMET during 2018, but SZH had no BMET in 2018. Focus group discussions with health workers in both hospitals were conducted to validate the hospital records. Client exit interviews were conducted to assess the level of client satisfaction in both hospitals. Results: In MZH there was round-the-clock availability and utilization of radiodiagnostic and laboratory equipment. The operation theatre (OT) was functional throughout the year. The bed occupancy rate in MZH was 97%, but in SZH it was only 63%. In SZH, the OT was functional on only 54% of the days in 2018. A CT scanner had just been installed but was not functional. The computerized X-ray unit in SZH was functional on only 72% of the days. The level of client satisfaction was 87% in MZH but just 43% in SZH. MZH performed all 256 Caesarean sections, but SZH performed only 36% of 210 Caesarean sections in 2018. In the annual performance ranking of government hospitals, MZH was ranked 1st, while SZH was ranked 19th out of 32 referral hospitals nationwide in 2018. Conclusion: Biomedical equipment technicians are crucial members of the human resources for health team, with a pivotal role. Trained and qualified BMET professionals are required within health-care systems in order to design, evaluate, regulate, acquire, maintain, manage, and train on safe medical technologies. They apply knowledge of engineering and technology to health-care systems to ensure the availability, affordability, accessibility, acceptability, and utilization of safe, high-quality, effective, appropriate, and socially acceptable biomedical technology to populations for preventive, promotive, curative, rehabilitative, and palliative care across all levels of health service delivery.
Keywords: biomedical equipment technicians, BMET, human resources for health, HRH, quality health service, rural hospitals
Procedia PDF Downloads 126
504 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches
Authors: Erin Lawlor
Abstract:
In 2008, Time magazine named terrorist rehabilitation as one of the best ideas of the year. The term deradicalisation has become synonymous with rehabilitation within security discourse. The allure of a “quick fix” when managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes, with a distinct lack of exploration into the drivers for a person to disengage or deradicalise from violence. It has been argued that, in tackling a snowballing issue, interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measuring, and monitoring means that there is a distinct lack of an evidence base for deradicalisation being a genuine process or phenomenon, leading academics to retrospectively attempt to design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject. This lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation are issues that affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma, both in the direct form of being a victim of an attack and in the indirect context of witnesses, children, and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of what deradicalisation and public health issues are, asking: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? What theory are public health interventions devised from? What does success look like in both fields? From this base, current deradicalisation practices are then explored through examples of work already being carried out. Critiques can be broken into the discussion points of language, the difficulties of conducting empirical studies, and the issues around outcome measurement that deradicalisation interventions face. This study argues that a public health approach towards deradicalisation offers the opportunity to bring clarity to the definitions of radicalisation, identify what could be modified through intervention, and offer insights into the evaluation of interventions. As opposed to simply focusing on one element of deradicalisation and analysing it in isolation, a public health approach allows for what the literature has pointed out is missing: a comprehensive analysis of current interventions and information on creating efficacy-monitoring systems. Interventions, policies, guidance, and practices in both the UK and Australia will be compared and contrasted, owing to the joint nature of this research between Sheffield Hallam University and La Trobe, Melbourne.
Keywords: radicalisation, deradicalisation, violent extremism, public health
Procedia PDF Downloads 66503 Comparing Practices of Swimming in the Netherlands against a Global Model for Integrated Development of Mass and High Performance Sport: Perceptions of Coaches
Authors: Melissa de Zeeuw, Peter Smolianov, Arnold Bohl
Abstract:
This study was designed to help improve international performance as well as increase swimming participation in the Netherlands. Over 200 sources of literature on sport delivery systems from 28 Australasian, North and South American, Western and Eastern European countries were analyzed to construct a globally applicable model of high performance swimming integrated with mass participation, comprising the following seven elements across three levels: Micro level (operations, processes, and methodologies for the development of individual athletes): 1. Talent search and development, 2. Advanced athlete support. Meso level (infrastructures, personnel, and services enabling sport programs): 3. Training centers, 4. Competition systems, 5. Intellectual services. Macro level (socio-economic, cultural, legislative, and organizational): 6. Partnerships with supporting agencies, 7. Balanced and integrated funding and structures of mass and elite sport. This model emerged from the integration of instruments that have been used to analyse and compare national sport systems. The model has received scholarly validation and has been shown to be a framework for program analysis that is not culturally bound. It has recently been accepted as a model for further understanding North American sport systems, including (in chronological order of publication) US rugby, tennis, soccer, swimming and volleyball. The above model was used to design a questionnaire of 42 statements reflecting desired practices. The statements were validated by 12 international experts, including executives from sport governing bodies, academics who have published on high performance and sport development, and swimming coaches and administrators. This study used both highly structured and open-ended qualitative analysis tools, including a survey of swim coaches in which open responses accompanied structured questions. After collection of the surveys, semi-structured discussions with Federation coaches were conducted to add triangulation to the findings. Lastly, a content analysis of Dutch Swimming's website and organizational documentation was conducted. A representative sample of 1,600 Dutch swim coaches and administrators was contacted via email addresses from the Royal Dutch Swimming Federation's database. Fully completed questionnaires were returned by 122 coaches from all of the country's key regions, for a response rate of 7.63%, higher than the response rate of the previously mentioned US studies that used the same model and method. Results suggest possible enhancements at the macro level (e.g., greater public and corporate support to prepare and hire more coaches and to address the lack of facilities, funding and publicity at the mass participation level in order to make swimming affordable for all), at the meso level (e.g., comprehensive education for all coaches and a full spectrum of swimming pools, particularly 50-meter pools), and at the micro level (e.g., better preparation of athletes for a future outside swimming and better use of swimmers to stimulate swimming development). Best Dutch swimming management practices (e.g., comprehensive support to the most talented swimmers who win Olympic medals) as well as relevant international practices available for transfer to the Netherlands (e.g., high school competitions) are discussed.Keywords: sport development, high performance, mass participation, swimming
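As an illustration only (not part of the original study), a minimal Python sketch of the seven-element, three-level framework as a simple data structure, together with the reported response-rate arithmetic (122 completed questionnaires out of 1,600 contacts). The element names are paraphrased from the abstract and the layout is an assumption.

```python
# Hypothetical representation of the three-level, seven-element framework
framework = {
    "micro": ["talent search and development", "advanced athlete support"],
    "meso": ["training centers", "competition systems", "intellectual services"],
    "macro": [
        "partnerships with supporting agencies",
        "balanced and integrated funding and structures of mass and elite sport",
    ],
}
assert sum(len(elements) for elements in framework.values()) == 7

# Response-rate arithmetic reported in the abstract
response_rate = 122 / 1600 * 100          # completed questionnaires / sample size
print(f"Response rate: {response_rate:.3f}%")   # 7.625%, i.e. roughly 7.63%
```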
Procedia PDF Downloads 205502 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, owing to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It is therefore necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. The objective of this work was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha⁻¹, T2 = 25 kg N ha⁻¹, T3 = 75 kg N ha⁻¹, and T4 = 100 kg N ha⁻¹) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants were supplied with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, producing an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. Matlab© R2022b was also used for the implementation and performance analysis of the model. Performance was evaluated with a set of metrics including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an accuracy of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses and can optimize the estimation of foliar N, allowing rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
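The authors implemented the classifier in Matlab; purely as an illustration of the kind of pipeline described (four labeled folders of 224x224 RGB blocks fed to a ResNet-50 with a four-class output head), here is a minimal PyTorch sketch. The folder path, hyperparameters, and use of ImageNet pretraining are assumptions, not details taken from the paper.

```python
# Illustrative PyTorch analogue of the described workflow (the study used MATLAB R2022b).
# Paths, batch size, learning rate and epoch count are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),      # blocks are already 224x224 pixels
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Image bank: four folders (T1..T4), one per nitrogen dose, 500 images each
train_set = datasets.ImageFolder("image_bank/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# ResNet-50 backbone with a new four-class head for the N-dose classes
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Accuracy, F1, specificity, AUC and precision could then be computed on a held-out split exactly as the abstract describes, for example with scikit-learn's metrics module.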
Procedia PDF Downloads 19501 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf
Authors: Abderrazak Bannari, Ghadeer Kadhem
Abstract:
The objectives of this paper are the validation and evaluation of MBES-CARIS BASE surface data performance for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00' N, 50° 33' E). To achieve these objectives, bathymetric attributed grid files (X, Y, and depth) generated from the coverage of ship-track MBES data with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). They were then brought into ArcGIS and converted into a raster format in five steps: (1) exportation of the GEBCO BASE surface data to an ASCII file; (2) conversion of the ASCII file to a point shapefile; (3) extraction of the points covering the water boundary of the Kingdom of Bahrain; (4) multiplication of the depth values by -1 to obtain negative values; and (5) interpolation with the simple kriging method in the ArcMap environment to generate a new raster bathymetric grid surface of 30×30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2,200 bathymetric points were extracted from a medium-scale nautical map (1:100 000) covering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced and overlaid on the MBES-CARIS raster bathymetric grid surface generated in step 5, and homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong coefficient of determination (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and the derived MBES-CARIS depths when only the shallow areas with depths of less than 10 m are considered (about 800 validation points). When only deeper areas (> 10 m) are considered, R² is equal to 0.73 and the RMSE is ± 2.43 m, while for the totality of the 2,200 validation points, including all depths, R² is still significant (0.81) with a satisfactory RMSE (± 1.57 m). This significant variation can be caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and a rough seafloor probably affect the acquired MBES raw data, and the interpolation of missing values between MBES acquisition swath lines (ship-tracked sounding data) may not reflect the true depths of these missed areas. Overall, however, the MBES-CARIS data are well suited for bathymetric mapping of shallow-water areas.Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water
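As a small illustration of the validation step described above (comparing homologous nautical-chart depths with the kriged MBES-CARIS depths via R² and RMSE), here is a minimal Python sketch; the function name and the handful of depth pairs are assumptions for demonstration only, not values from the study.

```python
# Illustrative sketch (not the authors' ArcGIS workflow): R² and RMSE between
# chart depths and interpolated MBES-CARIS depths at homologous points.
import numpy as np

def validate(chart_depth: np.ndarray, mbes_depth: np.ndarray):
    """Return (r_squared, rmse) for homologous depth points."""
    residuals = chart_depth - mbes_depth
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((chart_depth - chart_depth.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, rmse

# Hypothetical depth pairs (metres, negative = below datum)
chart = np.array([-2.1, -5.4, -8.0, -12.3, -20.5])
mbes = np.array([-2.4, -5.1, -8.6, -13.0, -19.2])
r2, rmse = validate(chart, mbes)
print(f"R² = {r2:.2f}, RMSE = {rmse:.2f} m")
```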
Procedia PDF Downloads 381500 Repair of Thermoplastic Composites for Structural Applications
Authors: Philippe Castaing, Thomas Jollivet
Abstract:
As a result of their advantages, i.e. recyclability, weldability and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautic) for structural applications. Indeed, in the next ten years, environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of damage is due to impact, and 85% of repaired damage is on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, the replacement of sections or panels is economically difficult, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged part in thermoplastic composites and recovers the initial mechanical properties. The classification of impact damage is not easy: speaking of low-energy impact (less than 35 J) can be totally wrong when high speeds, small thicknesses or thermoplastic resins are considered. Crash and perforation at higher energies create severe damage and the structures are replaced without repair, so we consider here only the damage due to low-energy impacts, which for laminates consists of transverse cracking, delamination and fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part due to resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to fiber rupture and consequently create more damage. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain. The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; validation of the repair by mechanical characterization (compression). In this study, impact tests were performed at various levels of energy on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy creating damage in the resin without fiber rupture. The extent of the damage was identified by ultrasonic (US) inspection and micrographic observations through the part thickness. The samples were additionally characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming; after reconsolidation, the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of repair after low-energy impact on thermoplastic composites, as the samples recover their properties. In this first step of the study, the "repair" is made by reconsolidation on a thermoforming press, but an in situ process to reconsolidate the damaged parts could be envisaged.Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic
Procedia PDF Downloads 304499 Simulating an Interprofessional Hospital Day Shift: A Student Interprofessional (IP) Collaborative Learning Activity
Authors: Fiona Jensen, Barb Goodwin, Nancy Kleiman, Rhonda Usunier
Abstract:
Background: Clinical simulation is now a common component of many health profession curricula in preparation for clinical practice. In the Rady Faculty of Health Sciences (RFHS), college leads in simulation and interprofessional (IP) education planned an eight-hour simulated hospital day shift in which seventy students from six health professions across two campuses learned with each other in a safe, realistic environment. Learning about interprofessional collaboration, an expected competency for many health professions upon graduation, was a primary focus of the simulation event. Method: Faculty representatives from the Colleges of Nursing, Medicine, Pharmacy, and Rehabilitation Sciences (Physical Therapy, Occupational Therapy, Respiratory Therapy) worked together to plan the IP event in a simulation facility in the College of Nursing. Each college provided a faculty mentor to guide students from its own profession. Students were placed in interprofessional teams consisting of a nurse, a physician and a pharmacist, with respiratory, occupational and physical therapists shared across teams depending on the needs of the patients. Eight patient scenarios were role-played by health profession students, who had been provided with their patient's story shortly before the event. Each team was guided by a facilitator. Results and Outcomes: On the morning of the event, all students gathered in a large group to meet mentors and facilitators and receive a brief overview of the six competencies for effective collaboration and the session objectives. Students, assuming their own professional roles, were provided with their patient's chart at the beginning of the shift, met with their team, and then completed profession-specific assessments. Shortly into the shift, IP team rounds began, guided by the team facilitator. During the shift, each patient role-played a spontaneous health incident, which required collaboration between the IP team members for assessment and management. The afternoon concluded with team rounds, a collaborative management plan, and a facilitated debrief. Conclusions: During the debrief sessions, students responded to set questions related to the session learning objectives and described many positive learning moments. We believe that we have a sustainable simulation IP collaborative learning opportunity that can be embedded into curricula and has the capacity to grow to include more health profession faculties and students. Opportunities are being explored at the administrative level in the RFHS to offer this event more frequently in the academic year to reach more students. In addition, a formally structured event evaluation tool would provide important feedback to event organizers and the colleges about the significance of the simulation event for student learning.Keywords: simulation, collaboration, teams, interprofessional
Procedia PDF Downloads 130498 Presenting Research-Based Mindfulness Tools for Corporate Wellness
Authors: Dana Zelicha
Abstract:
The objective of this paper is to present innovative mindfulness tools specifically designed by OWBA (The Well Being Agency) for organisations and corporate wellness programmes. The OWBA Mindfulness Tools (OWBA-MT) consist of practical mindfulness exercises to educate and train employees and business leaders to think, feel, and act more mindfully. Among these cutting-edge interventions are Mindful Meetings, Mindful Decision Making and Unitasking activities, intended to cultivate mindful communication and compassion in the workplace and transform organisational culture. In addition to targeting CEOs and leaders within large corporations, OWBA-MT is also directed at the needs of specific populations, such as entrepreneurs' resilience and women's empowerment. The goals of the OWBA-MT are threefold: to inform, inspire and implement. The first goal is to inform participants about the relationship between workplace stress, distractibility and miscommunication in the framework of mindfulness. The second goal is for the audience to be inspired to share those practices with other members of their organisation. The final objective is to equip participants with the tools to foster a compassionate, mindful and well-balanced work environment. To assess these tools, a 6-week case study was conducted as part of an employee wellness programme for a large international corporation. The OWBA-MT were introduced in a workshop forum once a week, with participants practising the tools both in the office and at home. The workshops took place one day a week (2 hours each), with themes and exercises varying weekly. To reinforce practice at home, participants received reflection forms and guided meditations online; materials were sent via email at the same time each day to ensure consistency and participation. To evaluate the effectiveness of the mindfulness intervention, improvements in four categories were measured: listening skills, mindfulness levels, prioritising skills and happiness levels. These factors were assessed using online self-reported questionnaires administered at the start of the intervention and again 4 weeks after completion. The measures included the Mindfulness Attention Awareness Scale (MAAS), Listening Skills Inventory (LSI), Time Management Behaviour Scale (TMBS) and a modified version of the Oxford Happiness Questionnaire (OHQ). All four parameters showed significant improvements from the start of the programme to the 4-week follow-up. Participant testimonials showed high levels of satisfaction, and the overall results indicate that the OWBA-MT intervention had a substantial positive impact on the corporation. These results suggest that OWBA-MT can improve employees' capacities to listen and work well with others, to manage time effectively, and to experience enhanced satisfaction both at work and in life. Although corporate mindfulness programmes have proven to be effective, the challenge remains to sustain engagement at home between training sessions and to implement the tools beyond the scope of the intervention. OWBA-MT has offered an innovative approach to reinforcing engagement at home by sending daily online materials outside the workshop forum with a personalised response. Limitations noteworthy for future research include the afterglow effect and a lack of generalisability, as this study was conducted on a small and fairly homogeneous sample.Keywords: corporate mindfulness, listening skills, mindful leadership, mindfulness tools, organisational well being
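To make the pre/post evaluation concrete, here is a minimal Python sketch of a paired comparison of baseline and 4-week follow-up questionnaire scores, of the kind that could be applied to the MAAS, LSI, TMBS and OHQ measures. The score values, sample size and choice of a paired t-test are assumptions, not the authors' reported analysis.

```python
# Illustrative sketch only: paired pre/post comparison of questionnaire scores.
# The eight hypothetical scores mimic a MAAS-style 1-6 scale.
import numpy as np
from scipy import stats

pre = np.array([3.1, 3.4, 2.9, 3.8, 3.2, 3.0, 3.6, 3.3])    # baseline
post = np.array([3.9, 3.8, 3.5, 4.2, 3.7, 3.4, 4.1, 3.9])   # 4-week follow-up

t_stat, p_value = stats.ttest_rel(post, pre)                  # paired t-test
print(f"mean change = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```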
Procedia PDF Downloads 243497 Estimating the Efficiency of a Meta-Cognitive Intervention Program to Reduce the Risk Factors of Teenage Drivers with Attention Deficit Hyperactivity Disorder While Driving
Authors: Navah Z. Ratzon, Talia Glick, Iris Manor
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is a chronic disorder that affects the sufferer's functioning throughout life and in various spheres of activity, including driving. Difficulties in cognitive functioning and executive functions are often part and parcel of the ADHD diagnosis and thus form a risk factor in driving. Studies examining the effectiveness of intervention programs for improving and rehabilitating driving in typical teenagers have been relatively few, and studies of similar programs for teenagers with ADHD have been especially scarce. The aim of the present study was to examine the effectiveness of a metacognitive occupational therapy intervention program for reducing risk factors in driving among teenagers with ADHD. The study included 37 teenagers aged 17 to 19: 23 teenagers with ADHD, divided into experimental (11) and control (12) groups, as well as 14 non-ADHD teenagers forming a second control group. All teenagers taking part in the study were examined in the Tel Aviv University driving lab and underwent cognitive diagnoses and a driving simulator test. Every subject in the intervention group took part in three assessment meetings and two metacognitive treatment meetings. The control groups took part in two assessment meetings, with a follow-up meeting three months later. In all the study's groups, the treatment's effectiveness was tested by comparing monitoring results on the driving simulator at the first and second evaluations. In addition, the driving of five subjects from the intervention group was monitored continuously for a month prior to the start of the intervention, a month during the intervention phase, and another month until the end of the intervention. In the ADHD control group, the driving of four subjects was monitored for a period of three months from the end of the first evaluation. The study's findings were affected by the fact that the ADHD control group differed from the two other groups, exhibiting ADHD characteristics manifested by impaired executive functions and lower metacognitive abilities relative to their peers. The study found partial, moderate, non-significant correlations between driving skills and cognitive functions, executive functions, and perceptions and attitudes towards driving. According to the driving simulator test results and the limited sampling of actual driving, a metacognitive occupational therapy intervention may be effective in reducing risk factors in driving among teenagers with ADHD relative to their peers with and without ADHD. In summary, the results of the present study indicate a positive direction that speaks to the viability of using a metacognitive occupational therapy intervention program for reducing risk factors in driving. A further study is required that includes a larger number of subjects, adds actual driving monitoring hours, and assigns subjects randomly to the various groups.Keywords: ADHD, driving, driving monitoring, metacognitive intervention, occupational therapy, simulator, teenagers
Procedia PDF Downloads 306496 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost
Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku
Abstract:
Vermicompost is the product of a decomposition process that uses various species of worms to break down a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals by vermicompost, but its application to the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and atomic absorption spectrometry (AAS). Optimization of the washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the responses from the laboratory experimental results. This study also investigated the application of machine learning models, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), which were evaluated using the coefficient of determination (R²) and the mean square error (MSE). Removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for batch process remediation. Optimization of the experimental factors, carried out with the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 g, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH of 7.71. Removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for column process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing agreement between experimental and predicted results. For the batch and column processes, respectively, R² for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted findings. For the batch and column processes, the ANFIS coefficient of determination was 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost. It is therefore recommended to use machine learning models to predict the removal of crude oil from contaminated soil using vermicompost.Keywords: ANFIS, ANN, crude-oil, contaminated soil, remediation and vermicompost
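To make the modelling step concrete, here is a minimal Python sketch of the kind of ANN described: a small feed-forward network predicting removal efficiency from the four washing factors, evaluated with R² and MSE. The synthetic dataset, network size and scikit-learn implementation are assumptions for illustration, not the authors' actual model or data.

```python
# Illustrative sketch only: feed-forward ANN regressing removal efficiency (%)
# on adsorbent dosage (g), adsorbate concentration (g/ml), contact time (min), pH.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical factor ranges loosely inspired by the reported optimum
X = rng.uniform([10, 20, 5, 4], [50, 100, 40, 10], size=(60, 4))
y = 30 + 0.9 * X[:, 0] + 0.2 * X[:, 2] - 2.0 * np.abs(X[:, 3] - 7.7) + rng.normal(0, 2, 60)
y = np.clip(y, 0, 100)   # removal efficiency bounded to 0-100%

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
ann.fit(X_train, y_train)
pred = ann.predict(X_test)
print("R2:", r2_score(y_test, pred), "MSE:", mean_squared_error(y_test, pred))
```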
Procedia PDF Downloads 111495 Boussinesq Model for Dam-Break Flow Analysis
Authors: Najibullah M, Soumendra Nath Kuiry
Abstract:
Dams and reservoirs are valued for their invaluable contributions to irrigation, water supply, flood control, electricity generation, etc., which improve the prosperity and wealth of societies across the world. At the same time, a dam breach can cause a devastating flood that threatens human lives and property. Failures of large dams fortunately remain very rare events; nevertheless, a number of occurrences have been recorded around the world, corresponding on average to one to two failures worldwide every year, and some of these accidents have had catastrophic consequences. It is therefore crucial to predict dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study deals mainly with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the shallow water equations (SWEs), the BEs take into account wave dispersion and the non-hydrostatic pressure distribution. Capturing the dam-break oscillations accurately requires a numerical scheme of at least fourth-order accuracy to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model for dam-break flow analysis that is fourth-order accurate in both space and time, using a combined finite-volume/finite-difference scheme. The spatial discretization of the flux and dispersion terms was achieved through a combination of finite-volume and finite-difference approximations: the flux term was solved using a finite-volume discretization, whereas the bed source and dispersion terms were discretized using a centered finite-difference scheme. Time integration was achieved in two stages, namely a third-order Adams-Bashforth predictor stage and a fourth-order Adams-Moulton corrector stage. The 1D Boussinesq model was implemented in Python 2.7.5. The performance of the developed model was evaluated by comparing its predictions with those of the volume-of-fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure that took place in Pune, India, in 1961. Moreover, this model can predict wave overtopping more accurately than shallow water models for the design of coastal protection structures.Keywords: Boussinesq equation, Coastal protection, Dam-break flow, One-dimensional model
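As a small illustration of the two-stage time integration named above (third-order Adams-Bashforth predictor followed by a fourth-order Adams-Moulton corrector), here is a minimal Python sketch applied to a generic ODE system du/dt = f(u). In the actual model, the spatial discretization of the Boussinesq equations would supply f; the RK4 bootstrap of the multistep history and the placeholder right-hand side are assumptions of this sketch, not details from the paper.

```python
# Minimal sketch of an Adams-Bashforth (3rd order) / Adams-Moulton (4th order)
# predictor-corrector time integrator for du/dt = f(u).
import numpy as np

def rk4_step(f, u, dt):
    """Classical RK4 step, used only to bootstrap the multistep history."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm_integrate(f, u0, dt, nsteps):
    u = np.array(u0, dtype=float)
    history = [f(u)]                      # stores f^n, f^{n-1}, f^{n-2}
    for _ in range(2):                    # two RK4 steps fill the history
        u = rk4_step(f, u, dt)
        history.insert(0, f(u))
    for _ in range(nsteps - 2):
        fn, fnm1, fnm2 = history[0], history[1], history[2]
        # Predictor: third-order Adams-Bashforth
        u_pred = u + dt / 12.0 * (23 * fn - 16 * fnm1 + 5 * fnm2)
        # Corrector: fourth-order Adams-Moulton, using f at the predicted state
        u = u + dt / 24.0 * (9 * f(u_pred) + 19 * fn - 5 * fnm1 + fnm2)
        history.insert(0, f(u))
        history.pop()
    return u

# Placeholder right-hand side (simple linear decay) just to exercise the scheme
if __name__ == "__main__":
    print(abm_integrate(lambda u: -u, [1.0], 0.01, 100))   # approaches exp(-1)
```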
Procedia PDF Downloads 231494 Adapting Cyber Physical Production Systems to Small and Mid-Size Manufacturing Companies
Authors: Yohannes Haile, Dipo Onipede, Jr., Omar Ashour
Abstract:
The main thrust of our research is to determine the Industry 4.0 readiness of small and mid-size manufacturing companies in our region and to assist them in implementing Cyber Physical Production System (CPPS) capabilities. Adopting CPPS capabilities will help organizations realize improved quality, order delivery, and throughput, new value creation, and reduced idle time of machines and work centers in their manufacturing operations. The key metrics for the assessment include the level of intelligence, internal and external connections, responsiveness to internal and external environmental changes, capabilities for customization of products with reference to cost, the level of additive manufacturing, automation, and robotics integration, and capabilities to manufacture hybrid products in the near term, where near term is defined as 0 to 18 months. In our initial evaluation of several manufacturing firms that are profitable and successful in what they do, we found a low level of Physical-Digital-Physical (PDP) loop integration in their manufacturing operations, even though 100% of the firms included in this research have specialized manufacturing core competencies that differentiate them from their competitors. The level of automation and robotics integration is in the low to medium range, where low is defined as less than 30% and medium as 30 to 70% of manufacturing operations incorporating automation and robotics; however, there is a significant drive to include these capabilities at the present time. The intelligence and connectivity of manufacturing systems were observed to be low, with significant variance in how manufacturing operations management is tied to Enterprise Resource Planning (ERP). Furthermore, the integration of additive manufacturing in general, and 3D printing in particular, was observed to be low, but with significant upside for integrating it into manufacturing operations in the near future. To hasten the readiness of local and regional manufacturing companies for Industry 4.0 and the transition towards CPPS capabilities, our working group (ADMAR Working Group), in partnership with our university, has been engaging with local and regional manufacturing companies. The goal is to increase awareness, share know-how and capabilities, initiate joint projects, and investigate the possibility of establishing the Center for Cyber Physical Production Systems Innovation (C2P2SI). The center is intended to support local and regional university-industry research on implementing intelligent factories, enhance new value creation through disruptive innovations, support the development of hybrid and data-enhanced products, and foster the creation of digital manufacturing enterprises. All these efforts will enhance local and regional economic development and educate students with well-developed knowledge and applications of cyber physical manufacturing systems and Industry 4.0.Keywords: automation, cyber-physical production system, digital manufacturing enterprises, disruptive innovation, new value creation, physical-digital-physical loop
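Purely as an illustration of the banding used in the assessment (low: below 30%, medium: 30 to 70% of operations incorporating automation and robotics), here is a tiny Python sketch; the function name and the "high" label for values above 70% are assumptions, since the abstract only defines the low and medium bands.

```python
# Illustrative sketch only: classify a firm's automation/robotics integration
# percentage into the bands defined in the abstract (low < 30%, medium 30-70%);
# "high" for values above 70% is an assumed extension.
def integration_band(percent_automated: float) -> str:
    if not 0.0 <= percent_automated <= 100.0:
        raise ValueError("percentage must be between 0 and 100")
    if percent_automated < 30.0:
        return "low"
    if percent_automated <= 70.0:
        return "medium"
    return "high"

print([integration_band(p) for p in (12.0, 45.0, 85.0)])   # ['low', 'medium', 'high']
```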
Procedia PDF Downloads 140493 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties
Authors: Hsyi-En Cheng, Ying-Yi Liou
Abstract:
Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower the working temperature because of their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of the SnO₂ nanotube arrays combines a barrier-free anodic aluminum oxide (AAO) template with atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron-beam evaporation and then anodically oxidized in a 5 wt% phosphoric acid solution at 5°C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15 min over-anodization and a 30 min post-anodization chemical dissolution were used to remove the barrier oxide at the bottom end of the pores and generate a barrier-free AAO template. ALD using TiCl₄ and H₂O as reactants was then carried out to grow a thin layer of SnO₂ on the template, forming the SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in a 5 wt% phosphoric acid solution at 50°C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability, and flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films depend on the deposition temperature. The films grown at 350°C had a low electrical resistivity of 3.6×10⁻³ Ω-cm and were therefore used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films, characterized with an Ecopia HMS-3000 Hall-effect measurement system, were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂-95% N₂ gas mixture was monitored with a Picotest M3510A 6½-digit multimeter. It was found that, at 200°C, the 30-nm-wall SnO₂ nanotube-array sensor shows the highest responsivity to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100°C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100°C.Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide
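The abstract does not state how responsivity was computed from the monitored resistances; as a hedged illustration only, the sketch below assumes the common convention for an n-type oxide exposed to a reducing gas, S = R_air / R_gas, with hypothetical resistance readings rather than measured values.

```python
# Illustrative sketch only: gas-sensor response from resistance monitored in air
# and in the 5% H2 / 95% N2 mixture. The definition S = R_air / R_gas and the
# sample values are assumptions, not taken from the paper.
import numpy as np

def sensor_response(r_air: np.ndarray, r_gas: np.ndarray) -> float:
    """Ratio of baseline resistance in air to resistance under the test gas."""
    return float(np.median(r_air) / np.median(r_gas))

# Hypothetical steady-state resistance readings (ohms) at 200 °C
r_air = np.array([1.52e6, 1.50e6, 1.51e6])
r_h2 = np.array([2.1e5, 2.0e5, 2.2e5])
print(f"Response S = {sensor_response(r_air, r_h2):.1f}")
```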
Procedia PDF Downloads 242