Search results for: transfer data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27032

25892 Macrocycles Enable Tuning of Uranyl Electrochemistry by Lewis Acids

Authors: Amit Kumar, Davide Lionetti, Victor Day, James Blakemore

Abstract:

Capture and activation of the water-soluble uranyl dication (UO22+) remains a challenging problem, as few rational approaches are available for modulating the reactivity of this species. Here, we report the divergent synthesis of heterobimetallic complexes in which UO22+ is held in close proximity to a range of redox-inactive metals by tailored macrocyclic ligands. Crystallographic and spectroscopic studies confirm assembly of homologous UVI(μ-OAr)2Mn+ cores with a range of mono-, di-, and trivalent Lewis acids (Mn+). X-ray diffraction (XRD) and cyclic voltammetry (CV) data suggest preferential binding of K+ in an 18-crown-6-like cavity and Na+ in a 15-crown-5-like cavity, both appended to Schiff-base type sites that selectively bind UO22+. CV data demonstrate that the UVI/UV reduction potential in these complexes shifts positive and the rate of electron transfer decreases with increasing Lewis acidity of the incorporated redox-inactive metals. Moreover, spectroelectrochemical studies confirm the formation of [UV] species in the case of monometallic UO22+ complex, consistent with results from prior studies. However, unique features were observed during spectroelectrochemical studies in the presence of the K+ ion, suggesting new insights into electronic structure may be accessible with the heterobimetallic complexes. Overall, these findings suggest that interactions with Lewis acids could be effectively leveraged for rational tuning of the electronic and thermochemical properties of the 5f elements, reminiscent of strategies more commonly employed with 3d transition metals.

Keywords: electrochemistry, Lewis acid, macrocycle, uranyl

Procedia PDF Downloads 136
25891 Process Integration: Mathematical Model for Contaminant Removal in Refinery Process Stream

Authors: Wasif Mughees, Malik Al-Ahmad

Abstract:

This research presents a graphical design analysis and a mathematical programming technique to identify feasible water allocation schemes that minimize water usage in process units. The core methodology involves mass and property integration. The Tehran Oil Refinery is studied as a case for implementing water pinch technology for the regeneration, reuse and recycling of water streams. Process data are organized in terms of sources and sinks, which are characterized by their properties. Sources are the streams to be allocated; sinks are the units which can accept the sources. Suspended solids (SS) are taken as the single contaminant. The model reduces the amount of freshwater required from 340 to 275 m³/h (19.1%). A redesigned allocation of the water streams was developed. The graphical technique and the mathematical programming give consistent results, which confirms the mass-transfer dependency of the water streams.
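
As a rough illustration of the allocation problem described above, the sketch below minimizes freshwater use for a toy network of two sources and two sinks with a single contaminant; all flow rates and concentration limits are hypothetical and are not the Tehran refinery data.

```python
# A minimal water-allocation sketch in the spirit of the abstract: minimize
# freshwater use subject to sink demands and a single contaminant (SS) limit.
# The two sources, two sinks, flows and concentration limits are hypothetical.
import numpy as np
from scipy.optimize import linprog

supply = np.array([60.0, 80.0])      # source flows available [m3/h]
c_src  = np.array([30.0, 80.0])      # SS concentration of each source [ppm]
demand = np.array([50.0, 100.0])     # sink flow requirements [m3/h]
c_max  = np.array([20.0, 60.0])      # max inlet SS concentration per sink [ppm]

# Decision variables: x[i, j] = flow from source i to sink j, then fw[j].
n_src, n_snk = 2, 2
n_x = n_src * n_snk
cost = np.concatenate([np.zeros(n_x), np.ones(n_snk)])   # minimize total freshwater

A_eq, b_eq = [], []
for j in range(n_snk):               # flow balance: reuse + freshwater = demand
    row = np.zeros(n_x + n_snk)
    for i in range(n_src):
        row[i * n_snk + j] = 1.0
    row[n_x + j] = 1.0
    A_eq.append(row); b_eq.append(demand[j])

A_ub, b_ub = [], []
for j in range(n_snk):               # contaminant load limit (freshwater at 0 ppm)
    row = np.zeros(n_x + n_snk)
    for i in range(n_src):
        row[i * n_snk + j] = c_src[i]
    A_ub.append(row); b_ub.append(c_max[j] * demand[j])
for i in range(n_src):               # source availability
    row = np.zeros(n_x + n_snk)
    row[i * n_snk:(i + 1) * n_snk] = 1.0
    A_ub.append(row); b_ub.append(supply[i])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("minimum freshwater [m3/h]:", res.fun)
print("allocation:", res.x.round(2))
```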

Keywords: minimization, water pinch, process integration, pollution prevention

Procedia PDF Downloads 314
25890 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling

Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva

Abstract:

Relevance: Energy saving has been elevated to public policy in almost all developed countries. One of the avenues for energy efficiency is the improvement and tightening of design standards. State standards accordingly place high demands on the thermal protection of buildings. The constructive arrangement of the layers should ensure normal operation, in which the moisture content of the construction materials does not exceed a certain level. Elevated moisture levels in the walls can be regarded as a defective condition, since moisture significantly reduces the physical, mechanical and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the construction and of prediction of the behavior of structures during their service in the real world leads to increased heat loss and premature aging of structures. Method: Mathematical modeling of heat and mass transfer in materials is widely used to solve this problem. The mathematical model takes into account the coupled heat and mass transfer equations of the interconnected layers [1]. In winter, the thermal and hydraulic conductivity characteristics of the materials are nonlinear and depend on the temperature and moisture content of the material. In this case, the experimental determination of the coefficients of the freezing or thawing material becomes much more difficult. Therefore, in this paper we propose an approximate method for calculating the thermal conductivity and moisture permeability characteristics of freezing or thawing material. Questions: The development of methods for solving the inverse problem of mathematical modeling allows us to answer questions that are closely related to the rational design of enclosures: where the zone of condensation lies in the body of the multilayer enclosure; how and where to place insulation rationally; which constructive measures are necessary to provide for the removal of moisture from the structure; what temperature and humidity conditions are required for the normal operation of the enclosing structure of the premises; and what the longevity of the structure is in terms of the frost resistance of its component materials. Tasks: The proposed mathematical model solves the following problems: assessing the thermophysical condition of designed structures under different operating conditions and selecting appropriate material layers; calculating the temperature field in structurally complex multilayer structures; determining, from temperature and moisture measurements at characteristic points, the thermal characteristics of the materials constituting the surveyed construction; significantly reducing laboratory testing time while eliminating the need for a climatic chamber and expensive instrumented experiments; and simulating real-life situations that arise in multilayer enclosing structures associated with the freezing, thawing, drying and cooling of any layer of the building material.
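
To illustrate the inverse-problem idea in its simplest form, the sketch below fits an unknown thermal conductivity so that a 1D transient heat-conduction forward model matches temperature "measurements" at a few interior points. All material values, boundary temperatures and sensor positions are hypothetical and the model is a single-layer, constant-property simplification, not the coupled heat-and-moisture model proposed in the abstract.

```python
# A minimal inverse-problem sketch: estimate an unknown thermal conductivity
# from interior temperature measurements by least-squares fitting of a 1D
# explicit finite-difference forward model. All parameters are illustrative.
import numpy as np
from scipy.optimize import least_squares

L, nx = 0.30, 31                 # wall thickness [m], grid points
rho_c = 1.8e6                    # volumetric heat capacity [J/(m^3 K)] (assumed)
dt, t_end = 10.0, 3600.0         # time step [s], simulated duration [s]
x = np.linspace(0.0, L, nx)

def forward(k):
    """Explicit finite-difference solution of dT/dt = (k/rho_c) d2T/dx2."""
    alpha = k / rho_c
    dx = x[1] - x[0]
    T = np.full(nx, 20.0)        # initial temperature [deg C]
    T_out, T_in = -10.0, 20.0    # boundary temperatures (Dirichlet, assumed)
    for _ in range(int(t_end / dt)):
        T[0], T[-1] = T_out, T_in
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# Synthetic "measurements" at three interior sensors (in practice: field data).
k_true = 0.8                                 # W/(m K), value to be recovered
sensors = [5, 15, 25]
T_meas = forward(k_true)[sensors] + np.random.default_rng(0).normal(0, 0.05, 3)

def residuals(params):
    return forward(params[0])[sensors] - T_meas

fit = least_squares(residuals, x0=[0.3], bounds=(0.05, 5.0))
print("estimated conductivity k =", fit.x[0])
```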

Keywords: energy saving, inverse problem, heat transfer, multilayer walling

Procedia PDF Downloads 395
25889 Host Preference, Impact of Host Transfer and Insecticide Susceptibility among Aphis gossypii Group (Order: Hemiptera) in Jamaica

Authors: Desireina Delancy, Tannice Hall, Eric Garraway, Dwight Robinson

Abstract:

Aphis gossypii, as a pest, directly damages its host plant by extracting phloem sap (sucking) and indirectly damages it by the transmission of viruses, ultimately affecting the yield of the host. Due to its polyphagous nature, this species affects a wide range of host plants, some of which may serve as a reservoir for colonisation of important crops. In Jamaica, there have been outbreaks of viral plant pathogens that were transmitted by Aphis gossypii. Three such examples are Citrus tristeza virus, Watermelon mosaic virus, and Papaya ringspot virus. Aphis gossypii also heavily colonizes economically significant host plants, including pepper, eggplant, watermelon, cucumber, and hibiscus. To facilitate integrated pest management, it is imperative to understand the biology of the aphid and its host preference. Preliminary work in Jamaica has indicated differences in biology and host preference, as well as host variety within the species. However, to the best of our knowledge, specific details of the fecundity, colony growth, host preference, distribution, and insecticide resistance of Aphis gossypii were unknown. The aim was to investigate the following in relation to Aphis gossypii: the influence of the host plant on colonization, life span, fecundity, population size, and morphology; the impact of host transfer on fecundity and population size as a measure of host preference and host transfer success; and susceptibility to four commonly used insecticides. Fecundity and colony size were documented daily from aphids acclimatized on Capsicum chinense Jacquin 1776, Cucumis sativus Linnaeus 1630, Gossypium hirsutum Linnaeus 1751 and Abelmoschus esculentus (L.) Moench 1794 for three generations. The same measures were used after third-instar aphids were transferred among the hosts as a measure of suitability and success. Mortality, and the fecundity of survivors, were determined after aphids were exposed to varying concentrations of Actara®, Diazinon™, Karate Zeon®, and Pegasus®. Host preference results indicated that, over a 24-day period, Aphis gossypii reached its largest colony size on G. hirsutum (x̄ 381.80), with January – February being the most fecund period. Host transfer results were all significantly different, with the most significant difference occurring for transfers from C. chinense to C. sativus (p < 0.05). Colony sizes were found to increase significantly every 5 days, which has implications for regimes implemented to monitor and evaluate plots. Insecticides ranked on lethality are Karate Zeon® > Actara® > Pegasus® > Diazinon™. The highest LC50 values were obtained with Pegasus® for aphids on G. hirsutum and C. chinense, and with Diazinon™ for those on C. sativus. Survivors of insecticide treatments had colony sizes that were, on average, 98% smaller than those of untreated aphids. Cotton was preferred both in the field and in the glasshouse: it was on cotton that the aphids settled first, had the highest fecundity, and showed the lowest mortality. Cotton can serve as a reservoir for (re)populating other cotton or different host species based on migration due to overcrowding, heavy showers, high wind, or ant attendance. Host transfer success between all three hosts is highly probable within an intercropping system. Survivors of insecticide treatments can successfully repopulate host plants.

Keywords: Aphis gossypii, host-plant preference, colonization sequence, host transfers, insecticide susceptibility

Procedia PDF Downloads 92
25888 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data

Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah

Abstract:

At the level of national statistical institutes, there is a large volume of data which is generally in a format that conditions the method of publication of the information it contains. Each household or business data collection project includes its own dissemination platform for its implementation. The dissemination methods previously used therefore do not promote rapid access to information and, in particular, do not offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to be able to publish all these data on a single platform and to offer the option of linking them with other external data sources. The approach will be applied to data from major national surveys, such as those on employment, poverty and child labor, and the general census of the population of Senegal.
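
A minimal sketch of what publishing one aggregate statistic as Linked Data can look like, using Python's rdflib and the W3C RDF Data Cube vocabulary; the base namespace, dataset name and employment figure below are illustrative placeholders, not actual survey data from Senegal.

```python
# Minimal illustration of exposing an aggregate statistical observation as RDF
# using the W3C Data Cube vocabulary (qb:). The example namespace and the
# employment figure below are placeholders, not real survey values.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/stats/")        # hypothetical base URI
SDMX_DIM = Namespace("http://purl.org/linked-data/sdmx/2009/dimension#")

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

dataset = EX["employment-survey"]
g.add((dataset, RDF.type, QB.DataSet))

obs = EX["obs/2019-dakar"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, dataset))
g.add((obs, SDMX_DIM.refPeriod, Literal("2019", datatype=XSD.gYear)))
g.add((obs, SDMX_DIM.refArea, Literal("Dakar")))
g.add((obs, EX.employmentRate, Literal(42.7, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```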

Keywords: Semantic Web, linked open data, database, statistic

Procedia PDF Downloads 173
25887 Experimental Study on Heat and Mass Transfer of Humidifier for Fuel Cell

Authors: You-Kai Jhang, Yang-Cheng Lu

Abstract:

The major contributions of this study are threefold: designing a new model of planar-membrane humidifier for the Proton Exchange Membrane Fuel Cell (PEMFC), defining an index to measure the effectiveness (εT) of that humidifier, and building an air compressor system to replicate related planar-membrane humidifier experiments. The PEMFC, as a clean energy technology, has become more and more important in recent years due to its reliability and durability. To maintain the efficiency of the fuel cell, the membrane of the PEMFC needs to be kept well hydrated. How to maintain proper membrane humidity is therefore one of the key issues in optimizing a PEMFC. We developed a new humidifier to recycle water vapor from the cathode air outlet so as to maintain the moisture content of the cathode air inlet of a PEMFC. By measuring parameters such as the dry-side air outlet dew point temperature, the dry-side air inlet temperature and humidity, the wet-side air inlet temperature and humidity, and the differential pressure between the dry side and the wet side, we calculated indices including the dew point approach temperature (DPAT), water flux (J), water recovery ratio (WRR), effectiveness (εT), and differential pressure (ΔP). Using these indices, we discussed six topics: sealing effect, flow rate effect, flow direction effect, channel effect, temperature effect, and humidity effect. Gas cylinders are used as the air supply in many humidifier studies, but a gas cylinder depletes quickly during experiments at a 1 kW air flow rate, which makes replication difficult. In order to ensure highly stable air quality and better replication of experimental data, this study designs an air supply system to overcome this difficulty. The experimental results show that the best rate of pressure loss of the humidifier is 0.133×10³ Pa(g)/min at a torque of 25 N·m. The best humidifier performance occurs at air flow rates of 30–40 LPM. The counter-flow configuration moisturizes the dry-side inlet air more effectively than the parallel-flow humidifier. From the performance measurements of channel plates with various rib widths, it is found that the narrower the rib width, the more the performance of the humidifier improves. Raising the channel width at the same hydraulic diameter (Dh) yields higher εT and lower ΔP. Moreover, increasing the dry-side air inlet temperature or humidity leads to lower εT, and when the dry-side air inlet temperature exceeds 50°C, this effect becomes even more pronounced.

Keywords: PEM fuel cell, water management, membrane humidifier, heat and mass transfer, humidifier performance

Procedia PDF Downloads 170
25886 Scale-Up Study of Gas-Liquid Two Phase Flow in Downcomer

Authors: Jayanth Abishek Subramanian, Ramin Dabirian, Ilias Gavrielatos, Ram Mohan, Ovadia Shoham

Abstract:

Downcomers are important conduits for multiphase flow transfer from offshore platforms to the seabed. Uncertainty in the predictions of the pressure drop of multiphase flow between platforms is often dominated by the uncertainty associated with the prediction of holdup and pressure drop in the downcomer. The objective of this study is to conduct an experimental and theoretical scale-up study of the downcomer. A 4-in. diameter vertical test section was designed and constructed to study two-phase flow in a downcomer. The facility is equipped with baffles for flow area restriction, enabling interchangeable annular slot openings between 30% and 61.7%. Also, state-of-the-art instrumentation, the capacitance Wire-Mesh Sensor (WMS), was utilized to acquire the experimental data. A total of 76 experimental data points were acquired, including falling film under 30% and 61.7% annular slot openings for air-water and air-Conosol C200 oil cases, as well as gas carry-under for 30% and 61.7% openings utilizing air-Conosol C200 oil. For all experiments, parameters such as the falling film thickness and velocity, the entrained liquid holdup in the core, the gas void fraction profiles at the cross-sectional area of the liquid column, the void fraction and the gas carry-under were measured. The experimental results indicated that the film thickness and film velocity increase as the flow area is reduced. Also, the increase in film velocity increases the gas entrainment process. Furthermore, the results confirmed that an increase in gas entrainment for the same liquid flow rate leads to an increase in the gas carry-under. A power comparison method was developed to enable evaluation of the Lopez (2011) model, which was created for a full-bore downcomer, against the novel scale-up experimental data acquired from the downcomer with the restricted flow area. Comparison between the experimental data and the model predictions shows a maximum absolute average discrepancy of 22.9% and 21.8% for the falling film thickness and velocity, respectively, and a maximum absolute average discrepancy of 22.2% for the fraction of gas carried with the liquid (oil).

Keywords: two phase flow, falling film, downcomer, wire-mesh sensor

Procedia PDF Downloads 161
25885 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain's subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality's algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurement around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because the other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 120
25884 The Role of Data Protection Officer in Managing Individual Data: Issues and Challenges

Authors: Nazura Abdul Manap, Siti Nur Farah Atiqah Salleh

Abstract:

For decades, the misuse of personal data has been a critical issue. Malaysia has accepted responsibility by implementing the Malaysian Personal Data Protection Act 2010 (PDPA 2010) to secure personal data. After more than a decade, this legislation is set to be revised by the current PDPA 2023 Amendment Bill to align with the world's key personal data protection regulations, such as the European Union General Data Protection Regulation (GDPR). Among the suggested adjustments is the Data User's appointment of a Data Protection Officer (DPO) to ensure the commercial entity's compliance with the PDPA 2010 criteria. The change is expected to be enacted in parliament fairly soon; nevertheless, based on the experience of the Personal Data Protection Department (PDPD) in implementing the Act, it is projected that there will be a slew of additional concerns associated with the DPO mandate. Consequently, the goal of this article is to highlight the issues that the DPO will encounter and how the Personal Data Protection Department should respond to them. The study results were produced using a qualitative technique based on an examination of the current literature. This research reveals that there are probable obstacles to be experienced by the DPO, and thus there should be a definite, clear guideline in place to aid the DPO in executing their tasks. It is argued that appointing a DPO is a wise measure for ensuring that the legal data security requirements are met.

Keywords: guideline, law, data protection officer, personal data

Procedia PDF Downloads 75
25883 Analysis of Taxonomic Compositions, Metabolic Pathways and Antibiotic Resistance Genes in Fish Gut Microbiome by Shotgun Metagenomics

Authors: Anuj Tyagi, Balwinder Singh, Naveen Kumar B. T., Niraj K. Singh

Abstract:

Characterization of diverse microbial communities in a specific environment plays a crucial role in better understanding their functional relationship with the ecosystem. It is now well established that the gut microbiome of fish is not a simple replication of the microbiota of the surrounding local habitat, and extensive species, dietary, physiological and metabolic variations among fishes may have a significant impact on its composition. Moreover, the overuse of antibiotics in human, veterinary and aquaculture medicine has led to the rapid emergence and propagation of antibiotic resistance genes (ARGs) in the aquatic environment. Microbial communities harboring specific ARGs not only gain a preferential edge during selective antibiotic exposure but also pose a significant risk of ARG transfer to other non-resistant bacteria within confined environments. This phenomenon may lead to the emergence of habitat-specific microbial resistomes and the subsequent emergence of virulent antibiotic-resistant pathogens, with severe consequences for fish and consumer health. In this study, the gut microbiota of a freshwater carp (Labeo rohita) was investigated by shotgun metagenomics to understand its taxonomic composition and functional capabilities. Metagenomic DNA, extracted from the fish gut, was subjected to sequencing on an Illumina NextSeq to generate paired-end (PE) 2 x 150 bp sequencing reads. After QC of the raw sequencing data by Trimmomatic, taxonomic analysis by the Kraken2 taxonomic sequence classification system revealed the presence of 36 phyla, 326 families and 985 genera in the fish gut microbiome. At the phylum level, Proteobacteria accounted for more than three-fourths of the total bacterial population, followed by Actinobacteria (14%) and Cyanobacteria (3%). Commonly used probiotic bacteria (Bacillus, Lactobacillus, Streptococcus, and Lactococcus) were found to be very rare in the fish gut. After sequencing data assembly by the MEGAHIT v1.1.2 assembler and the PROKKA automated analysis pipeline, pathway analysis revealed the presence of 1,608 MetaCyc pathways in the fish gut microbiome. Biosynthesis pathways were found to be the most dominant (51%), followed by degradation (39%), energy metabolism (4%) and fermentation (2%). Almost one-third (33%) of the biosynthesis pathways were involved in the synthesis of secondary metabolites. Metabolic pathways for the biosynthesis of 35 antibiotic types were also present, and these accounted for 5% of the overall metabolic pathways in the fish gut microbiome. Fifty-one different types of antibiotic resistance genes (ARGs) belonging to 15 antimicrobial resistance (AMR) gene families and conferring resistance against 24 antibiotic types were detected in the fish gut. More than 90% of the ARGs in the fish gut microbiome were against beta-lactams (penicillins, cephalosporins, penems, and monobactams). Resistance against tetracycline, macrolides, fluoroquinolones, and phenicols ranged from 0.7% to 1.3%. Some of the ARGs for multi-drug resistance were also found to be located on sequences of plasmid origin. The presence of pathogenic bacteria and ARGs on plasmid sequences suggests a potential risk of horizontal gene transfer in the confined gut environment.
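
A compact sketch of the kind of processing chain described above (quality trimming, read classification, assembly, annotation), driven from Python; file names, database paths and option values are placeholders, and the exact command-line invocations should be checked against the locally installed versions of each tool.

```python
# Sketch of the shotgun-metagenomics workflow outlined above, orchestrated with
# subprocess. Paths, database locations and option values are placeholders;
# verify flags against the installed versions of each tool before use.
import subprocess

r1, r2 = "gut_R1.fastq.gz", "gut_R2.fastq.gz"   # hypothetical paired-end reads

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Quality control / trimming with Trimmomatic (paired-end mode).
run(["trimmomatic", "PE", r1, r2,
     "trim_R1.fq.gz", "unpaired_R1.fq.gz", "trim_R2.fq.gz", "unpaired_R2.fq.gz",
     "SLIDINGWINDOW:4:20", "MINLEN:50"])

# 2) Taxonomic classification of reads with Kraken2.
run(["kraken2", "--db", "/path/to/kraken2_db", "--paired",
     "--report", "kraken_report.txt", "--output", "kraken_output.txt",
     "trim_R1.fq.gz", "trim_R2.fq.gz"])

# 3) De novo assembly with MEGAHIT.
run(["megahit", "-1", "trim_R1.fq.gz", "-2", "trim_R2.fq.gz", "-o", "megahit_out"])

# 4) Annotation of assembled contigs with Prokka; pathways and ARGs are then
#    mined from the annotation tables with downstream tools.
run(["prokka", "--outdir", "prokka_out", "--prefix", "gut",
     "megahit_out/final.contigs.fa"])
```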

Keywords: antibiotic resistance, fish gut, metabolic pathways, microbial diversity

Procedia PDF Downloads 140
25882 Investigation of Growth Yield and Antioxidant Activity of Monascus purpureus Extract Isolated from Stirred Tank Bioreactor

Authors: M. Pourshirazi, M. Esmaelifar, A. Aliahmadi, F. Yazdian, A. S. Hatamian Zarami, S. J. Ashrafi

Abstract:

Monascus purpureus is an antioxidant-producing fungus whose secondary metabolites can be used in the drug industry. The growth yield and antioxidant activity of the extract were investigated in 3 L of liquid fermentation medium in a 5-L stirred tank bioreactor (STD) at 30°C, pH 5.93 and darkness for 4 days, with 150 rpm agitation and 40% dissolved oxygen. The results were compared to those of an extract isolated from an Erlenmeyer flask under the same conditions. The growth yield was 0.21 under the STD condition and 0.17 in the Erlenmeyer flask. Furthermore, the IC50 of DPPH scavenging activity was 256.32 µg/ml for the STD extract and 150.43 µg/ml for the flask extract. Our data demonstrate that transferring the growth conditions into the STD caused an increase in growth yield but not in antioxidant activity. Accordingly, there is no direct relationship between growth rate and secondary metabolite formation. More studies determining the mass transfer coefficient and evaluating the hydrodynamic conditions need to be carried out in the future.
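
For readers unfamiliar with how an IC50 such as those reported above is obtained, the sketch below fits a four-parameter logistic curve to DPPH scavenging measurements; the concentration-inhibition pairs are invented illustration data, not the Monascus results.

```python
# Illustrative IC50 estimation from DPPH scavenging data using a 4-parameter
# logistic fit. The concentration/inhibition values are invented for the
# example and are not the Monascus purpureus measurements.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([25, 50, 100, 200, 400, 800])            # extract conc. [ug/ml]
inhib = np.array([12.0, 21.0, 37.0, 55.0, 71.0, 84.0])   # % DPPH scavenging

def logistic4(x, bottom, top, ic50, hill):
    # Sigmoid rising from `bottom` to `top`, reaching the midpoint at x = ic50.
    return bottom + (top - bottom) / (1.0 + (ic50 / x) ** hill)

p0 = [0.0, 100.0, 150.0, 1.0]                             # initial guesses
params, _ = curve_fit(logistic4, conc, inhib, p0=p0, maxfev=10000)
print("estimated IC50 = %.1f ug/ml" % params[2])
```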

Keywords: Monascus purpureus, bioreactor, antioxidant, growth yield

Procedia PDF Downloads 400
25881 Public-Private Partnership Projects in Canada: A Case Study Approach

Authors: Samuel Carpintero

Abstract:

Public-private partnership (PPP) arrangements have emerged all around the world as a response to infrastructure deficits and the need to refurbish existing infrastructure. The motivations of governments for embarking on PPPs for the delivery of public infrastructure are manifold, and include on-time and on-budget delivery as well as access to private project management expertise. The PPP formula has been used by some state governments in the United States and Canada, where the participation of private companies in financing and managing infrastructure projects has increased significantly in the last decade, particularly in the transport sector. On the one hand, this paper examines the various ways these two countries implement PPP arrangements, with a particular focus on risk transfer. The examination of risk transfer in this paper is carried out with reference to the following key PPP risk categories: construction risk, revenue risk, operating risk and availability risk. The main difference between the two countries is that in Canada the demand risk usually remains within the public sector, whereas in the United States this risk is usually transferred to the private concessionaire. The aim is to explore which lessons can be learnt from both models that might be useful for other countries. On the other hand, the paper also analyzes why Spanish companies have been so successful in winning PPP contracts in North America during the past decade. Contrary to the Latin American PPP market, the Spanish companies do not have any cultural advantage in the case of the United States and Canada. Arguably, some relevant reasons for the success of the Spanish groups are their extensive experience in PPP projects (dating back to the late 1960s in some cases), their high technical level (which allows them to be aggressive in their bids), and their good position and track record in the financial markets. The article's empirical base consists of data provided by official sources of both countries as well as information collected through face-to-face interviews with public and private representatives of the stakeholders participating in some of the PPP schemes. Unstructured in-depth interviews were adopted as the means of investigation for this study because of their power to elicit honest and robust responses and to ensure realism in the collection of an overall impression of stakeholders' perspectives.

Keywords: PPP, concession, infrastructure, construction

Procedia PDF Downloads 295
25880 Data Collection Based on the Questionnaire Survey In-Hospital Emergencies

Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala

Abstract:

The methods used for data collection are diverse: electronic media, focus group interviews and short-answer questionnaires [1]. The collection of poor-quality data, resulting, for example, from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data, allows conclusions to be drawn that are not supported by the data, or leads analyses to focus only on the average effect of a program or policy. There are several solutions to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments, or using technologies allowing better "anonymity" in the responses [2]. In this context, we opted to collect good-quality data by conducting a sizeable questionnaire-based survey on hospital emergencies to improve emergency services and alleviate the problems encountered. In this paper, we present our study and detail the steps followed to achieve the collection of relevant, consistent and practical data.

Keywords: data collection, survey, questionnaire, database, data analysis, hospital emergencies

Procedia PDF Downloads 103
25879 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches of sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a result of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = eight species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector’s accuracy as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
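
A condensed sketch of the transfer-learning approach described above, using a pretrained ImageNet backbone with a new classification head; the directory layout, image size and backbone choice are assumptions for illustration and may differ from the Shark Detector's actual configuration, which the authors publish on GitHub.

```python
# Sketch of transfer learning for shark species classification with a
# pretrained CNN backbone. Directory layout, image size and backbone are
# assumptions; the real Shark Detector configuration may differ.
import tensorflow as tf

IMG_SIZE, N_CLASSES = (224, 224), 45   # 45 species, per the abstract

train_ds = tf.keras.utils.image_dataset_from_directory(
    "shark_images/train", image_size=IMG_SIZE, batch_size=32)  # hypothetical path

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False                 # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 scaling
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```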

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 112
25878 Studies of the Corrosion Kinetics of Metal Alloys in Stagnant Simulated Seawater Environment

Authors: G. Kabir, A. M. Mohammed, M. A. Bawa

Abstract:

The paper presents the corrosion behaviors of Naval Brass, an aluminum alloy and carbon steel in simulated seawater under stagnant conditions. The behaviors were characterized with respect to the variation of chloride ion concentration in the range of 3.0 wt% to 3.5 wt% and to exposure time. The weight-loss coupon immersion technique was employed. The weight loss for the various alloys was measured and, based on the obtained results, the corrosion rate was determined. It was found that the corrosion rates of the various alloys are related to the chloride ion concentration, the exposure time and the kinetics of passive film formation of the various alloys. Carbon steel suffers corrosion many folds more than Naval Brass, indicating that the latter alloy exhibits relatively strong resistance to corrosion in the seawater exposure environment. The aluminum alloy exhibited an excellent and beneficial resistance to corrosion, greater than that of the Naval Brass studied. Despite their prohibitive cost, Naval Brass and the aluminum alloy showed beneficial corrosion behavior that can offer a wide range of applications in seashore operations. The corrosion kinetics parameters indicated that the corrosion reaction is limited by diffusion mass transfer of the corrosion reaction elements and is not reaction controlled.
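
For readers following the weight-loss method, the standard conversion from mass loss to corrosion rate is sketched below; the coupon area, exposure time and mass-loss values are illustrative only, not the measured data of this study.

```python
# Standard weight-loss conversion to corrosion rate, CR (mm/y) = 87.6*W/(D*A*T),
# with W the mass loss in mg, D the density in g/cm^3, A the exposed area in
# cm^2 and T the exposure time in hours. Coupon values below are illustrative.
def corrosion_rate_mm_per_year(mass_loss_mg, density_g_cm3, area_cm2, hours):
    return 87.6 * mass_loss_mg / (density_g_cm3 * area_cm2 * hours)

coupons = {
    # name: (mass loss [mg], density [g/cm^3]) -- example numbers only
    "carbon steel":   (48.0, 7.86),
    "naval brass":    (6.5, 8.44),
    "aluminum alloy": (1.2, 2.70),
}
area, t = 10.0, 720.0    # 10 cm^2 coupon, 30-day immersion
for name, (w, rho) in coupons.items():
    print(f"{name}: {corrosion_rate_mm_per_year(w, rho, area, t):.4f} mm/y")
```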

Keywords: alloys, chloride ions concentration, corrosion kinetics, corrosion rate, diffusion mass transfer, exposure time, seawater, weight loss

Procedia PDF Downloads 300
25877 Monte Carlo Risk Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele

Abstract:

Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant (AZEP). The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899, when Walter Hermann Nernst investigated the electric current between metals and solutions; he found that when a dense ceramic is heated, a current of oxygen moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% of the CO2; it also boasts a 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as large as that of its counterparts, and it offers almost zero NOx emissions due to the very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is substituted with the mixed conductive membrane reactor (MCM-reactor). The MCM-reactor is made up of the combustor, the low temperature heat exchanger (LTHX, referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle model was developed in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. This paper discusses the techno-economic and Monte Carlo risk analysis of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout – AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine – AZEP 85% (85% CO2 capture).
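
As a bare-bones illustration of the Monte Carlo risk-analysis idea applied to a power-cycle economic metric, the sketch below propagates assumed input distributions through a simplified levelized-cost expression; the distributions, cost figures and the expression itself are hypothetical placeholders, not the authors' Fortran/MATLAB AZEP model.

```python
# Minimal Monte Carlo risk sketch: propagate uncertainty in a few economic and
# performance inputs to a simplified cost-of-electricity figure. Distributions
# and the cost expression are placeholders, not the AZEP techno-economic model.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

capex = rng.normal(1800.0, 200.0, n)           # specific capital cost [$/kW]
fuel_price = rng.triangular(4.0, 6.0, 9.0, n)  # fuel price [$/GJ]
efficiency = rng.normal(0.47, 0.02, n)         # net cycle efficiency [-]
capacity_factor = rng.uniform(0.80, 0.92, n)

# Simplified levelized cost of electricity [$/MWh]: annualized capital + fuel.
crf = 0.094                                    # capital recovery factor (assumed)
hours = 8760.0 * capacity_factor
lcoe = (capex * 1000.0 * crf) / hours + (3.6 / efficiency) * fuel_price

p5, p50, p95 = np.percentile(lcoe, [5, 50, 95])
print(f"LCOE median {p50:.1f} $/MWh, 90% interval [{p5:.1f}, {p95:.1f}]")
print("probability LCOE exceeds 100 $/MWh:", np.mean(lcoe > 100.0))
```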

Keywords: gas turbine, global warming, green house gases, power plants

Procedia PDF Downloads 470
25876 Federated Learning in Healthcare

Authors: Ananya Gangavarapu

Abstract:

Convolutional Neural Network (CNN) based models are providing diagnostic capabilities on par with medical specialists in many specialty areas. However, collecting medical data for training purposes is very challenging because of the increased regulations around data collection and the privacy concerns around personal health data. Gathering the data becomes even more difficult if the capture devices are edge-based mobile devices (like smartphones) with feeble wireless connectivity in rural/remote areas. In this paper, I would like to highlight the Federated Learning approach to mitigate data privacy and security issues.
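
A minimal sketch of the federated averaging idea behind this approach: each site trains on its own data and only model parameters are aggregated centrally. The model here is a simple logistic regression on synthetic data, purely for illustration.

```python
# Minimal federated averaging (FedAvg) sketch with a logistic-regression model
# trained on synthetic data at three "hospitals"; only weights leave each site.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_clients, rounds, lr = 5, 3, 20, 0.5

def make_client(n):
    # Synthetic local dataset (stands in for private patient records).
    X = rng.normal(size=(n, n_features))
    true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
    y = (X @ true_w + rng.normal(0, 0.5, n) > 0).astype(float)
    return X, y

clients = [make_client(200) for _ in range(n_clients)]

def local_update(w, X, y, epochs=5):
    for _ in range(epochs):                       # local gradient steps
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

w_global = np.zeros(n_features)
sizes = np.array([len(y) for _, y in clients], dtype=float)
for _ in range(rounds):
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    # Weighted average of client models (raw data never leaves the clients).
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("global model weights:", w_global.round(2))
```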

Keywords: deep learning in healthcare, data privacy, federated learning, training in distributed environment

Procedia PDF Downloads 138
25875 Nanoparticle Based Green Inhibitor for Corrosion Protection of Zinc in Acidic Medium

Authors: Neha Parekh, Divya Ladha, Poonam Wadhwani, Nisha Shah

Abstract:

Nanoscale materials have attracted tremendous interest as corrosion inhibitors due to the high surface area they present on metal surfaces. It is well known that zinc oxide nanoparticles have high reactivity towards aqueous acidic solutions. This work presents a new method of incorporating zinc oxide nanoparticles with white sesame seed extract (nano-green inhibitor) for the corrosion protection of zinc in an acidic medium. The morphology of the zinc oxide nanoparticles was investigated by TEM and DLS. The corrosion inhibition efficiency of the green inhibitor and the nano-green inhibitor was determined by gravimetric and electrochemical impedance spectroscopy (EIS) methods. Gravimetric measurements suggested that the nano-green inhibitor is more effective than the green inhibitor. Furthermore, with increasing temperature, the inhibition efficiency increases for both inhibitors. In addition, it was established that the Temkin adsorption isotherm fits well with the experimental data for both inhibitors. The effect of temperature and the Temkin adsorption isotherm revealed a chemisorption mechanism occurring in the system. The activation energy (Ea) and other thermodynamic parameters for the inhibition process were calculated. The EIS data showed that charge transfer controls the corrosion process. The surface morphology of the zinc specimens in the absence and presence of the green inhibitor and the nano-green inhibitor was examined using Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM) techniques. The outcomes indicated the formation of a protective layer over the zinc surface.
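
A short sketch of the routine calculations behind such gravimetric results: inhibition efficiency from weight loss, surface coverage, and a linearized Temkin-isotherm check of θ against ln C. The weight-loss values below are invented for illustration.

```python
# Illustrative gravimetric workup: inhibition efficiency IE% = (W0 - Wi)/W0*100,
# surface coverage theta = IE/100, and a linear fit of theta vs ln(C) as a
# simple Temkin-isotherm check. Numbers are invented, not the reported data.
import numpy as np

w_blank = 52.0                                   # weight loss without inhibitor [mg]
conc = np.array([100, 200, 400, 800, 1600.0])    # inhibitor concentration [ppm]
w_inh = np.array([30.0, 24.0, 18.5, 13.0, 9.0])  # weight loss with inhibitor [mg]

ie = (w_blank - w_inh) / w_blank * 100.0         # inhibition efficiency [%]
theta = ie / 100.0                               # surface coverage

# Temkin isotherm in linearized form: theta = a + b*ln(C); a correlation
# coefficient close to 1 is taken as consistency with Temkin adsorption.
slope, intercept = np.polyfit(np.log(conc), theta, 1)
r = np.corrcoef(np.log(conc), theta)[0, 1]
print("IE%% at highest concentration: %.1f" % ie[-1])
print("Temkin fit: theta = %.3f + %.3f ln(C), r = %.3f" % (intercept, slope, r))
```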

Keywords: corrosion, green inhibitor, nanoparticles, zinc

Procedia PDF Downloads 447
25874 The Utilization of Big Data in Knowledge Management Creation

Authors: Daniel Brian Thompson, Subarmaniam Kannan

Abstract:

The amount of knowledge in the world and within the repositories of organizations has already reached immense proportions and is constantly increasing. To accommodate this growth, Big Data implementations and algorithms are utilized to obtain new or enhanced knowledge for decision-making. The transition from data to knowledge provides the transformational changes that deliver tangible benefits to those implementing these practices. Today, organizations derive knowledge from observations and intuitions, and this information or data is translated into best practices for knowledge acquisition, generation and sharing. Through the widespread use of Big Data, the main intention is to provide information that has been cleaned and analyzed in order to nurture tangible insights that an organization can apply to its knowledge-creation practices based on facts and figures. The translation of data into knowledge generates value for an organization, enabling it to make decisive decisions and proceed with the transition to best practices. Without a strong foundation of knowledge and Big Data, businesses are not able to grow and improve within the competitive environment.

Keywords: big data, knowledge management, data driven, knowledge creation

Procedia PDF Downloads 112
25873 Survey on Data Security Issues Through Cloud Computing Amongst SMEs in Nairobi County, Kenya

Authors: Masese Chuma Benard, Martin Onsiro Ronald

Abstract:

Businesses have been using cloud computing more frequently in recent years because they wish to take advantage of its benefits. However, employing cloud computing also introduces new security concerns, particularly with regard to data security, the potential risks and weaknesses that could be exploited by attackers, and the various tactics and strategies that could be used to lessen these risks. This study examines data security issues in cloud computing amongst SMEs in Nairobi County, Kenya. The study used a sample size of 48, and the research approach was mixed methods. The findings show that the data owner has no control over the cloud merchant's data management procedures, and there is no way to ensure that data is handled legally. This implies that owners lose control over the data stored in the cloud. Data and information stored in the cloud may face a range of availability issues due to internet outages; this can represent a significant risk to data kept in shared clouds. Integrity, availability, and secrecy concerns are all raised.

Keywords: data security, cloud computing, information, information security, small and medium-sized firms (SMEs)

Procedia PDF Downloads 81
25872 Cloud Design for Storing Large Amount of Data

Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás

Abstract:

The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We have tested hypervisors, cloud management tools, storage for all data, and Hadoop to provide data analysis on unstructured data. Providing high availability, virtual network management, logical separation of projects and also rapid deployment of physical servers to our environment was also needed.

Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization

Procedia PDF Downloads 350
25871 Estimation of Missing Values in Aggregate Level Spatial Data

Authors: Amitha Puranik, V. S. Binu, Seena Biju

Abstract:

Missing data is a common problem in spatial analysis, especially at the aggregate level. Missing values can occur in covariates, in the response variable, or in both at a given location. Many missing data techniques are available to estimate the missing values, but not all of these methods can be applied to spatial data, since the data are autocorrelated. Hence there is a need to develop a method that estimates the missing values in both the response variable and the covariates in spatial data by taking account of the spatial autocorrelation. The present study aims to develop a model to estimate missing data points at the aggregate level in spatial data by accounting for (a) spatial autocorrelation of the response variable, (b) spatial autocorrelation of covariates and (c) correlation between covariates and the response variable. Estimating the missing values of spatial data requires a model that explicitly accounts for the spatial autocorrelation. The proposed model not only accounts for spatial autocorrelation but also utilizes the correlation that exists between covariates, within covariates and between the response variable and covariates. Precise estimation of the missing data points in spatial data will result in increased precision of the estimated effects of the independent variables on the response variable in spatial regression analysis.
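
A deliberately simplified sketch of spatially informed imputation, in which a missing aggregate value is filled from its contiguous neighbours; this is only a crude stand-in for the model proposed above, and the 4×4 grid of area-level rates and rook-contiguity weights are invented.

```python
# Simplified illustration of spatially informed imputation: a missing cell is
# filled with the mean of its rook-contiguity neighbours, iterated to
# convergence. This is a crude stand-in for the proposed model; the grid of
# area-level rates below is invented.
import numpy as np

rates = np.array([
    [12.0, 14.0, 15.0, 16.0],
    [11.0, np.nan, 16.0, 18.0],
    [10.0, 12.0, np.nan, 19.0],
    [9.0, 11.0, 14.0, 20.0],
])
missing = np.isnan(rates)
filled = np.where(missing, np.nanmean(rates), rates)   # initial guess

for _ in range(50):                                    # Gauss-Seidel-style passes
    for i, j in zip(*np.where(missing)):
        neigh = []
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < rates.shape[0] and 0 <= nj < rates.shape[1]:
                neigh.append(filled[ni, nj])
        filled[i, j] = np.mean(neigh)                  # neighbourhood average

print(np.round(filled, 2))
```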

Keywords: spatial regression, missing data estimation, spatial autocorrelation, simulation analysis

Procedia PDF Downloads 375
25870 Analysis of Composite Health Risk Indicators Built at a Regional Scale and Fine Resolution to Detect Hotspot Areas

Authors: Julien Caudeville, Muriel Ismert

Abstract:

Analyzing the relationship between environment and health has become a major preoccupation for public health, as evidenced by the emergence of the French national plans for health and environment. These plans have identified the following two priorities: (1) to identify and manage geographic areas where hotspot exposures are suspected to generate a potential hazard to human health; (2) to reduce exposure inequalities. At a regional scale and the fine resolution required for exposure outcomes, environmental monitoring networks are not sufficient to characterize the multidimensionality of the exposure concept. In an attempt to increase the representativeness of spatial exposure assessment approaches, composite risk indicators can be built using additional available databases and theoretical frameworks for combining risk factors. To achieve these objectives, combining data processing and transfer modeling with a spatial approach is a fundamental prerequisite, which implies the need to first overcome several scientific limitations: to define the interest variables and indicators that could be built to associate and describe the global source-effect chain; to link and process data from different sources and different spatial supports; and to develop adapted methods in order to improve spatial data representativeness and resolution. A GIS-based modeling platform for quantifying human exposure to chemical substances (PLAINE: environmental inequalities analysis platform) was used to build health risk indicators within the Lorraine region (France). Those indicators combine chemical substances (in soil, air and water) and noise risk factors. Tools have been developed using modeling, spatial analysis and geostatistical methods to build and discretize the interest variables from different supports and resolutions on a 1 km2 regular grid within the Lorraine region. For example, surface soil concentrations were estimated by developing a kriging method able to integrate surface and point spatial supports. Then, an exposure model developed by INERIS was used to assess the transfer from soil to individual exposure through ingestion pathways. We used the distance from a polluted soil site to build a proxy for contaminated sites. The air indicator combined modeled concentrations and estimated emissions to take into account 30 pollutants in the analysis. For water, drinking water concentrations were compared to drinking water standards to build a score spatialized using a map of the distribution units served. The Lden (day-evening-night) indicator was used to map noise around road infrastructures. Aggregation of the different risk factors was performed using different methodologies in order to discuss the impact of weighting and aggregation procedures on the effectiveness of risk maps for making decisions to safeguard citizens' health. The results permit identification of pollutant sources, determinants of exposure, and potential hotspot areas. A diagnostic tool was developed for stakeholders to visualize and analyze the composite indicators in an operational and accurate manner. The designed support system will be used in many applications and contexts: (1) mapping environmental disparities throughout the Lorraine region; (2) identifying vulnerable populations and determinants of exposure to set priorities and targets for pollution prevention, regulation and remediation; (3) providing an exposure database to quantify relationships between environmental indicators and cancer mortality data provided by the French Regional Health Observatories.

Keywords: health risk, environment, composite indicator, hotspot areas

Procedia PDF Downloads 245
25869 Numerical Simulation of Transient 3D Temperature and Kerf Formation in Laser Fusion Cutting

Authors: Karim Kheloufi, El Hachemi Amara

Abstract:

In the present study, a three-dimensional transient numerical model was developed to study the temperature field and cutting kerf shape during laser fusion cutting. The finite volume model has been constructed, based on the Navier–Stokes equations and energy conservation equation for the description of momentum and heat transport phenomena, and the Volume of Fluid (VOF) method for free surface tracking. The Fresnel absorption model is used to handle the absorption of the incident wave by the surface of the liquid metal and the enthalpy-porosity technique is employed to account for the latent heat during melting and solidification of the material. To model the physical phenomena occurring at the liquid film/gas interface, including momentum/heat transfer, a new approach is proposed which consists of treating friction force, pressure force applied by the gas jet and the heat absorbed by the cutting front surface as source terms incorporated into the governing equations. All these physics are coupled and solved simultaneously in Fluent CFD®. The main objective of using a transient phase change model in the current case is to simulate the dynamics and geometry of a growing laser-cutting generated kerf until it becomes fully developed. The model is used to investigate the effect of some process parameters on temperature fields and the formed kerf geometry.

Keywords: laser cutting, numerical simulation, heat transfer, fluid flow

Procedia PDF Downloads 331
25868 Non-Family Members as Successors of Choice in South African Family Businesses

Authors: Jonathan Marks, Lauren Katz

Abstract:

Family firms are a vital component of a country's stability, prosperity and development, and their sustainability, longevity and continuity are critical. Given the premise that family firms wish to continue the business for the benefit of the family, the family founder/owner is faced with an emotionally charged transition option: either to transfer the family business to a family member or to transfer the firm to a non-family member. The rationale employed by family founders to select non-family members as successors/executives of choice, and the concomitant rationale employed by non-family members to select family firms as employers of choice, have been under-researched in the literature on family business succession planning. This qualitative study used semi-structured interviews to gain access to family firm founders/owners, non-family successors/executives and industry experts on family business. The findings indicated that the rationale for family members to select non-family successors/executives was underpinned by the objective of growing the family firm for the benefit of the family. If non-family members were the most suitable candidates to ensure this outcome, family members were comfortable employing non-family members. Non-family members, despite the knowledge that the benefit lay primarily with family members, chose to work for family firms for personal benefits in terms of wealth, security and close connections. A commonly shared value system was a prerequisite for all respondents. The research study provides insights from family founders/owners, non-family successors/executives, and industry experts on the subject of succession planning outside the family structure.

Keywords: agency theory, family business, institutional logics, non-family successors, Stewardship Theory

Procedia PDF Downloads 365
25867 Association Rules Mining and NOSQL Oriented Document in Big Data

Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub

Abstract:

Big Data refers to recent technologies for manipulating voluminous and unstructured data sets spread over multiple sources, and NoSQL has emerged to handle the problem of unstructured data. Association rule mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases. The algorithm for finding association dependencies parallelizes well with MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce and a document-oriented NoSQL database. A comparative study is given to evaluate the performance of our algorithm against the classical Apriori algorithm.
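
A compact sketch of the frequent-itemset counting at the heart of Apriori, written as map and reduce steps over a handful of transactions; the transactions and support threshold are toy values, and a real deployment would run the same logic over Hadoop and MongoDB collections as the abstract describes.

```python
# Toy frequent-itemset counting expressed as map and reduce phases, mirroring
# how the Apriori candidate-counting step parallelizes under MapReduce.
# Transactions and the support threshold are illustrative only.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
min_support = 3   # absolute support threshold

def map_phase(tx, k):
    """Emit (itemset, 1) pairs for every k-itemset in one transaction."""
    return [(frozenset(c), 1) for c in combinations(sorted(tx), k)]

def reduce_phase(pairs):
    """Sum counts per itemset and keep only the frequent ones."""
    counts = Counter()
    for itemset, one in pairs:
        counts[itemset] += one
    return {s: c for s, c in counts.items() if c >= min_support}

# 1-itemsets first, then 2-itemsets restricted to frequent items (Apriori pruning).
freq1 = reduce_phase([p for tx in transactions for p in map_phase(tx, 1)])
frequent_items = set().union(*freq1)
freq2 = reduce_phase([p for tx in transactions
                      for p in map_phase(tx & frequent_items, 2)])
print("frequent 1-itemsets:", {tuple(s)[0]: c for s, c in freq1.items()})
print("frequent 2-itemsets:", {tuple(sorted(s)): c for s, c in freq2.items()})
```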

Keywords: Apriori, Association rules mining, Big Data, Data Mining, Hadoop, MapReduce, MongoDB, NoSQL

Procedia PDF Downloads 156
25866 Wireless Information Transfer Management and Case Study of a Fire Alarm System in a Residential Building

Authors: Mohsen Azarmjoo, Mehdi Mehdizadeh Koupaei, Maryam Mehdizadeh Koupaei, Asghar Mahdlouei Azar

Abstract:

The increasing prevalence of wireless networks in our daily lives has made them indispensable. The aim of this research is to investigate the management of information transfer in wireless networks and the integration of renewable solar energy resources in a residential building. The focus is on the transmission of electricity and information through wireless networks, as well as the utilization of sensors and wireless fire alarm systems. The research employs a descriptive approach to examine the transmission of electricity and information on a wireless network with electric and optical telephone lines. It also investigates the transmission of signals from sensors and wireless fire alarm systems via radio waves. The methodology includes a detailed analysis of security, comfort conditions, and costs related to the utilization of wireless networks and renewable solar energy resources. The study reveals that it is feasible to transmit electricity over a network cable using two of its wire pairs, without the need for separate power cabling. Additionally, the integration of renewable solar energy systems in residential buildings can reduce dependence on traditional energy carriers. The use of sensors and wireless remote information processing can enhance the safety and efficiency of energy usage in buildings and the surrounding spaces.

Keywords: renewable energy, intelligentization, wireless sensors, fire alarm system

Procedia PDF Downloads 51
25865 Immunization-Data-Quality in Public Health Facilities in the Pastoralist Communities: A Comparative Study Evidence from Afar and Somali Regional States, Ethiopia

Authors: Melaku Tsehay

Abstract:

The Consortium of Christian Relief and Development Associations (CCRDA) and the CORE Group Polio Partners (CGPP) Secretariat have been working with the Global Alliance for Vaccines and Immunization (GAVI) to improve immunization data quality in the Afar and Somali Regional States. The main aim of this study was to compare the quality of immunization data before and after the above interventions in health facilities in the pastoralist communities in Ethiopia. To this end, a comparative cross-sectional study was conducted on 51 health facilities. The baseline data were collected in May 2019 and the endline data in August 2021. The WHO data quality self-assessment (DQS) tool was used to collect data. A significant improvement was seen in the accuracy of pentavalent vaccine (PT1) data (p = 0.012) at the health posts (HP), and of PT3 (p = 0.010) and measles (p = 0.020) data at the health centers (HC). Besides, a highly significant improvement was observed in the accuracy of tetanus toxoid (TT2) data at the HP (p < 0.001). The level of over- or under-reporting was found to be < 8% at the HP and < 10% at the HC for PT3. Data completeness also increased from 72.09% to 88.89% at the HC. Nearly 74% of the health facilities reported their immunization data on time, which is much better than the baseline (7.1%) (p < 0.001). These findings may provide some hints for policies and programs targeting improvement of immunization data quality in pastoralist communities.
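
For context, the accuracy component of the WHO DQS compares doses recounted from source documents with the doses reported upward; a verification factor near 100% indicates accurate reporting. The tiny sketch below uses made-up tallies, not the study's data.

```python
# Verification-factor style accuracy check: doses recounted from tally sheets
# divided by doses reported upward, per antigen. Figures are made up.
recounted = {"PT1": 240, "PT3": 225, "Measles": 210, "TT2": 180}
reported  = {"PT1": 250, "PT3": 250, "Measles": 200, "TT2": 220}

for antigen in recounted:
    vf = 100.0 * recounted[antigen] / reported[antigen]
    status = ("over-reporting" if vf < 100
              else "under-reporting" if vf > 100 else "exact")
    print(f"{antigen}: verification factor = {vf:.1f}% ({status})")
```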

Keywords: data quality, immunization, verification factor, pastoralist region

Procedia PDF Downloads 110
25864 Assessment of the Role of Plasmid in Multidrug Resistance in Extended Spectrum β-Lactamase Producing Escherichia Coli Stool Isolates from Diarrhoeal Patients in Kano Metropolis, Nigeria

Authors: Abdullahi Musa, Yakubu Kukure Enebe Ibrahim, Adeshina Gujumbola

Abstract:

The emergence of multidrug resistance in clinical Escherichia coli has been associated with plasmid-mediated genes. DNA transfer among bacteria is critical to the dissemination of resistance, and plasmids have proved to be ideal vehicles for the dissemination of resistance genes; plasmids coding for antibiotic resistance have long been recognized by researchers globally. The study aimed at determining the antibiotic susceptibility pattern of ESBL-producing E. coli isolates considered to be multidrug resistant, using the disc diffusion method. Susceptibility testing of the isolates was carried out using the disc diffusion method. The results showed that the majority of multidrug resistance among clinical isolates of ESBL E. coli was a result of the acquisition of plasmids carrying antibiotic-resistance genes. The production of these ESBL enzymes, whose genes are normally carried on plasmids and transferred from one bacterium to another, has greatly contributed to the rapid spread of antibiotic resistance amongst E. coli isolates, which leads to a high economic burden, increased morbidity and mortality, complications in therapy and limited treatment options. To curtail these problems, it is important to check the rate at which drugs are sold over the counter and antibiotics are misused in animal feeds. This will play a very important role in minimizing the spread of resistant bacterial strains in our environment.

Keywords: Escherichia coli, plasmid, multidrug resistance, ESBL, pan drug resistance

Procedia PDF Downloads 63
25863 Thermal Management of Ground Heat Exchangers Applied in High Power LED

Authors: Yuan-Ching Chiang, Chien-Yeh Hsu, Chen Chih-Hao, Sih-Li Chen

Abstract:

The p-n junction temperature of LEDs directly influences their operating life and luminous efficiency. An excessively high p-n junction temperature reduces the output flux of LEDs, decreasing their brightness and shifting the photon wavelength; consequently, the operating life of LEDs decreases and their luminous output changes. The maximum limit of the p-n junction temperature of LEDs is approximately 120 °C. The purpose of this research was to devise an approach for dissipating the heat generated in a confined space so that LEDs operate at low temperatures, thereby reducing light decay. The cooling of existing commercial LED lights can be divided into natural-convection and forced-convection cooling. In natural convection cooling, the volume of the LED encapsulant must be increased by adding more fins to increase the cooling area. However, this causes difficulties in achieving efficient LED lighting at high power. Compared with forced air convection cooling, heat transfer through water convection is associated with a higher heat transfer coefficient per unit area; therefore, we dissipated heat by using a closed-loop water cooling system. Nevertheless, cooling water exposed to the air can easily be influenced by environmental factors. Thus, we incorporated a ground heat exchanger into the water cooling system to minimize the influence of the air on the cooling water, and then observed the relationship between the amount of heat dissipated through the ground and LED efficiency.
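
As a back-of-the-envelope check of why lowering the coolant temperature helps, the steady-state junction temperature follows Tj = T_coolant + P × (sum of thermal resistances from junction to coolant); the thermal resistances and LED power below are assumed values for illustration, not measurements from this study.

```python
# Back-of-the-envelope junction temperature estimate for a high-power LED:
# Tj = T_coolant + P * (sum of thermal resistances). Resistance values and the
# LED power are assumed for illustration, not data from this work.
def junction_temperature(t_coolant_c, power_w, resistances_k_per_w):
    return t_coolant_c + power_w * sum(resistances_k_per_w)

p_led = 50.0                       # dissipated power [W]
r_chain = [0.8, 0.5, 0.4]          # junction-case, case-sink, sink-coolant [K/W]

for t_cool in (25.0, 35.0, 45.0):  # e.g. ground-loop water temperatures
    tj = junction_temperature(t_cool, p_led, r_chain)
    print(f"coolant {t_cool:.0f} C -> junction {tj:.0f} C "
          f"({'OK' if tj < 120 else 'exceeds 120 C limit'})")
```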

Keywords: helical ground heat exchanger, high power LED, ground source cooling system, heat dissipation

Procedia PDF Downloads 575