Search results for: stochastic approximation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 957

57 Microbiological Profile of UTI along with Their Antibiotic Sensitivity Pattern with Special Reference to Nitrofurantoin

Authors: Rupinder Bakshi, Geeta Walia, Anita Gupta

Abstract:

Introduction: Urinary tract infections (UTI) are considered to be one of the most common bacterial infections, with an estimated annual global incidence of 150 million. Antimicrobial drug resistance is one of the major threats due to the widespread use of uncontrolled antibiotics. Materials and Methods: A total of 9149 urine samples were collected from R.H. Patiala and processed in the Department of Microbiology, G.M.C. Patiala. Urine samples were inoculated on MacConkey and blood agar plates using a calibrated loop delivering 0.001 ml of sample and incubated at 37 °C for 24 hrs. The organisms were identified by colony characters, Gram staining and biochemical reactions. Antimicrobial susceptibility of the isolates was determined against various antimicrobial agents (Hi-Media, Mumbai, India) by the Kirby-Bauer disk diffusion method on Mueller-Hinton agar plates. Results: Most patients were in the age group of 21-30 yrs, followed by 31-40 yrs. Males (34%) were less prone to urinary tract infections than females (66%). Out of 9149 urine samples, the culture was positive in 25% (2290) of samples. Esch. coli was the most common isolate at 60.3% (n = 1378), followed by Klebsiella pneumoniae 13.5% (n = 310), Proteus spp. 9% (n = 209), Staphylococcus aureus 7.6% (n = 173), Pseudomonas aeruginosa 3.7% (n = 84), Citrobacter spp. 3.1% (n = 70), Staphylococcus saprophyticus 1.8% (n = 142), Enterococcus faecalis 0.8% (n = 19) and Acinetobacter spp. 0.2% (n = 5). Gram-negative isolates showed higher sensitivity towards Piperacillin + Tazobactam (67%), Amikacin (80%), Nitrofurantoin (82%), Aztreonam (100%), Imipenem (100%) and Meropenem (100%), while Gram-positive isolates showed a good response towards Netilmicin (69%), Nitrofurantoin (79%), Linezolid (98%), Vancomycin (100%) and Teicoplanin (100%). 465 (23%) isolates were resistant to penicillins and 1st- and 2nd-generation cephalosporins; these were further tested by the double disk approximation test and the combined disk method for ESBL production. Out of 465 isolates, 375 were ESBL producers, consisting of 264 (70.6%) Esch. coli and 111 (29.4%) Klebsiella pneumoniae. Susceptibility of ESBL producers to Imipenem, Nitrofurantoin and Amikacin was found to be 100%, 76%, and 75% respectively. Conclusion: Uropathogens are increasingly showing resistance to many antibiotics, making empiric management of outpatient UTIs challenging. Ampicillin, Cotrimoxazole, and Ciprofloxacin should not be used in empiric treatment. Nitrofurantoin could be used in lower urinary tract infection. Knowledge of uropathogens and their antimicrobial susceptibility pattern in a geographical region will help in appropriate and judicious antibiotic usage in a health care setup.

Keywords: Urinary Tract Infection, UTI, antibiotic susceptibility pattern, ESBL

Procedia PDF Downloads 344
56 Consumption of Animal and Vegetable Protein on Muscle Power in Road Cyclists from 18 to 20 Years in Bogota, Colombia

Authors: Oscar Rubiano, Oscar Ortiz, Natalia Morales, Lida Alfonso, Johana Alvarado, Adriana Gutierrez, Daniel Botero

Abstract:

Athletes who usually use protein supplements are those who practice strength and power sports, whose goal is to achieve a large muscle mass. However, supplementation has also been explored in endurance sports or activities such as cycling, where, despite the high power required, prominent muscle development can impede good competitive performance because body mass is a determinant of the athlete's performance. Previous research has established a relationship between protein supplementation and muscle mass, and to a lesser extent between protein types and muscle power. Thus, we intend to explore, as a first approximation, the behavior of muscle power in the lower limbs after the intake of two protein supplements from different sources. The aim of the study was to describe the behavior of muscle power in the lower limbs after the consumption of animal protein (AP) and vegetable protein (VP) in four road cyclists from 18 to 20 years of the Bogota cycling league. The methodological design of this study is quantitative, with non-probabilistic sampling, based on a pre-experimental model. Jumping power was evaluated before and after the intervention by means of the squat jump (SJ), countermovement jump (CMJ) and Abalakov (AB) tests. Cyclists consumed a drink with whey protein or a soy isolate after training four times a week for three months. The amount of protein for each cyclist was calculated according to body weight (0.5 g/kg of muscle mass). The results show that subjects who consumed VP improved muscle power and landing force. In contrast, power and landing force decreased for subjects who consumed AP. For the group that consumed VP, the increase was positive at 164.26 watts, 135.70 watts and 33.96 watts for the AB, SJ and CMJ jumps respectively, while for AP the differences of the medians were negative at -32.29 watts, -82.79 watts and -143.86 watts for the AB, SJ and CMJ jumps respectively. In the AB jump, the differences of the medians were positive for both VP (121.61 newtons) and AP (454.34 newtons); however, the difference was greater for AP. For the SJ jump, the difference for AP was 371.52 newtons, while for VP the difference was negative at -448.56 newtons, so the difference was greater for AP. In the CMJ jump, the differences of the medians were negative for both AP and VP, being -7.05 for AP and -958.2 for VP, so the difference was greater for AP. The conclusion of this study is that whey protein supplementation showed no improvement in muscle power in the lower limbs of the cyclists studied, which could suggest that whey protein does not have a beneficial effect on performance in terms of power, nor did it show an impact on body composition. In contrast, supplementation with soy isolate showed positive effects on muscle power and body composition.

Keywords: animal protein (AP), muscle power, supplements, vegetable protein (VP)

Procedia PDF Downloads 177
55 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, the accident may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of people saved and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation when the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during the search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered (‘overlooked’) by the AMR’s sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operational research problem in comparison with the traditional Kadane-De Groot-Stone search models is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and then searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
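
A minimal sketch of the greedy step described above, assuming a simple Bayesian bookkeeping of unsuccessful looks: the effectiveness of each cell is taken as its posterior target probability times the detection probability per unit search cost, and the posterior is downgraded after each miss. All names, rates, and costs below are illustrative placeholders, not the paper's model.

```python
import numpy as np

def greedy_search_route(prior, p_detect, cost, budget):
    """Greedy route sketch: repeatedly search the cell with the highest
    effectiveness value (posterior target probability x detection probability
    per unit cost), updating the posterior after each unsuccessful look so
    that the search history influences the next choice."""
    posterior = prior.copy()
    route, spent = [], 0.0
    while spent + cost.min() <= budget:
        effectiveness = posterior * p_detect / cost      # value of one more look
        k = int(np.argmax(effectiveness))
        if spent + cost[k] > budget:
            break                                        # best cell no longer affordable
        route.append(k)
        spent += cost[k]
        # Bayesian update after an unsuccessful (possibly false-negative) look at cell k.
        posterior[k] *= 1.0 - p_detect[k]
        posterior /= posterior.sum()
    return route

# Toy zone with 6 candidate locations.
prior = np.array([0.30, 0.25, 0.20, 0.10, 0.10, 0.05])
p_detect = np.full(6, 0.8)                 # 1 - false-negative rate
cost = np.array([1.0, 1.0, 2.0, 1.5, 1.0, 1.0])
print(greedy_search_route(prior, p_detect, cost, budget=6.0))
```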

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 172
54 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison

Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: The increased number of performed 'Computed Tomography (CT)' examinations raises public concerns regarding the associated stochastic risk to patients. In its Publication 102, the ‘International Commission on Radiological Protection (ICRP)’ emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named "VirtualDose". Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists in simulating: (1) the scanner machine with all its technical specifications and associated irradiation cases (kVp, field collimation, mAs, pitch ...) and (2) detailed geometric and compositional information of dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. Simulated data were verified by clinical measurements. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the “Head” protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (at least 80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, there are three organs situated at the edges of the scan range that show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing Computed Tomography exams.
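
A minimal sketch of the comparison metric described above: per-organ percentage dose differences restricted to organs receiving a significant dose (at least 80% of the maximum dose for the protocol). The organ names and dose values are hypothetical placeholders, not the study's data.

```python
# Hypothetical per-organ doses (mGy) from the two software packages for one protocol.
doses_a = {"lungs": 12.1, "breast": 11.0, "thyroid": 2.3, "liver": 10.4}
doses_b = {"lungs": 13.5, "breast": 10.2, "thyroid": 3.0, "liver": 11.9}

threshold = 0.8 * max(doses_a.values())    # keep organs receiving >= 80% of the max dose

for organ, da in doses_a.items():
    if da < threshold:
        continue                            # organ not significantly exposed for this protocol
    db = doses_b[organ]
    diff = 100.0 * abs(da - db) / da        # percentage difference relative to software A
    print(f"{organ}: {diff:.1f} %")
```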

Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison

Procedia PDF Downloads 162
53 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data to help organizations and companies take intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained, and to provide energy-efficient communication they go to sleep and wake up periodically and aperiodically, depending on the traffic load, to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of the device from the sink node becomes greater than required, the connection is lost. After such disconnections, other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces bad quality of data. Due to this dynamic nature of IoT devices, we do not know the actual reason for abnormal data. If data are of poor quality, decisions are likely to be unsound. It is highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers tried to estimate data quality and provided several Machine Learning (ML), stochastic and statistical methods to perform analysis on stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how they impact data quality. In this research, a comprehensive review on determining the impact of the dynamic nature of IoT devices on data quality is carried out, and a data quality model that can deal with this challenge and produce good quality of data is presented. This research presents the data quality model for sensors monitoring water quality. DBSCAN clustering and weather sensors are used to build the data quality model for the sensors monitoring water quality. An extensive study has been done on finding the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. This model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness and the patterns of missing values, and checks the accuracy of the data with the help of cluster positions. At the end, a statistical analysis is performed on the clusters formed as the result of DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
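
A minimal sketch, under assumed feature names and parameters, of the DBSCAN step described above: noise points are flagged as outliers, and consistency is checked through the coefficient of variation (CoV) computed per cluster.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical joint readings: [air_temp, water_temp, turbidity] from weather
# and water-quality sensors (illustrative values only).
X = np.array([
    [21.0, 18.2, 3.1], [21.5, 18.4, 3.0], [22.0, 18.9, 3.3],
    [21.8, 18.6, 3.2], [35.0, 30.0, 9.9],              # likely faulty reading
    [20.9, 18.1, 3.0], [21.2, 18.3, 3.1],
])

labels = DBSCAN(eps=0.8, min_samples=3).fit_predict(StandardScaler().fit_transform(X))

outliers = X[labels == -1]                  # DBSCAN noise points -> outlier dimension
print("flagged outliers:", len(outliers))

for k in set(labels) - {-1}:                # consistency via coefficient of variation per cluster
    cluster = X[labels == k]
    cov = cluster.std(axis=0) / cluster.mean(axis=0)
    print(f"cluster {k}: CoV per variable = {np.round(cov, 3)}")
```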

Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)

Procedia PDF Downloads 139
52 Controlling Deforestation in the Densely Populated Region of Central Java Province, Banjarnegara District, Indonesia

Authors: Guntur Bagus Pamungkas

Abstract:

As part of a tropical region normally rich in forest land, Indonesia has always been in the world's spotlight due to its significantly increasing deforestation. On the one hand, its forests are a mainstay for maintaining the sustainability of the earth's ecosystem functions. On the other hand, they also hold various potential resources for the global economy and can therefore always be the target of investors of different scales seeking to exploit them excessively. No wonder disasters of various kinds keep emerging. In fact, the deforestation phenomenon does not only occur in forest land areas on the main islands of Indonesia but also includes Java Island, one of the most densely populated areas in the world. This island retains only about 9.8% of the total forest land in Indonesia due to its long history of deforestation, especially in Central Java Province, the most densely populated area in Java. Again, not surprisingly, this province is among the areas with the highest frequency of disasters caused by it, landslides in particular. One of the areas that often experience them is Banjarnegara District, especially in mountainous areas that lie in the range from 1000 to 3000 meters above sea level, where remaining forest land can still easily be found. Some of it even retains nearly untouched tropical rain forest, whose area also covers part of a neighboring district, Pekalongan, and which is considered one of the world's remaining little paradises on Earth. The district's landscape is indeed beautiful, especially in the Dieng area, a major tourist destination in Central Java Province after Borobudur Temple. However, landslide hazards threaten this district every year; a few decades ago, a tragic event even buried a settlement together with its inhabitants. This research aims to contribute to the concept of effective forest management by monitoring the remaining forest areas in this district. The research monitors deforestation rates using the Stochastic Cellular Automata-Markov Chain (SCA-MC) method, which provides a spatial simulation of land use and cover changes (LULCC). This geospatial process uses the Landsat-8 OLI image product with Thermal Infra-Red Sensors (TIRS) Band 10 for 2020 and Landsat 5 TM with thermal Band 6 for 2010. It is then integrated with physical and social geography issues using the QGIS 2.18.11 application with the MOLUSCE plugin, which serves to classify and calculate the area of land use and cover, especially in forest areas. Using the LULCC method, the rate of forest area reduction in Banjarnegara District over 2010-2020 is calculated. Since the dependence of this area on the use of forest land is quite high, concepts and preventive actions are needed, such as rehabilitation and reforestation of critical lands, through proper monitoring and targeted forest management to restore its ecosystem in the future.
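
A minimal sketch of the LULCC bookkeeping implied above, assuming two already-classified rasters for 2010 and 2020: cross-tabulating the classes gives the transition matrix, from which an annual rate of forest loss follows. The class codes and tiny arrays are placeholders, not the study's Landsat data.

```python
import numpy as np

FOREST, AGRICULTURE, BUILTUP = 0, 1, 2       # illustrative class codes

# Tiny stand-ins for the classified 2010 and 2020 rasters (same grid).
lc_2010 = np.array([[0, 0, 1], [0, 0, 2], [0, 1, 1]])
lc_2020 = np.array([[0, 1, 1], [0, 2, 2], [1, 1, 1]])

n_classes = 3
transitions = np.zeros((n_classes, n_classes), dtype=int)
for a, b in zip(lc_2010.ravel(), lc_2020.ravel()):
    transitions[a, b] += 1                    # cross-tabulation: from class a (2010) to b (2020)

forest_2010 = (lc_2010 == FOREST).sum()
forest_2020 = (lc_2020 == FOREST).sum()
annual_loss = (forest_2010 - forest_2020) / forest_2010 / 10.0   # 2010-2020 span

print("transition matrix (pixels):\n", transitions)
print(f"annual deforestation rate: {100 * annual_loss:.1f} % per year")
```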

Keywords: deforestation, populous area, LULCC method, proper control and effective forest management

Procedia PDF Downloads 135
51 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines

Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky

Abstract:

Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy is lost in the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) in parallel with the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work from the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network extends over 9,409 km, while the system has 16 national and 3 international natural gas processing plants and more than 143 delivery points to final consumers. Thus, the potential for installing PRTs in Brazil is 66 MW of power, which could avoid the emission of 235,800 tons of CO2 per year and generate 333 GWh/year of electricity. On the other hand, the economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, obtaining a deterministic set of financial indicators associated with the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return. This paper presents an approach applied to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained, yielding a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the financial feasibility of potential PRT projects in Brazil. This methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
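
A minimal sketch of the Monte Carlo step described above, assuming illustrative distributions for the uncertain inputs (energy generated by the PRT, energy price, O&M cost) and a fixed investment cost: the simulated cash flows yield an empirical NPV distribution and a downside-risk indicator. None of the figures below are the Cuiabá plant data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, years, rate, capex = 10_000, 15, 0.10, 1.2e6    # illustrative figures

# Sample the uncertain inputs once per simulated scenario.
energy_mwh = rng.normal(5_000, 600, n_sims)             # yearly energy recovered by the PRT
price = rng.lognormal(np.log(60.0), 0.20, n_sims)        # energy price (USD/MWh)
opex = rng.normal(80_000, 10_000, n_sims)                 # yearly operation and maintenance cost

annual_cash = energy_mwh * price - opex
annuity = (1 - (1 + rate) ** -years) / rate               # sum of discount factors over the horizon
npv = annual_cash * annuity - capex

print(f"mean NPV: {npv.mean():,.0f} USD")
print(f"5th percentile NPV: {np.percentile(npv, 5):,.0f} USD")
print(f"probability of a negative NPV: {100 * (npv < 0).mean():.1f} %")
```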

Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, Monte Carlo methods

Procedia PDF Downloads 113
50 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used in a similar manner and for the same purpose. Therefore, the utility of the product is an important issue for the preferability of the brand. This utility could be measured in terms of its functionality, durability, and reliability, all of which are affected by the system capabilities. Reliability is an important system design criterion for manufacturers to be able to achieve high availability. Availability is the probability that a system (or a component) is operating properly, performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate needed for the company to realize its production plan. When considering only the corrective maintenance downtime of the system, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners working on a system, since they guide the adoption of suitable maintenance strategies to improve system performance. The failure and repair time probability distributions of each component in the system should be known for a conventional availability analysis. However, companies generally do not have statistics or quality control departments to store such a large amount of data, and real events or situations are described deterministically instead of with the stochastic data needed for a complete description of real systems. Fuzzy set theory is an alternative used to analyze the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to compute system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy to apply in any repairable production system by practitioners working in industry, and it enables reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey. The study focuses on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures would be very useful for system managers and practitioners to analyze system qualifications and find better results for their working conditions. Thus, much more detailed information about the system characteristics is obtained.
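
A minimal sketch of the proposed representation, assuming illustrative MTBF/MTTR values and a 20% spread: MTBF and MTTR are encoded as triangular fuzzy numbers and the availability A = MTBF/(MTBF + MTTR) is evaluated by interval arithmetic on alpha-cuts.

```python
def tfn(center, spread):
    """Triangular fuzzy number (a, m, b) built from a crisp value and a relative spread."""
    return (center * (1 - spread), center, center * (1 + spread))

def alpha_cut(t, alpha):
    a, m, b = t
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_availability(mtbf, mttr, alphas=(0.0, 0.5, 1.0)):
    """Interval arithmetic on each alpha-cut of A = MTBF / (MTBF + MTTR).
    A increases with MTBF and decreases with MTTR, so the interval ends pair
    the low MTBF with the high MTTR and vice versa."""
    out = {}
    for alpha in alphas:
        (bl, bu), (rl, ru) = alpha_cut(mtbf, alpha), alpha_cut(mttr, alpha)
        out[alpha] = (bl / (bl + ru), bu / (bu + rl))
    return out

mtbf = tfn(120.0, 0.20)      # hours, 20% spread (illustrative)
mttr = tfn(8.0, 0.20)
for alpha, (lo, hi) in fuzzy_availability(mtbf, mttr).items():
    print(f"alpha = {alpha:.1f}: availability in [{lo:.4f}, {hi:.4f}]")
```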

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 224
49 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European States. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of dealing with the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting the predefined profiles, and the design results are affected positively or negatively without any warning. Data such as occupancy schedules, internal loads and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized and conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error that is committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the presented one, where there is no regulation system for the HVAC system and thus the occupants cannot interact with it. More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas demand. Then the different consumption entries are analyzed and, for the most interesting cases, the calibration indexes are also compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 141
48 Patterns of TV Simultaneous Interpreting of Emotive Overtones in Trump’s Victory Speech from English into Arabic

Authors: Hanan Al-Jabri

Abstract:

Simultaneous interpreting is deemed by many scholars to be the most challenging mode of interpreting. The special constraints involved in this task, including time constraints, different linguistic systems, and stress, pose a great challenge to most interpreters. These constraints are likely to be maximised when the interpreting task is done live on TV. The TV interpreter is exposed to a wide variety of audiences with different backgrounds and needs and is mostly asked to interpret high-profile tasks, which raises his/her level of stress and further complicates the task. Under these constraints, which require fast and efficient performance, TV interpreters of four TV channels were asked to render Trump's victory speech into Arabic. They also had to deal with the burden of rendering the English emotive overtones employed by the speaker into a wholly different linguistic system. The current study aims at investigating the way TV interpreters, who worked in the simultaneous mode, handled this task; it aims at exploring and evaluating the TV interpreters’ linguistic choices and whether the original emotive effect was maintained, upgraded, downgraded or abandoned in their renditions. It also aims at exploring the possible difficulties and challenges that emerged during this process and might have influenced the interpreters’ linguistic choices. To achieve its aims, the study analysed Trump’s victory speech delivered on November 6, 2016, along with four Arabic simultaneous interpretations produced by four TV channels: Al-Jazeera, RT, CBC News, and France 24. The analysis relied on two frameworks: a macro and a micro framework. The former presents an overview of the wider context of the English speech as well as an overview of the speaker and his political background, to help understand the linguistic choices he made in the speech; the latter investigates the linguistic tools employed by the speaker to stir people’s emotions. These tools were investigated based on Shamaa’s (1978) classification of emotive meaning according to linguistic level: the phonological, morphological, syntactic, and semantic and lexical levels. Moreover, this framework investigates the patterns of rendition detected in the Arabic deliveries. The results of the study identified different rendition patterns in the Arabic deliveries, including parallel rendition, approximation, condensation, elaboration, transformation, expansion, generalisation, explicitation, paraphrase, and omission. The emerging patterns, as suggested by the analysis, were influenced by factors such as the speedy and continuous delivery of some stretches and highly dense segments, among other factors. The study aims to contribute to a better understanding of TV simultaneous interpreting between English and Arabic, as well as of the practices of TV interpreters when rendering emotiveness, especially since little is known about interpreting practices in the field of TV, particularly between Arabic and English.

Keywords: emotive overtones, interpreting strategies, political speeches, TV interpreting

Procedia PDF Downloads 159
47 Experimental and Numerical Investigation of Fracture Behavior of Foamed Concrete Based on Three-Point Bending Test of Beams with Initial Notch

Authors: M. Kozłowski, M. Kadela

Abstract:

Foamed concrete is known for its low self-weight and excellent thermal and acoustic properties. For many years, it has been used worldwide for insulation of foundations and roof tiles, as backfill to retaining walls, for sound insulation, etc. In recent years, however, it has also become a promising material for structural purposes, e.g. for the stabilization of weak soils. Owing to the favorable properties of foamed concrete, many studies have analyzed its strength and its mechanical, thermal and acoustic properties. However, these studies do not cover the investigation of fracture energy, which is the core factor governing the damage and fracture mechanisms, and only a limited number of publications can be found in the literature. The paper presents the results of an experimental investigation and a numerical campaign on foamed concrete based on three-point bending tests of beams with an initial notch. The first part of the paper presents the results of a series of static loading tests performed to investigate the fracture properties of foamed concrete of varying density. Beam specimens with dimensions of 100×100×840 mm with a central notch were tested in three-point bending. Subsequently, the remaining halves of the specimens, with dimensions of 100×100×420 mm, were tested again as un-notched beams in the same set-up with a reduced distance between supports. The tests were performed in a hydraulic, displacement-controlled testing machine with a load capacity of 5 kN. Apart from the loading and mid-span displacement, the crack mouth opening displacement (CMOD) was monitored. Based on the load-displacement curves of the notched beams, the values of fracture energy and tensile stress at failure were calculated. The flexural tensile strength was obtained on un-notched beams with dimensions of 100×100×420 mm. Moreover, cube specimens of 150×150×150 mm were tested in compression to determine the compressive strength. The second part of the paper deals with the numerical investigation of the fracture behavior of the beams with an initial notch presented in the first part. The Extended Finite Element Method (XFEM) was used to simulate and analyze the damage and fracture process, and the influence of meshing and of the variation of mechanical properties on the results was investigated. The numerical models correctly simulate the behavior of the beams observed during three-point bending. The numerical results show that XFEM can be used to simulate different fracture toughnesses of foamed concrete and different fracture types. Using XFEM and computer simulation technology allows for a reliable approximation of the load-bearing capacity and damage mechanisms of beams made of foamed concrete, which provides a foundation for realistic structural applications.
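
A minimal sketch of the post-processing implied by the test description, assuming a placeholder load-deflection record and neglecting the self-weight correction: the work of fracture is the area under the curve, and dividing by the ligament area of the notched cross-section gives an estimate of the fracture energy. The notch depth and all values are assumptions, not the measured data.

```python
import numpy as np

# Placeholder load-deflection record of a notched beam (not the measured curves).
deflection_mm = np.array([0.0, 0.05, 0.10, 0.15, 0.20, 0.30, 0.45, 0.60])
load_N = np.array([0.0, 180.0, 260.0, 230.0, 170.0, 95.0, 35.0, 0.0])

# Work of fracture: area under the load-deflection curve (trapezoidal rule), in N*mm.
work = np.sum(0.5 * (load_N[1:] + load_N[:-1]) * np.diff(deflection_mm))

width, depth, notch = 100.0, 100.0, 30.0                 # mm; notch depth is an assumption
ligament_area = width * (depth - notch)                  # mm^2 of un-notched cross-section

G_F = work / ligament_area                               # N/mm
print(f"estimated fracture energy G_F = {1000.0 * G_F:.1f} J/m^2")
```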

Keywords: foamed concrete, fracture energy, three-point bending, XFEM

Procedia PDF Downloads 300
46 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create Deep Learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population’s fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is regarded as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system’s current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
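
A simplified single-cluster sketch of the corrector described above (centering, Kaiser-rule regularization, whitening, and one separating hyperplane between correct and erroneous measurements). This is an interpretation under stated assumptions with synthetic data, not the authors' implementation.

```python
import numpy as np

def fit_corrector(S, Y):
    """Centre the measurements, keep principal components passing the Kaiser
    rule, whiten, then build one separating hyperplane whose normal points
    from the correct samples towards the erroneous ones."""
    mean = S.mean(axis=0)
    Xc = S - mean
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    keep = eigval > eigval.mean()                  # Kaiser rule on the eigen-spectrum
    W = eigvec[:, keep] / np.sqrt(eigval[keep])    # projection + whitening
    Z, Zy = Xc @ W, (Y - mean) @ W
    w = Zy.mean(axis=0) - Z.mean(axis=0)           # direction towards the error cloud
    w /= np.linalg.norm(w)
    threshold = 0.5 * (Z @ w).max() + 0.5 * (Zy @ w).min()   # midpoint between the classes
    return mean, W, w, threshold

def flag_error(x, corrector):
    mean, W, w, threshold = corrector
    return float(((x - mean) @ W) @ w) > threshold            # outside the trusted region

rng = np.random.default_rng(0)
S = rng.normal(0, 1, (500, 10))                # measurements from the legacy network
Y = S[:40] + rng.normal(2.5, 0.3, (40, 10))    # samples known to be misclassified
corrector = fit_corrector(S, Y)
print(flag_error(Y[0], corrector), flag_error(S[200], corrector))
```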

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 100
45 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are built using regression analysis. The development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria of the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
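
A minimal sketch of the proposed variable ranking, assuming synthetic stand-ins for the recordings: scikit-learn's lasso_path traces the coefficients along the regularization path, and predictors are ranked by the penalty level at which they first enter the model. The toy coefficients and noise level are assumptions, not the NGA data.

```python
import numpy as np
from sklearn.linear_model import lasso_path
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 600                                             # synthetic stand-in for the recordings
magnitude = rng.uniform(4.0, 7.5, n)
rrup = rng.uniform(5.0, 200.0, n)                   # rupture distance (km)
vs30 = rng.uniform(180.0, 760.0, n)                 # site shear-wave velocity (m/s)
X = StandardScaler().fit_transform(np.column_stack([magnitude, np.log(rrup), np.log(vs30)]))
names = ["magnitude", "ln(Rrup)", "ln(Vs30)"]

# Toy "true" model for the log spectral acceleration: magnitude and distance matter most.
y = 1.2 * X[:, 0] - 1.6 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.4, n)

alphas, coefs, _ = lasso_path(X, y)                  # coefficients along the regularisation path
entry_alpha = []
for j, name in enumerate(names):
    nonzero = coefs[j] != 0
    entry_alpha.append(alphas[np.argmax(nonzero)] if nonzero.any() else 0.0)

# Predictors entering at a larger penalty are ranked as more important.
for name, a in sorted(zip(names, entry_alpha), key=lambda t: -t[1]):
    print(f"{name}: enters the model at alpha = {a:.3f}")
```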

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 330
44 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. In the case of cancerous cells, these take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a spectrum ranging from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. Regarding cellular and molecular processes in cancer, their study has also found valuable support in different simulation tools which, covering a spectrum as mentioned above, have allowed in silico experimentation on this phenomenon at the cellular and molecular levels. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie’s algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells and, in this way, proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells that surround a cancerous cell from being transformed.
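
A minimal sketch of Gillespie's direct method, the stochastic simulation algorithm mentioned above, applied to a toy birth-death model of a signalling molecule; the reactions, rates, and species are illustrative placeholders, not Cellulat's Wnt/β-catenin network.

```python
import numpy as np

def gillespie(x0, rates, stoich, propensity, t_end, seed=0):
    """Gillespie's direct method: draw the waiting time from an exponential
    with rate equal to the total propensity, then pick which reaction fires
    with probability proportional to its propensity."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    history = [(t, x.copy())]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 == 0:
            break                              # no reaction can fire any more
        t += rng.exponential(1.0 / a0)
        j = rng.choice(len(a), p=a / a0)
        x += stoich[j]
        history.append((t, x.copy()))
    return history

# Toy birth-death model for one molecule P: 0 -> P (rate k1), P -> 0 (rate k2*P).
stoich = np.array([[+1.0], [-1.0]])
propensity = lambda x, k: np.array([k[0], k[1] * x[0]])
trace = gillespie(x0=[10], rates=(2.0, 0.1), stoich=stoich, propensity=propensity, t_end=50.0)
print(f"{len(trace)} events simulated; final copy number = {trace[-1][1][0]:.0f}")
```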

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 249
43 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within the building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot be received in an indoor environment, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we presented a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
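
A minimal sketch of the fingerprint feature-extraction step using scikit-learn's t-SNE on hypothetical hybrid WLAN/LTE received-signal-strength fingerprints; the database size, dimensionality, and parameters are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)

# Hypothetical fingerprint database: 300 reference points x 40 RSS features
# (WLAN access points plus LTE cells), in dBm, with measurement noise.
n_points, n_aps = 300, 40
anchors = rng.uniform(-90, -40, (10, n_aps))                   # 10 distinct radio environments
fingerprints = anchors[rng.integers(0, 10, n_points)] + rng.normal(0, 2.0, (n_points, n_aps))

# Project the noisy high-dimensional fingerprints to a compact feature space.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(fingerprints)

print(embedding.shape)      # (300, 2): low-dimensional features for the localisation model
```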

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 42
42 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in the current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most Finite Element Analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack at the element level cannot be seen. The crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and methods of simulation, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with the modelling of the initiation and the propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. Therefore, a parameter study was carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (II) the crack initiation in dependence on the diameter and spacing of the transverse reinforcement bars. The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the Finite Element Analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g. the Covariance Matrix Decomposition Method. For all computations, a plastic constitutive law with softening was used to model the crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions are validated against experimental studies on R/C panels which were carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. Also, a recommendation for the parameters of the random field for realistically modelling the uncertainty of the tensile strength is given. The aim of this research was to show a method with which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be captured in Finite Element Analysis.
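
A minimal sketch of the Covariance Matrix Decomposition method mentioned above: an exponential correlation matrix over the integration points is Cholesky-factorized and multiplied by standard normal variates, and exponentiation yields a lognormal tensile-strength field. The mean, coefficient of variation, and correlation length are illustrative assumptions, not the study's values.

```python
import numpy as np

def gaussian_random_field(coords, mean, std, corr_length, seed=0):
    """Covariance Matrix Decomposition: C = L L^T (Cholesky), field = mean + std * (L z)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-d / corr_length)                               # exponential correlation model
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))    # small jitter for stability
    z = np.random.default_rng(seed).standard_normal(len(coords))
    return mean + std * (L @ z)

# Grid of integration points along a 4 m member (illustrative).
x = np.linspace(0.0, 4.0, 80)
coords = np.column_stack([x, np.zeros_like(x)])

# Lognormal tensile strength: mean 2.6 MPa, CoV 0.2, correlation length 0.5 m (assumed).
sigma_ln = np.sqrt(np.log(1 + 0.2**2))
mu_ln = np.log(2.6) - 0.5 * sigma_ln**2
f_t = np.exp(gaussian_random_field(coords, mu_ln, sigma_ln, corr_length=0.5))
print(f"simulated tensile strength: min {f_t.min():.2f} MPa, max {f_t.max():.2f} MPa")
```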

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 157
41 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within the building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot be received in an indoor environment, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we presented a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 47
40 Vortex Flows under Effects of Buoyant-Thermocapillary Convection

Authors: Malika Imoula, Rachid Saci, Renee Gatignol

Abstract:

A numerical investigation is carried out to analyze vortex flows in a free-surface cylinder driven by independently rotating and differentially heated boundaries. As a basic uncontrolled isothermal flow, we consider configurations which exhibit steady axisymmetric toroidal-type vortices occurring at the free surface, under given rates of uniform rotation of the bottom disk and for selected aspect ratios of the enclosure. In the isothermal case, we show that sidewall differential rotation constitutes an effective kinematic means of flow control: the reverse flow regions may be suppressed under very weak co-rotation rates, while an enhancement of the vortex patterns is observed under weak counter-rotation. In this latter case, however, high rates of counter-rotation considerably reduce the strength of the meridian flow and cause its confinement to a narrow layer on the bottom disk, while the remaining bulk flow is diffusion dominated and controlled by the sidewall rotation. The main control parameters in this case are the rotational Reynolds number, the cavity aspect ratio and the rotation rate ratio. The study then proceeds to consider the sensitivity of the vortex pattern, within the Boussinesq approximation, to a small temperature gradient set between the ambient fluid and a thin axial rod mounted on the cavity axis. Two additional parameters are introduced, namely the Richardson number Ri and the Marangoni number Ma (or the thermocapillary Reynolds number). Results revealed that reducing the rod length induces the formation of on-axis bubbles instead of toroidal structures. Besides, the stagnation characteristics are significantly altered under the combined effects of buoyant-thermocapillary convection. Buoyancy, induced under sufficiently high Ri, was shown to predominate over the thermocapillary motion, causing the enhancement (suppression) of breakdown when the rod is warmer (cooler) than the ambient fluid. However, over small ranges of Ri, the sensitivity of the flow to surface tension gradients was clearly evidenced, and results showed their full control over the occurrence and location of breakdown. In particular, the detailed timewise evolution of the flow indicated that weak thermocapillary motion was sufficient to prevent the formation of toroidal patterns. The latter detach from the surface and undergo considerable size reduction while moving towards the bulk flow before vanishing. Further calculations revealed that the pattern reappears with increasing time as a steady bubble type on the rod. However, in the absence of the central rod, and also in the case of small rod length l, the flow evolved into a steady state without any breakdown.

Keywords: buoyancy, cylinder, surface tension, toroidal vortex

Procedia PDF Downloads 359
39 Temporal and Spatio-Temporal Stability Analyses in Mixed Convection of a Viscoelastic Fluid in a Porous Medium

Authors: P. Naderi, M. N. Ouarzazi, S. C. Hirata, H. Ben Hamed, H. Beji

Abstract:

The stability of mixed convection in a Newtonian fluid medium heated from below and cooled from above, also known as the Poiseuille-Rayleigh-Bénard problem, has been extensively investigated in the past decades. To our knowledge, mixed convection in porous media has received much less attention in the published literature. The present paper extends the mixed convection problem in porous media to the case of a viscoelastic fluid flow, owing to its numerous environmental and industrial applications such as the extrusion of polymer fluids, solidification of liquid crystals, suspension solutions, and petroleum activities. Without a superimposed through-flow, the natural convection problem of a viscoelastic fluid in a saturated porous medium has already been treated, and the effects of the viscoelastic properties of the fluid on the linear and nonlinear dynamics of the thermoconvective instabilities have also been addressed in those studies. Depending on the elasticity of the fluid, the instability can take the form either of a Hopf bifurcation, giving rise to oscillatory structures in the strongly elastic regime, or of a stationary bifurcation in the weakly elastic regime. The objective of this work is to examine the influence of the main horizontal flow on the linear characteristics of these two types of instabilities. Under the Boussinesq approximation and Darcy's law extended to a viscoelastic fluid, a temporal stability approach shows that the conditions for the appearance of longitudinal rolls are identical to those found in the absence of through-flow. For general three-dimensional (3D) perturbations, a Squire transformation allows the complex frequencies associated with the 3D problem to be deduced from those obtained by solving the two-dimensional one. The numerical resolution of the eigenvalue problem shows that the through-flow has a destabilizing effect and selects a convective configuration organized in purely transversal rolls which oscillate in time and propagate in the direction of the main flow. In addition, by using the mathematical formalism of absolute and convective instabilities, we study the nature of unstable three-dimensional disturbances. It is shown that for a non-vanishing through-flow, general three-dimensional instabilities are convectively unstable, which means that in the absence of a continuous noise source these instabilities are advected out of the porous medium and no long-term pattern is observed. In contrast, purely transversal rolls may exhibit a transition to an absolute instability regime and therefore affect the porous medium everywhere, including in the absence of a noise source. The absolute instability threshold, the frequency, and the wave number associated with purely transversal rolls are determined as functions of the Péclet number and the viscoelastic parameters. Results are discussed and compared to those obtained from laboratory experiments in the case of Newtonian fluids.
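As a reminder of the general absolute/convective criterion invoked above (the standard Briggs-Bers formalism, not the specific dispersion relation derived in the paper), disturbances of the form exp[i(kx − ωt)] satisfy a dispersion relation D(k, ω; Pe, …) = 0, and the distinction rests on the saddle point k₀ of ω(k):

\[
\left.\frac{\partial \omega}{\partial k}\right|_{k_{0}} = 0, \qquad
\text{absolute instability if } \operatorname{Im}\,\omega(k_{0}) > 0, \qquad
\text{convective instability if } \max_{k\in\mathbb{R}} \operatorname{Im}\,\omega(k) > 0 \text{ and } \operatorname{Im}\,\omega(k_{0}) < 0 .
\]

The absolute instability thresholds reported for the transversal rolls correspond to the locus where Im ω(k₀) changes sign as the Péclet number and viscoelastic parameters vary.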

Keywords: instability, mixed convection, porous media, viscoelastic fluid

Procedia PDF Downloads 341
38 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of action and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. The results of the updating depend on the engineer's previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve those problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that deserves particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modeling of variables because: 1. Engineers already draw an estimation of the material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. Material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help in spreading the probabilistic reliability assessment of existing buildings in common engineering practice and in targeting the best intervention and further tests on the structure; CBR represents a technique which may help to achieve this.
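A hedged illustration of the Bayesian updating step described above, using a conjugate normal model for a material strength with known measurement variance. The prior values, test results, and noise level are hypothetical and serve only to show the mechanics of the update.

import numpy as np

# Prior from the design stage / engineering judgment: mean strength (MPa).
mu_prior, sd_prior = 30.0, 5.0
# In-situ test results on the existing structure (MPa), with an assumed noise sd.
tests = np.array([27.5, 29.0, 26.8, 28.2])
sd_noise = 3.0

n = len(tests)
prec_post = 1.0 / sd_prior**2 + n / sd_noise**2          # posterior precision
mu_post = (mu_prior / sd_prior**2 + tests.sum() / sd_noise**2) / prec_post
sd_post = prec_post**-0.5

print(f"posterior strength ~ N({mu_post:.2f}, {sd_post:.2f}^2) MPa")

In a CBR system, a stored case would pair a qualitative description of the material with such a posterior PDF, so that a new, untested structure could reuse the posterior of its most similar case instead of requiring destructive testing.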

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 337
37 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems composed of multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary-based stochastic search are driven through the application of both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered through the application of an evolutionary process as a design tool is the ability of the simulation to maintain variation amongst design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization to be favored as the simulation progresses. In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion for the development of an urban tissue composed of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the majority of the population deviates from the mean. A lower standard deviation indicates that most of the population is clustered around the mean and thus that there is limited variation within the population, while a higher standard deviation reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented will aim to clarify the extent to which the utilization of the standard deviation as a fitness criterion can be advantageous for generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
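A minimal sketch (not the authors' tool chain) of using the population's standard deviation as an additional fitness signal in an evolutionary loop. The "design metric" is a stand-in for any measured property of a superblock variant, and all numbers and the weighting are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=100.0, scale=15.0, size=50)   # one metric value per individual

def generation_score(metrics, target=120.0, diversity_weight=0.5):
    # Conventional objective: drive the population metric towards a target value.
    objective = -np.abs(metrics - target).mean()
    # Diversity term: the population's standard deviation, rewarded so that
    # exploration is not lost while the mean objective improves.
    diversity = np.std(metrics)
    return objective + diversity_weight * diversity

print(round(generation_score(population), 2))

Here the standard deviation acts as a population-level criterion that can be combined with the per-individual objectives during selection, which is the sense in which it is used generatively rather than purely analytically.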

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 133
36 Learning and Teaching Conditions for Students with Special Needs: Asset-Oriented Perspectives and Approaches

Authors: Dr. Luigi Iannacci

Abstract:

This research critically explores the current educational landscape with respect to special education and the dominant deficit/medical model discourses that continue to forward unresponsive, problematic approaches to teaching students with disabilities. Asset-oriented perspectives and social/critical models of disability are defined and explicated in order to offer alternatives to these dominant discourses. To that end, a framework that draws on Brian Cambourne's conditions of learning and applications of his work in relation to instruction conceptualizes learning conditions and their significance for students with special needs. Methodologically, the research is designed as Critical Narrative Inquiry (CNI). Critical incidents, interviews, documents, artefacts, etc. are drawn on and narratively constructed to explore how disability is presently configured in language, discourses, pedagogies, and interactions with students deemed disabled. These data were collected using ethnographic methods, namely participant-observer fieldwork carried out directly in classrooms. This narrative approach aims to make sense of complex classroom interactions and of ways of reconceptualizing approaches to students with special needs. CNI is situated in the critical paradigm and is primarily concerned with culture, language, and participation as issues of power in need of critique, with the intent of change in the direction of social justice. Research findings highlight the ways in which Cambourne's learning conditions, such as demonstration, approximation, engagement, responsibility, immersion, expectation, and employment (transfer, use), provide a clear understanding of what is central to, and constitutes, a responsive and inclusive instructional frame. Examples of what each of these conditions looks like in practice are therefore offered in order to concretely demonstrate the ways in which various pedagogical choices and questions can enable classroom spaces to be responsive to the assets and challenges students with special needs have and experience. These approaches are also illustrated through an exploration of multiliteracies theory and pedagogy and of what this research and approach allow educators to draw on, facilitate, and foster in terms of the ways in which students with special needs can make sense of and demonstrate their understanding of skills, content, and knowledge. The contextual information, theory, research, and instructional frame focused on throughout this inquiry ultimately demonstrate what inclusive classroom spaces and practice can look like. These perspectives and conceptualizations stand in stark contrast to dominant deficit-driven approaches that perpetuate pedagogically impoverished teaching focused on narrow, limited, and limiting understandings of special needs learners and their ways of knowing and acquiring/demonstrating knowledge.

Keywords: asset-oriented approach, social/critical model of disability, conditions for learning and teaching, students with special needs

Procedia PDF Downloads 68
35 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has a number of important applications in such areas as chemical technologies, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed using the two-temperature approach, in which at each point of the continuous medium we specify solid and gas phases with a Newtonian heat exchange between them. The construction of a computational scheme is based on the principles of the mixed finite element method with the use of a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term 'front propagation velocity' makes sense for settled motion, when certain analytical formulae linking velocity and equilibrium temperature hold. A numerical implementation of one such formula, leading to the stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process. In this way, the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a sort of breakdown occurs from a slow combustion front propagation to a rapid one. Approximate boundaries of such an interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows the semi-Lagrangian approach to the solution of the problem to be considered.
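In schematic form, a two-temperature homogenized system consistent with the description above can be written as follows; the exact coefficients and the placement of the heat-release term depend on the particular model variant used, so this is a sketch rather than the authors' equations.

\[
(\rho c)_{s}\,\frac{\partial T_{s}}{\partial t} = \nabla\!\cdot\!\left(\lambda_{s}\nabla T_{s}\right) + \alpha\left(T_{g}-T_{s}\right),
\]
\[
(\rho c)_{g}\left(\frac{\partial T_{g}}{\partial t} + \mathbf{u}\!\cdot\!\nabla T_{g}\right) = \nabla\!\cdot\!\left(\lambda_{g}\nabla T_{g}\right) - \alpha\left(T_{g}-T_{s}\right) + Q\,W(c,T_{g}),
\]
\[
\frac{\partial c}{\partial t} + \mathbf{u}\!\cdot\!\nabla c = \nabla\!\cdot\!\left(D\nabla c\right) - W(c,T_{g}),
\]

where T_s and T_g are the solid and gas temperatures, c is the relative concentration of the reacting component, alpha is the interfacial (Newtonian) heat-exchange coefficient, Q the heat release, and W an Arrhenius-type reaction rate.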

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 301
34 Numerical Investigation of the Boundary Conditions at Liquid-Liquid Interfaces in the Presence of Surfactants

Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji

Abstract:

Liquid-liquid interfacial flow is an important process with applications across many spheres. One such application is residual oil mobilization, where crude oil and low salinity water are emulsified due to lowered interfacial tension under conditions of low shear rates. The amphiphilic components (asphaltenes and resins) in crude oil are considered to assemble at the interface between the two immiscible liquids. To account for emulsification, drag, and snap-off suppression as the main effects of low salinity water, mobilization of residual oil is visualized as thickening and slip of the wetting phase at the brine/crude oil interface, which results in the squeezing and drag of the non-wetting phase towards the pressure sinks. Meanwhile, defining the boundary conditions for such a system can be very challenging, since the interfacial dynamics depend not only on interfacial tension but also on the flow rate. Hence, understanding the flow boundary condition at the brine/crude oil interface is an important step towards defining the influence of low salinity water composition on residual oil mobilization. This work presents a numerical evaluation of three slip boundary conditions that may apply at liquid-liquid interfaces. A mathematical model was developed to describe the evolution of a viscoelastic interfacial thin liquid film. The base model is developed by asymptotic expansion of the full Navier-Stokes equations for fluid motion due to gradients of surface tension. This model was upscaled to describe the dynamics of the film surface deformation. Subsequently, Jeffreys' model was integrated into the formulation to account for viscoelastic stress within a long-wave approximation of the Navier-Stokes equations. To study the fluid response to a prescribed disturbance, a linear stability analysis (LSA) was performed. The dispersion relation and the corresponding characteristic equation for the growth rate were obtained. Three boundary conditions (slip, 1; locking, -1; and no-slip, 0) were examined using the resulting characteristic equation. Also, the dynamics of the evolved interfacial thin liquid film were numerically evaluated by considering the influence of the boundary conditions. The linear stability analysis shows that the boundary conditions of such systems are greatly impacted by the presence of amphiphilic molecules when three different values of interfacial tension were tested. The results for the slip and locking conditions are consistent with the fundamental solution representation of the diffusion equation, where there is film decay. The interfacial films under both boundary conditions respond to exposure time in a similar manner, with an increasing growth rate that results in the formation of more droplets with time. In contrast, the no-slip boundary condition yielded unbounded growth and was not affected by interfacial tension.
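For readers unfamiliar with the normal-mode step of the linear stability analysis, a minimal statement is given below in generic notation; the symbol beta is only a label for the three interface conditions listed above, and the functional form of the growth rate comes from the paper's characteristic equation, which is not reproduced here.

\[
h(x,t) = h_{0} + \hat{h}\,e^{\,ikx + \omega t}, \qquad \omega = \omega(k;\beta), \qquad \beta \in \{\,1\ (\text{slip}),\ 0\ (\text{no-slip}),\ -1\ (\text{locking})\,\},
\]

with the film linearly unstable to a disturbance of wavenumber k whenever Re ω(k; β) > 0; the growth-rate behaviour compared in the study is obtained by evaluating ω over k for each β and each interfacial tension value.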

Keywords: boundary conditions, liquid-liquid interfaces, low salinity water, residual oil mobilization

Procedia PDF Downloads 129
33 Study of Potato Cyst Nematodes (Globodera rostochiensis, Globodera pallida) in Georgia

Authors: Ekatereine Abashidze, Nino Nazarashvili, Dali Gaganidze, Oleg Gorgadze, Mariam Aznarashvili, Eter Gvritishvili

Abstract:

Potato is one of the leading agricultural crops in Georgia. Georgia produces early and late potato varieties in almost all regions. The potato production area is about 25,000 ha, and the average yield is 20-25 t/ha. Among the plant pests that limit potato production and quality, the potato cyst nematodes (Globodera pallida (Stone) Behrens and Globodera rostochiensis (Wollenweber) Behrens) are harmful around the world. PCN are among the most difficult plant pests to control; cysts protected by a durable wall can survive for over 30 years. Control of PCN (G. pallida and G. rostochiensis) is regulated by Council Directive 2007/33/EC. There was no legislative regulation of these pests in Georgia before 2016. By Resolution #302 of July 1, 2016, developed within the action plan of the DCFTA (Deep and Comprehensive Free Trade Area), the Government of Georgia established control over potato cyst nematodes. The agreement on the approximation of legal acts to EU legislation concerns the approval of rules for PCN control and surveys of these pests. Taking the above into consideration, it is necessary to study PCN (G. pallida and G. rostochiensis) in the potato-growing areas of Georgia. The aim of this research is to conduct a survey of potato cyst nematodes (G. rostochiensis and G. pallida) in two geographically distinct regions of Georgia, Samtskhe-Javakheti and Svaneti, and to identify the species G. rostochiensis and G. pallida by morphological-morphometric and molecular methods. Soil samples were taken in each village, in a zig-zag pattern on the potato fields of the private sector, using the Metlitsky method. Samples were also taken from infested potato plant roots. Nematode cysts were extracted from the soil samples with a Fenwick can, according to standard EPPO methods. Cysts were measured under a stereoscopic microscope (Leica M50). Identification of the nematode species was carried out according to the morphological and morphometric characteristics of the cysts and larvae, using the appropriate EPPO protocols. For molecular identification, a multiplex PCR test was performed with the universal ITS5 primer and cyst nematode (G. pallida, G. rostochiensis) specific primers. To identify the species of potato cyst nematodes (PCN) in the two regions (Samtskhe-Javakheti and Svaneti), 200 samples were taken: 80 samples in the Samtskhe-Javakheti region and 120 in the Svaneti region. Cysts of Globodera spp. were revealed in 50 samples obtained from the Samtskhe-Javakheti region and in 80 samples from the Svaneti region. Morphological, morphometric, and molecular analysis of the two forms of PCN found in the investigated regions of Georgia shows that one form belongs to G. rostochiensis; the second form is a different Globodera species and is the subject of future research. Despite the different geographic locations, larvae and cysts of G. rostochiensis were found in both regions, while cysts and larvae of G. pallida were not detected. Acknowledgement: The research has been supported by the Shota Rustaveli National Scientific Foundation of Georgia: Project # FR17_235.

Keywords: cyst nematode, Globodera rostochiensis, Globodera pallida, morphologic-morphometric measurement

Procedia PDF Downloads 200
32 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy

Authors: Ye Zhang, Chuanjun Liu

Abstract:

In-service aircraft are exposed to different types of threats, e.g., bird strike, ground vehicle impact, runway debris, or even lightning strike. To satisfy aircraft damage tolerance design requirements, the designer has to understand the threat level for different types of aircraft structures, either metallic or composite. Exposure to low-velocity impacts may produce very serious internal damage, such as delaminations and matrix cracks, without leaving visible marks on the impacted surfaces of composite structures. This internal damage can cause a significant reduction in the load carrying capacity of structures. The semi-probabilistic method provides a practical and proper approximation for establishing the impact-threat based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of impact threat and the realistic impact energy cut-off levels are essential prerequisites for the certification of aircraft composite structures. A new survey of impact threat to in-service civil aircraft has recently been carried out based on field records covering around 500 civil aircraft (mainly single aisles) and more than 4.8 million flight hours. In total, 1,006 damages caused by low-velocity impact events were screened out from more than 8,000 records, including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location on the aircraft structure and on the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of general civil aircraft, since the service environments and the maintenance operations are independent of the materials of the structures. The probability of impact damage occurrence (Po) and of impact energy exceedance (Pe) are the two key parameters for describing the statistical distribution of impact threat. From the impact damage events in the survey, Po can be estimated as 2.1 × 10⁻⁴ per flight hour. For the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to back-estimate the impact energy based on the visible damage characteristics. The relationship between the visible dent depth and impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to follow a log-linear relationship versus the impact energy. For the product of the two aforementioned probabilities, it is reasonable and conservative to assume Pa = Po × Pe = 10⁻⁵, which indicates that low-velocity impact events are about as likely as Limit Load events. Combining Pa with the two probabilities Po and Pe obtained from the field survey, the cut-off level of realistic impact energy was estimated to be 34 J. In summary, a new survey of civil aircraft field records was recently carried out to investigate the probabilistic distribution of impact threat. Based on the data, the two probabilities, Po and Pe, were obtained. Considering a conservative assumption of Pa, the cut-off energy level for the realistic impact energy has been determined, which is potentially applicable to the damage tolerance certification of future civil aircraft.
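A hedged numerical sketch of the cut-off-energy logic described above: given Po and a target Pa, the required exceedance probability is inverted through a log-linear Pe(E) model. The coefficients a and b below are placeholders, not the fitted values from the survey; only Po and the target Pa follow the abstract.

import math

Po = 2.1e-4          # probability of impact damage occurrence per flight hour
Pa_target = 1.0e-5   # assumed combined probability (comparable to Limit Load events)

# Required exceedance probability at the cut-off energy.
Pe_required = Pa_target / Po

# Assumed log-linear exceedance model: log10(Pe) = a - b * E  (E in joules).
a, b = 0.0, 0.04     # hypothetical fit coefficients for illustration only
E_cutoff = (a - math.log10(Pe_required)) / b
print(f"Pe required = {Pe_required:.3f}, cutoff energy ~= {E_cutoff:.1f} J")

With the survey-fitted coefficients in place of the placeholders, the same inversion yields the 34 J cut-off quoted in the abstract.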

Keywords: composite structure, damage tolerance, impact threat, probabilistic

Procedia PDF Downloads 308
31 R&D Diffusion and Productivity in a Globalized World: Country Capabilities in an MRIO Framework

Authors: S. Jimenez, R.Duarte, J.Sanchez-Choliz, I. Villanua

Abstract:

There is a certain consensus in the economic literature about the factors that have driven the historical differences in growth rates observed between developed and developing countries. However, it is less clear which elements have marked the different growth paths of developed economies in recent decades. R&D has always been seen as one of the major sources of technological progress and of productivity growth, which is directly influenced by technological developments. Following the recent literature, we can say that 'innovation pushes the technological frontier forward' as well as encouraging future innovation through the creation of externalities. In other words, the productivity benefits from innovation are not fully appropriated by innovators but also spread through the rest of the economies, encouraging absorptive capacities, which have become especially important in a context of increasing fragmentation of production. This paper aims to contribute to this literature in two ways: first, by exploring alternative indexes of R&D flows embodied in inter-country, inter-sectoral flows of goods and services (as an approximation to technology spillovers) that capture structural and technological characteristics of countries; and, second, by analyzing the impact of direct and embodied R&D on the evolution of labor productivity at the country/sector level in recent decades. The traditional calculation through a multiregional input-output framework assumes that all countries have the same capability to absorb technology, but this is not the case: each country has different structural features and, as part of the literature claims, this implies different capabilities. In order to capture these differences, we propose to use weights based on specialization structure indexes: one related to the specialization of countries in high-tech sectors, and the other based on a dispersion index. We propose these two measures because, to our understanding, country capabilities can be captured in different ways: through the specialization of countries in knowledge-intensive sectors, such as Chemicals or Electrical Equipment, or through an intermediate technology effort spread across different sectors. The results suggest the increasing importance of country capabilities as trade openness increases. Besides, if we focus on the country rankings, we can observe that with high-tech weighted embodied R&D, countries such as China, Taiwan, and Germany rise into the top five despite not having the highest R&D expenditure intensities, showing the importance of country capabilities. Additionally, through a fixed-effects panel data model we show that embodied R&D is indeed important in explaining labor productivity increases, even more so than direct R&D investments. This reflects that globalization is more important than has been acknowledged until now. It is true, however, that almost all related analyses consider the effect of direct R&D intensity at t-1 on economic growth. Nevertheless, from our point of view, R&D evolves as a delayed flow and some time is needed before its effects on the economy can be seen, as some authors have already claimed. Our estimations tend to corroborate this hypothesis, obtaining a lag of between 4 and 5 years.
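An illustrative sketch (toy numbers, not the authors' dataset) of how embodied R&D intensities can be computed in a multiregional input-output system through the Leontief inverse and then weighted by a country specialization index; the matrix, intensities, and weights are all hypothetical.

import numpy as np

# Toy technical-coefficient matrix A for 4 country-sectors (rows = inputs).
A = np.array([[0.10, 0.05, 0.02, 0.01],
              [0.04, 0.12, 0.06, 0.02],
              [0.03, 0.02, 0.15, 0.05],
              [0.01, 0.03, 0.04, 0.10]])
r_direct = np.array([0.020, 0.005, 0.030, 0.010])   # direct R&D per unit of output

L = np.linalg.inv(np.eye(4) - A)                    # Leontief inverse
r_embodied = r_direct @ L                           # direct + indirect R&D embodied per unit of final demand

# Hypothetical specialization weights (e.g. share of high-tech sectors),
# meant to capture different absorptive capabilities across countries.
w = np.array([1.2, 0.8, 1.1, 0.9])
print(np.round(r_embodied * w, 4))

The weighted embodied intensities would then enter the fixed-effects panel regression of labor productivity, alongside (lagged) direct R&D intensity.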

Keywords: economic growth, embodied, input-output, technology

Procedia PDF Downloads 124
30 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by the measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects, divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bilinear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), the number of hidden layers (1, 3, 5, or 10), and the number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
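A hedged sketch of the best-performing configuration reported above (three hidden layers of 100 ReLU units, MSE loss, SGD with learning rate 0.01, batch size 10, early stopping), written with current tf.keras rather than the original TensorFlow 1.13 code; the random arrays stand in for the resized, scaled X-rays and labeled angles, which are not reproduced here.

import numpy as np
import tensorflow as tf

n_train, h, w = 481, 500, 187
x = np.random.rand(n_train, h * w).astype("float32")                  # stand-in for flattened X-rays
y = np.random.uniform(10, 60, size=(n_train, 1)).astype("float32")   # stand-in Cobb angles (degrees)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(h * w,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),                                         # regression output: Cobb angle
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])

early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
model.fit(x, y, batch_size=10, epochs=100, validation_split=0.2,
          callbacks=[early_stop], verbose=0)

Swapping the activation, the number of hidden layers, and the layer width over the grids listed above reproduces the 36 network conditions of the study design.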

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 129
29 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulty of measuring it. The recovery of fECG from signals acquired non-invasively using electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings, which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization - QIO). It aims at improving the performance of the most commonly adopted methods for fECG extraction, usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; maternal ECG (mECG) extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing from the abdominal signals an mECG estimated by principal component analysis (PCA) and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal recordings with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and the ICA-based methods were compared by analyzing two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains the fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain the fetal QRS annotations, so that the comparison between the two methods can only be qualitative. The comparison on NIdb was therefore performed by defining an index of quality for the fetal RR series. On the annotated database ADdb, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, outperforming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. On NIdb, the index of quality was higher for the QIO-based method than for the ICA-based one in 35 out of 55 records. The QIO-based method gave very high performance on both databases. The results of this study foresee the application of the algorithm in a fully unsupervised way for implementation in wearable devices for the self-monitoring of fetal health.
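A hedged sketch of the ICA-based baseline described above: approximate and cancel the dominant maternal ECG component from the abdominal channels with PCA, then run ICA on the residuals to separate a fetal ECG candidate. Signal sizes and the number of components are assumptions, and real pipelines add filtering plus maternal/fetal QRS detection around these two steps.

import numpy as np
from sklearn.decomposition import PCA, FastICA

fs, n_channels, n_samples = 1000, 4, 60 * 1000
rng = np.random.default_rng(0)
abdominal = rng.standard_normal((n_samples, n_channels))   # stand-in for preprocessed abdominal signals

# Approximate the dominant (maternal) component and subtract its projection.
pca = PCA(n_components=1)
maternal = pca.inverse_transform(pca.fit_transform(abdominal))
residual = abdominal - maternal

# Blind source separation of the residuals; one of the sources is inspected as the fECG candidate.
sources = FastICA(n_components=n_channels - 1, random_state=0).fit_transform(residual)
print(sources.shape)   # (60000, 3)

The QIO method proposed in the paper replaces this last separation step with a search for the linear combination of channels that maximizes the fetal quality index fQI.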

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 280
28 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues for designing high-performance PEC devices. It is therefore very important to understand how oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or micro-crystals can have two possible effects on PEC performance. Firstly, an electron-hole pair produced at the interface of the photoelectrode and the electrolyte can recombine at the defect centers under illumination, thereby reducing the PEC performance. On the other hand, the defects could lead to higher light absorption in the longer wavelength region and may act as energy centers for the water splitting reaction, which can improve the PEC performance. Even though the dislocation growth of ZnO has been verified by full density functional theory (DFT) calculations and local density approximation (LDA) calculations, further studies are required to correlate the structures of ZnO with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of their mutual interactions. As there are few studies on ZnO growth with other materials, and the growth mechanism in those cases has not been clearly explored yet, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures that exhibit significant PEC performance. Herein, we fabricated various ZnO nanostructures such as hollow spheres, bucky bowls, nanorods, and triangles, investigated their pH-dependent growth mechanisms, and correlated them with the PEC performance. In particular, the origin of the well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which clearly suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO, in which surface oxygen is extracted from the ZnO surface by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of the specific structural shapes and oxygen vacancies via dislocations, and its impact on PEC performance, are explored. In the water splitting tests, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² (under UV light, ~360 nm) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst.
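A hedged back-of-the-envelope check of the IPCE figure quoted above, using the standard relation IPCE = 1240 · J / (λ · P). The incident light power density is an assumption made only for illustration; the abstract does not state it.

j_ph = 1.517      # photocurrent density, mA/cm^2 (from the abstract)
wavelength = 360  # nm (from the abstract)
p_light = 50.0    # assumed incident power density, mW/cm^2 (hypothetical)

ipce_percent = 100 * 1240 * j_ph / (wavelength * p_light)
print(f"IPCE ~= {ipce_percent:.2f} %")   # about 10.4 % with the assumed power density

This only shows that the reported photocurrent and IPCE are mutually consistent for an illumination intensity of that order; the actual measurement conditions are those of the study.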

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 294