Search results for: convergence type
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7315

5095 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are the epoch value, batch size, and convolutional filter size against input image size. A set of experiments was conducted to assess the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dataset-dependent. For the batch size evaluation, a larger batch size slightly decreases classification accuracy compared to a smaller one.
For example, selecting 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 decreases the accuracy rate to 86.5% at the 11th epoch, and to 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produces an accuracy of 70.4286%. The final experiment shows that accuracy improves with image size, although the performance gain comes at a considerable computational cost. These conclusions open opportunities towards better classification performance in various applications such as planetary remote sensing.
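
As an illustration of the scale of the evaluation, the hyperparameter grid reported above can be enumerated directly. The short sketch below uses only the values quoted in the abstract; the validity filter is our own illustrative assumption, not the authors' procedure:

```python
from itertools import product

# Hyperparameter values quoted in the abstract.
batch_sizes = [32, 64, 128, 200]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
image_sizes = [64, 96, 128, 180, 224]

def valid_combinations(kernels, images):
    """Pair each convolutional kernel size with each input image size;
    a kernel only makes sense when it is smaller than the image."""
    return [(k, s) for k, s in product(kernels, images) if k < s]

combos = valid_combinations(kernel_sizes, image_sizes)
print(len(combos))  # 45: every kernel here is smaller than every image
```

With kernel sizes of at most 30 and image sizes of at least 64, every pairing passes the filter, giving 9 x 5 = 45 kernel/image combinations per batch size setting.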

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 169
5094 Mirror of Princes as a Literary Genre in Classic Arabic Literature

Authors: Samir Kittaniy

Abstract:

The “Mirrors of Princes” genre is considered one of the most important literary types in the Arabic and Islamic heritage. The term can be found in various types of “Adab”. The paper deals with the phrase “Mirrors of Princes” itself, showing its nature and the extent of its spread among researchers. Thus, the article relates to one of the main cultural pillars of the literary heritage. Creative individuals working within the framework of this type of “Adab” viewed the rulers as the ultimate audience they tried to reach in their compositions, with the aim of educating, entertaining and amusing. Most literary compositions were submitted as gifts to the rulers, in an attempt to get closer to them. Pragmatic moral and political advice was among the most prominent means of gaining the approval of rulers.

Keywords: Islam, Arabic, literature, Middle East, mirrors of princes

Procedia PDF Downloads 522
5093 The Effect of Fuel Type on Synthesis of CeO2-MgO Nano-Powder by Combustion Method

Authors: F. Ghafoori-Najafabadi, R. Sarraf-Mamoory, N. Riahi-Noori

Abstract:

In this study, nanocrystalline CeO2-MgO powders were synthesized by combustion reactions using citric acid, ethylene glycol, and glycine as different fuels and nitrate as the oxidant. The powders obtained with the different fuels were characterized by scanning electron microscopy (SEM) and X-ray diffraction (XRD). The size and morphology of the particles and the extent of agglomeration in the powders were studied using SEM analysis. It is observed that the choice of fuel has a strong influence on the particle size and morphology of the resulting powder. X-ray diffraction revealed that no combined phases were observed, and that the MgO and CeO2 phases formed separately.

Keywords: nanoparticle, combustion synthesis, CeO2-MgO, nano-powder

Procedia PDF Downloads 411
5092 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of a DEM may depend on several factors, such as the data source, the capture method, the processing used to derive it, or the cell size. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through point-based sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation with a method that takes into account the surface nature of the DEM, so that the sampling is surface-based rather than point-based. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud product (PCpro). The present work describes the data capture on the ground and the post-processing tasks needed to obtain the point cloud used as the reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50 x 50 m patch obtained by registering scans from 4 different stations. The area studied was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 m; the scanner was mounted inverted so that the characteristic shadow circle produced in the direct position does not appear.
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so a filter had to be applied to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
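
The cloud-to-cloud comparison mentioned at the end can be sketched as a nearest-neighbour distance computation. The toy example below is illustrative only; real 50 x 50 m patches would require a spatial index such as a k-d tree rather than brute force:

```python
import math

def cloud_to_cloud(pc_pro, pc_ref):
    """Distance from each point of the product cloud (PCpro) to its
    nearest neighbour in the reference cloud (PCref). Brute force is
    fine for a toy example; real patches need a k-d tree."""
    return [min(math.dist(p, q) for q in pc_ref) for p in pc_pro]

# Tiny 3D point sets standing in for the registered patches.
pc_ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
pc_pro = [(0.0, 0.0, 0.1), (1.0, 0.1, 0.0)]
errors = cloud_to_cloud(pc_pro, pc_ref)
print(max(errors))  # worst-case discrepancy, here 0.1 m
```

Summary statistics of these per-point distances (mean, RMS, maximum) are what a surface-based quality assessment would report instead of a handful of check points.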

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 101
5091 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) addresses these drawbacks of SL approaches by combining the asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e. the agent-environment interaction, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in stock market environments.
From the point of view of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and shows its credibility and advantages for strategic decision-making.
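
A minimal sketch of the continuous re-allocation step described above, with hypothetical weights, actions, and cost rate (the actual TD3 environment of the paper is far richer, including technical indicators and sentiment features):

```python
def reallocate(weights, action, cost_rate=0.001):
    """One gradual re-allocation step: shift the current portfolio
    weights by a continuous action vector, clip at zero, renormalise,
    and charge a transaction cost proportional to turnover."""
    shifted = [max(w + a, 0.0) for w, a in zip(weights, action)]
    total = sum(shifted) or 1.0          # guard against an all-zero portfolio
    new_w = [w / total for w in shifted]
    turnover = sum(abs(n - w) for n, w in zip(new_w, weights))
    return new_w, cost_rate * turnover

# Two-asset example: move 20% of the book from asset 2 to asset 1.
w, cost = reallocate([0.5, 0.5], [0.2, -0.2])
print(w, cost)
```

Clipping at zero and renormalising keeps the weights a valid long-only allocation, while the turnover-proportional charge stands in for the transaction-cost constraint of the POMDP formulation.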

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 178
5090 The Need for the Utilization of Instructional Materials on the Teaching and Learning of Agricultural Science Education in Developing Countries

Authors: Ogoh Andrew Enokela

Abstract:

This paper dwelt on the need for the utilization of instructional materials, with highlights on the types of instructional materials, their selection, uses and importance in the learning and teaching of Agricultural Science Education in developing countries. It further discussed the concept of improvisation, with some recommendations regarding the availability and utilization of instructional materials in the teaching and learning of Agricultural Science Education.

Keywords: instructional materials, agricultural science education, improvisation, teaching and learning

Procedia PDF Downloads 323
5089 Medical Complications in Diabetic Recipients after Kidney Transplantation

Authors: Hakan Duger, Alparslan Ersoy, Canan Ersoy

Abstract:

Diabetes mellitus is the most common etiology of end-stage renal disease (ESRD), and diabetic nephropathy is the etiology of ESRD in approximately 23% of kidney transplant recipients. A successful kidney transplant improves the quality of life and reduces the mortality risk for most patients. However, patients require close follow-up after transplantation due to medical complications. Diabetes mellitus can affect patient morbidity and mortality through the possible effects of immunosuppressive therapy on glucose metabolism. We compared the frequency of medical complications and the outcomes in diabetic and non-diabetic kidney transplant recipients. Materials and Methods: This retrospective study included 498 patients who underwent kidney transplant surgery at our center over a 10-year period. The patients were divided into two groups: diabetics (46 ± 10 years, 26 males, 16 females) and non-diabetics (39 ± 12 years, 259 males, 197 females). The medical complications, graft functions, and causes of graft loss and death were obtained from medical records. Results: There was no significant difference in recipient age, duration of dialysis, body mass index, gender, donor type, donor age, dialysis type, or histories of HBV, HCV and coronary artery disease between the two groups. The history of hypertension was more frequent in diabetics (69% vs. 36%, p < 0.001). The ratios of hypertension (50.1% vs. 57.1%), pneumonia (21.9% vs. 20%), urinary infection (16.9% vs. 20%), transaminase elevation (11.5% vs. 20%), hyperkalemia (14.7% vs. 17.1%), hyponatremia (9.7% vs. 20%), hypotension (7.1% vs. 7.9%), hypocalcemia (1.4% vs. 0%), thrombocytopenia (8.6% vs. 8.6%), hypoglycemia (0.7% vs. 0%) and neutropenia (1.8% vs. 0%) were comparable in the non-diabetic and diabetic groups, respectively. The frequency of hyperglycaemia was higher in diabetics (8.6% vs. 54.3%, p < 0.001). After transplantation, the primary non-function (3.4% vs. 2.6%), delayed graft function (25.1% vs. 34.2%) and acute rejection (7.3% vs. 10.5%) ratios in the non-diabetic and diabetic groups were similar, respectively. Hospitalization durations in non-diabetics and diabetics were 22.5 ± 17.5 and 18.7 ± 13 days (p = 0.094). Mean serum creatinine levels in non-diabetics and diabetics were 1.54 ± 0.74 and 1.52 ± 0.62 mg/dL at the 6th month. Forty patients had graft loss. The ratios of graft loss and death in the non-diabetic and diabetic groups were 8.2% vs. 7.1% and 7.1% vs. 2.6% (p > 0.05). There was no significant relationship between graft and patient survival and the development of medical complications. Conclusion: Medical complications are common in the early period. Hyperglycaemia was frequently seen following transplantation due to the effects of immunosuppressant regimens. However, the frequency of other medical complications in diabetic patients did not differ from non-diabetic ones. The most important cause of death is still infection. The development of medical complications during the first 6 months did not significantly affect transplant outcomes.

Keywords: kidney transplantation, diabetes mellitus, complication, graft function

Procedia PDF Downloads 330
5088 The Correlation between Air Pollution and Tourette Syndrome

Authors: Mengnan Sun

Abstract:

The association between air pollution and Tourette Syndrome (TS) is unclear, although it has been suspected that air pollution might trigger TS. TS is a type of neural system disease usually found among children. The number of TS patients has significantly increased in recent decades, suggesting the importance and urgency of examining the possible triggers or conditions associated with TS. In this study, the correlation between air pollution and three allergic diseases, asthma, allergic conjunctivitis (AC), and allergic rhinitis (AR), is examined. Then, a correlation between these allergic diseases and TS is demonstrated. In this way, this study establishes a positive correlation between air pollution and TS. Measures the public can take to help TS patients are also analyzed at the end of this article. The article hopes to raise people's awareness of reducing air pollution for the good of TS patients and of people with other disorders associated with air pollution.

Keywords: air pollution, allergic diseases, climate change, Tourette Syndrome

Procedia PDF Downloads 63
5087 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust

Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin

Abstract:

The presented material concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, which are called the "hot part". The second part is the gas exhaust system, which contains elements exclusively for reducing exhaust noise (mufflers, resonators), and is designated the "cold part". Designing the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but on the condition of accurately specifying the input parameters, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of an engine with a "hot part". Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a complex of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic impedance of radiation from the outlet pipe into open space), with the result in the form of the input impedance of the "cold part". The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the "hot part".
The third part of the technique consists of mathematically processing the results according to the proposed formula for the convergence of the series summing the multiple reflections of the acoustic signal between the "cold part" and the "hot part". This is followed by a set of tests on an engine stand, with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematically processing all calculated and experimental data to obtain a result in the form of the spectrum of the amplitude of the engine noise and its acoustic impedance.
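
The convergence of such a multiple-reflection summation can be illustrated with scalar reflection coefficients (an assumption made here purely for illustration; the paper works with frequency-dependent impedances). The series of round trips between the "cold part" and "hot part" is geometric and converges whenever the product of the two coefficients has magnitude below one:

```python
def reflection_series(r_cold, r_hot, n_terms=50):
    """Partial sum of the multiple-reflection series
    1 + r + r**2 + ... with r = r_cold * r_hot; it converges to
    1 / (1 - r) whenever |r| < 1."""
    r = r_cold * r_hot
    return sum(r ** k for k in range(n_terms))

s = reflection_series(0.6, 0.5)   # r = 0.3, well inside the unit disc
print(s, 1.0 / (1.0 - 0.3))       # partial sum vs closed form
```

For |r| close to one (strongly reflective terminations), many more terms are needed, which is why a convergence formula for the summation matters in practice.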

Keywords: acoustic impedance, engine exhaust system, FEM model, test stand

Procedia PDF Downloads 59
5086 Intellectual Capital as Resource Based Business Strategy

Authors: Vidya Nimkar Tayade

Abstract:

Introduction: The intellectual capital of an organization is a key factor for success. Many companies invest huge amounts in their research and development activities. Any innovation is helpful not only to that particular company but also to many other companies, the industry, and mankind as a whole. Companies undertake innovative changes to increase their capital profitability and, indirectly, the pay packages of their employees. The quality of human capital can also improve through such positive changes, as employees become more skilled and experienced through innovations and inventions. For this study of increasing intangible capital, the author referred to several books and case studies, as well as different charts and tables. Case studies are particularly important because they are proven and established techniques: they enable students to apply theoretical concepts in real-world situations and give solutions to open-ended problems with multiple potential solutions. There are three different strategies for increasing intellectual capital: the research push strategy (technology-push approach), the market pull strategy (market-pull approach), and the open innovation strategy (open-innovation approach). In the research push strategy, research is undertaken and innovation is achieved on its own; after the invention, the inventor company protects it and finds buyers for it. In this way, the invention is pushed into the market: research and development are undertaken first, and the outcome of the research is then commercialized. In the market pull strategy, commercial opportunities are identified first and research is concentrated in that particular area; research is undertaken to solve a particular problem. It becomes easier to commercialize this type of invention, because the problem is identified first and research and development activities are carried out in that direction.
In the open innovation strategy, more than one company enters into a research agreement, and the benefits of the outcome are shared by the companies involved. Internal and external ideas and technologies are coordinated and then commercialized. Due to globalization, people from outside the company are also invited to undertake research and development activities. The remuneration of the employees of the participating companies can increase, and the benefit of commercializing the invention is shared among them. Conclusion: In modern days, not only tangible assets but also intangible assets can be commercialized. The benefits of an invention can be shared by more than one company, competition can become more meaningful, and the pay packages of employees can improve. Adopting such strategies to benefit employees, competitors, and stakeholders is a timely need.

Keywords: innovation, protection, management, commercialization

Procedia PDF Downloads 168
5085 Directed-Wald Test for Distinguishing Long Memory and Nonlinearity Time Series: Power and Size Simulation

Authors: Heri Kuswanto, Philipp Sibbertsen, Irhamah

Abstract:

A Wald-type test to distinguish between long memory and ESTAR nonlinearity has been developed. The test uses a directed-Wald statistic to overcome the problem of restricted parameters under the alternative. The test is derived from a model specification that allows the transition parameter to appear as a nuisance parameter in the transition function. A simulation study has been conducted, and it indicates that the approach yields a test with good size and power properties for distinguishing between stationary long memory and ESTAR.
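
The size part of such a simulation study can be sketched in a few lines: under the null, a Wald statistic for a single restriction is asymptotically chi-squared with one degree of freedom, so the empirical rejection rate at the 5% critical value should be close to 0.05. The simple mean-zero statistic below is illustrative only, not the directed-Wald statistic of the paper:

```python
import random
import statistics

def wald_stat(sample):
    """Wald statistic for H0: mean = 0 (the squared t statistic)."""
    n = len(sample)
    m = statistics.fmean(sample)
    return n * m * m / statistics.variance(sample)

random.seed(1)
crit = 3.841          # 5% critical value of chi-squared with 1 df
reps, rejections = 2000, 0
for _ in range(reps):
    sample = [random.gauss(0.0, 1.0) for _ in range(200)]
    if wald_stat(sample) > crit:
        rejections += 1
print(rejections / reps)  # empirical size, close to the nominal 0.05
```

The power experiment works the same way, except that the data are generated under the alternative (here, an ESTAR process) and a high rejection rate is desired.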

Keywords: directed-Wald test, ESTAR, long memory, distinguish

Procedia PDF Downloads 482
5084 Influence of Micro Fillers Content on the Mechanical Properties of Epoxy Composites

Authors: H. Unal, A. Mimaroglu, I. Ozsoy

Abstract:

In this study, the mechanical properties of micro-filled epoxy composites were investigated. The matrix material is epoxy; the micro fillers are Al2O3 and TiO2, added at 10-30 wt%. Test samples were prepared using an open-mould-type die. Tensile, three-point bending and hardness tests were carried out. The tensile strength, elastic modulus, elongation at break, flexural strength, flexural modulus and hardness of the composite materials were obtained and evaluated. The results show that the mechanical properties of the epoxy composites are strongly influenced by the micro filler content.
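
For intuition on how filler content can shift composite stiffness, simple rule-of-mixtures bounds can be computed. The moduli and volume fraction below are illustrative assumptions, not values from the study; measured stiffness typically falls between the two bounds:

```python
def voigt_modulus(e_matrix, e_filler, v_filler):
    """Upper bound (Voigt, iso-strain rule of mixtures)."""
    return (1.0 - v_filler) * e_matrix + v_filler * e_filler

def reuss_modulus(e_matrix, e_filler, v_filler):
    """Lower bound (Reuss, iso-stress rule of mixtures)."""
    return 1.0 / ((1.0 - v_filler) / e_matrix + v_filler / e_filler)

# Illustrative values only: epoxy ~3 GPa, Al2O3 ~300 GPa, 10 vol% filler.
upper = voigt_modulus(3.0, 300.0, 0.10)
lower = reuss_modulus(3.0, 300.0, 0.10)
print(lower, upper)
```

The wide gap between the bounds for stiff particulate fillers is one reason experimental characterization, as performed in this study, remains necessary.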

Keywords: composites, epoxy, fillers, mechanical properties

Procedia PDF Downloads 486
5083 Natural Gas Production Forecasts Using Diffusion Models

Authors: Md. Abud Darda

Abstract:

Different options for natural gas production in wide geographic areas may be described through diffusion-of-innovation models. This type of modeling approach provides an indirect estimate of the ultimately recoverable resource (URR), captures the quantitative effects of observed strategic interventions, and allows ex-ante assessments of future scenarios over time. In order to ensure a sustainable energy policy, it is important to forecast the availability of this natural resource. Considering a finite life cycle, in this paper we investigate the natural gas production of Myanmar and Algeria, two important natural gas providers in the world energy market. A number of homogeneous and heterogeneous diffusion models, with convenient extensions, have been used. Model validation has also been performed in terms of prediction capability.
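
A logistic (Hubbert-style) diffusion curve is the simplest homogeneous model of this kind: cumulative production saturates at the URR, and annual production is its increment. The sketch below uses invented parameter values purely for illustration:

```python
import math

def logistic_cumulative(t, urr, k, t0):
    """Logistic cumulative production; approaches the ultimately
    recoverable resource (URR) as t grows large."""
    return urr / (1.0 + math.exp(-k * (t - t0)))

def annual_production(years, urr, k, t0):
    """Yearly production as the increment of cumulative production."""
    q = [logistic_cumulative(t, urr, k, t0) for t in years]
    return [b - a for a, b in zip(q, q[1:])]

years = list(range(1990, 2051))
prod = annual_production(years, urr=1000.0, k=0.15, t0=2020.0)
peak_year = years[prod.index(max(prod))]
print(peak_year)  # for a logistic curve, production peaks near t0
```

Heterogeneous extensions replace the single adopter population with several, which changes the shape of the production curve while keeping the URR interpretation.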

Keywords: diffusion models, energy forecast, natural gas, nonlinear production

Procedia PDF Downloads 227
5082 Integration of Microarray Data into a Genome-Scale Metabolic Model to Study Flux Distribution after Gene Knockout

Authors: Mona Heydari, Ehsan Motamedian, Seyed Abbas Shojaosadati

Abstract:

Prediction of perturbations after genetic manipulation (especially gene knockout) is one of the important challenges in systems biology. In this paper, a new algorithm is introduced that integrates microarray data into a metabolic model. The algorithm was used to study the change in the cell phenotype after knockout of the gss gene in Escherichia coli BW25113. Implementation of the algorithm indicated that the gene deletion resulted in greater activation of the metabolic network. The growth yield was higher, and up- and down-regulated genes were identified for the mutant in comparison with the wild-type strain.
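
The integration idea can be caricatured in a few lines: expression levels scale the flux bounds of the metabolic model, and a knockout pins the corresponding flux to zero before flux balance analysis is run. The reaction names and expression levels below are hypothetical:

```python
def expression_constrained_bounds(base_bounds, expression, knockout=None):
    """Scale each reaction's flux bounds by its normalised expression
    level (from microarray data); a knocked-out reaction is forced to
    carry zero flux before flux balance analysis is run."""
    bounds = {}
    for rxn, (lb, ub) in base_bounds.items():
        level = expression.get(rxn, 1.0)
        bounds[rxn] = (lb * level, ub * level)
    if knockout is not None:
        bounds[knockout] = (0.0, 0.0)
    return bounds

# Hypothetical two-reaction toy model.
base = {"R_gss": (0.0, 10.0), "R_biomass": (0.0, 5.0)}
expr = {"R_gss": 0.8, "R_biomass": 0.6}
bounds = expression_constrained_bounds(base, expr, knockout="R_gss")
print(bounds)
```

In a genome-scale model, these tightened bounds would feed into the linear program that flux balance analysis solves to predict growth yield and flux distribution.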

Keywords: metabolic network, gene knockout, flux balance analysis, microarray data, integration

Procedia PDF Downloads 579
5081 Design and Construction of Temperature and Humidity Control Channel for a Bacteriological Incubator

Authors: Carlos R. Duharte Rodríguez, Ibrain Ceballo Acosta, Carmen B. Busoch Morlán, Angel Regueiro Gómez, Annet Martinez Hernández

Abstract:

This work presents the design and characterization of a prototype laboratory incubator to support research in microbiology, in particular studies of bacterial growth in biological samples, with the help of optical methods (turbidimetry) and electrometric measurements of bioimpedance. It shows the results of the simulation and experimental testing of the design proposed for the measurement channels of the variables temperature and humidity, achieving high linearity through the adequate selection of the sensors and analogue components of every channel, controlled with the help of an AT89C51 microcontroller (ATMEL) with adequate performance for this type of application.
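
A temperature control channel of this kind is often closed with a simple on-off (bang-bang) law in the microcontroller firmware. The sketch below is a generic illustration with an invented plant response, not the actual AT89C51 code:

```python
def thermostat_step(temp, setpoint, heater_on, hysteresis=0.5):
    """On-off control with a dead band: switch on below
    setpoint - hysteresis, off above setpoint + hysteresis,
    otherwise keep the current heater state."""
    if temp < setpoint - hysteresis:
        return True
    if temp > setpoint + hysteresis:
        return False
    return heater_on

# Crude simulation around a 37 C bacteriological setpoint.
temp, heater = 30.0, False
trace = []
for _ in range(20):
    heater = thermostat_step(temp, 37.0, heater)
    temp += 0.8 if heater else -0.3   # invented plant response per tick
    trace.append(round(temp, 1))
print(trace[-1])  # settles into oscillation around the setpoint
```

The hysteresis band prevents rapid relay chatter around the setpoint, at the cost of a bounded steady-state oscillation.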

Keywords: microbiology, bacterial growth, incubation station, microorganisms

Procedia PDF Downloads 401
5080 Regional Adjustment to the Analytical Attenuation Coefficient in the GMPM BSSA 14 for the Region of Spain

Authors: Gonzalez Carlos, Martinez Fransisco

Abstract:

There are various types of analysis that allow us to account for seismic phenomena that impose strong demands on the structures designed by society. One of them is probabilistic analysis, which works from prediction equations created from seismic metadata compiled in different regions. These equations form models used to describe the 5% damped pseudo-spectral response for the various zones from a few easily known input parameters. The biggest problem in creating these models is that they require data with robust statistics to support the results, and there are several places where this type of information is not available, for which the use of alternative methodologies helps to achieve adjustments to different seismic prediction models.

Keywords: GMPM, 5% damped pseudo-response spectra, models of seismic prediction, PSHA

Procedia PDF Downloads 76
5079 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curricula. In the mid-1950s, this type of research was cut short by the work of psychiatrist Frederic Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham's allegations, the comics medium has had a hard time finding its way back into education. Now, over fifty years later, the definition of literacy is in mid-transition, as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in the field of literacy education research as the focus shifts from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource for bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium being examined, and the different conventions the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and the different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored parallel to the core principles of media literacy education.
Each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines the use of comics in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud's text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphic novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 299
5078 Turbulent Flow in Corrugated Pipes with Helical Grooves

Authors: P. Mendes, H. Stel, R. E. M. Morales

Abstract:

This article presents a numerical and experimental study of turbulent flow in corrugated pipes with helical "d-type" grooves, for Reynolds numbers between 7,500 and 100,000. The ANSYS CFX software is used to solve the RANS equations with the BSL two-equation turbulence model, through an element-based finite-volume approach. Different groove widths and helix angles are considered. The numerical results are validated against experimental pressure drop measurements for the friction factor. A correlation for the friction factor is also proposed, considering the geometric parameters and the Reynolds numbers evaluated.
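
A friction-factor correlation of the form f = a Re^b can be fitted by least squares in log-log space. The sketch below recovers the classical Blasius smooth-pipe coefficients from synthetic data over the same Reynolds range; it is illustrative only, since the paper's correlation also involves groove width and helix angle:

```python
import math

def fit_power_law(re_values, f_values):
    """Least-squares fit of f = a * Re**b in log-log coordinates,
    the usual form of a friction-factor correlation."""
    xs = [math.log(re) for re in re_values]
    ys = [math.log(f) for f in f_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data on the Blasius smooth-pipe law 0.316 * Re**-0.25.
re = [7500, 20000, 50000, 100000]
f = [0.316 * r ** -0.25 for r in re]
a, b = fit_power_law(re, f)
print(round(a, 3), round(b, 3))  # recovers 0.316 and -0.25
```

Extending the fit to groove width and helix angle amounts to adding further regressors in the log-linear model.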

Keywords: turbulent flow, corrugated pipe, helical, numerical, experimental, friction factor, correlation

Procedia PDF Downloads 484
5077 Energy Usage in Isolated Areas of Honduras

Authors: Bryan Jefry Sabillon, Arlex Molina Cedillo

Abstract:

Currently, the rise in the demand for electrical energy as a consequence of technological development and population growth, as well as projections made by ‘La Agencia Internacional de la Energía’ (AIE) and research institutes, reveal alarming data about the expected rise over the next few decades. Because of this, something should be done to raise awareness of the rational and efficient use of this resource. Because of the global concern with providing electrical energy to isolated areas, projects consisting of energy generation using renewable resources are commonly carried out. From a socioeconomic and cultural point of view, a positive impact can be foreseen for a society that gains access to this resource. This article focuses on the great potential that Honduras shows as a country looking forward to producing renewable energy in response to the crisis it is living through nowadays. We present detailed research that exhibits the main necessities that rural communities face today, in order to allay the negative effects of the scarcity of electrical energy. We also discuss which type of electrical generation method should be used according to the disposition, geography, climate, and, of course, the accessibility of each area. Honduras is currently in the process of developing new methods for the generation of energy; therefore, it is of our concern to talk about renewable energy, the exploitation of which is a global trend. Right now the country's main generation methods are hydroelectric, thermal, wind, biomass and photovoltaic (the latter being one of the main sources of clean electrical generation).
The use of these resources was made possible in part by the studies of the organizations that focus on electrical energy and its demand, such as ‘La Cooperación Alemana’ (GIZ), ‘La Secretaría de Energía y Recursos Naturales’ (SERNA), and ‘El Banco Centroamericano de Integración Económica’ (BCIE), which provided the complete guide to the protocol to be followed in carrying out the three stages of this type of project: 1) licences and permissions, 2) financial aspects, and 3) registration under the Kyoto Protocol. This article aims to take the reader through the necessary information (according to the difficult accessibility that each zone might present) about the best option for electrical generation in zones that are totally isolated from the grid, with the intention of using renewable resources to generate electrical energy. We finally conclude that the use of hybrid generation systems for small remote communities brings about a positive impact, not only because of the provision of electrical energy itself but also because of the improvements in education, health, sustainable agriculture and livestock, and, of course, the advances in the generation of energy, which is the main concern of this article.

Keywords: energy, isolated, renewable, accessibility

Procedia PDF Downloads 229
5076 Full Potential Calculation of Structural and Electronic Properties of Perovskite BiAlO3 and BiGaO3

Authors: M. Harmel, H. Khachai

Abstract:

First-principles calculations within the full-potential linearized augmented plane wave (FP-LAPW) method were applied to study the structural and electronic properties of the cubic perovskite-type compounds BiAlO3 and BiGaO3. The lattice constant, bulk modulus and its pressure derivative, band structure, and density of states were obtained. The results show that BiGaO3 should exhibit higher hardness and stiffness than BiAlO3. The Al-O and Ga-O bonds are typically covalent, with strong hybridization, while the Bi-O bonds have a significant ionic character. Both materials are weakly ionic and exhibit wide, indirect band gaps, which are typical of insulators.

Keywords: DFT, Ab initio, electronic structure, Perovskite structure, ferroelectrics

Procedia PDF Downloads 397
5075 Probiotics in Anxiety and Depression

Authors: Pilar Giffenig, Avanna Kotlarz, Taylor Dehring

Abstract:

Anxiety and depression are among the most common mental illnesses in the U.S. today. While there are various treatments for these mental health disorders, many of the medications come with a wide variety of side effects that decrease medication compliance. Recent studies have looked at the impact of probiotics on anxiety and depression. Our goal was to determine whether probiotics could help relieve symptoms of anxiety and/or depression. We conducted a literature search of three databases, focusing on systematic reviews and RCTs, and found 25 articles, 8 of which were used for our analysis. Seven of the eight articles showed that probiotics have the potential to significantly reduce symptoms of anxiety and depression. However, larger sample sizes and clarity on probiotic type and correct dosage are required in future research to determine the role of probiotics in the treatment of anxiety and depression.

Keywords: probiotics, anxiety, depression, treatment, psychology, nutrition

Procedia PDF Downloads 270
5074 A Theoretical Approach of Tesla Pump

Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu

Abstract:

This paper aims to study Tesla pumps for circulating biofluids; the goal is to make a small pump for biofluid circulation. This type of pump is studied because it has the following characteristics: it has no blades, which results in very low friction; reduced friction forces; low production cost; increased adaptability to different types of fluids; cavitation close to zero; low shock loads due to the lack of blades; rare maintenance due to the low cavitation; very small turbulence in the fluid; few changes in the direction of the fluid (compared with bladed rotors); increased efficiency at low power; fast acceleration; a low torque requirement; and no blade shocks at sudden starts and stops. All these elements are necessary to make a small pump that could be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted in the thoracic cavity, elements such as low friction forces, shocks as low as possible, low cavitation, and as little maintenance as possible are very important. The operation should be performed once, without having to change the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, being used in a medical procedure, a low cost is important so that the pump is easily accessible to the population. The lack of the turbulence and vortices caused by a classic rotor is again a key element: when it comes to blood circulation, the flow must be laminar, not turbulent, since turbulent flow can even cause a heart attack. For these reasons, Tesla's model could be ideal for this work. Usually the pump is considered to reach an efficiency of 40%, being used at very high power. However, the inventor of this type of pump claimed that the maximum achievable efficiency is 98%. The key element that could help reach this efficiency, or one as close to it as possible, is that the pump will be used at low volumes and pressures. The key parameters for obtaining the best efficiency with this model are the number of rotor discs placed in parallel and the distance between them. The distance between the discs must be small, which also helps keep the pump as small as possible. The operating principle of such a rotor is to stack several parallel discs with central cutouts; the space between the discs creates a suction effect, pulling the liquid through the holes in the rotor and throwing it outwards. The viscosity of the liquid is also very important: it dictates the disc spacing needed for low-loss power transfer.
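The abstract's closing point, that viscosity dictates the disc spacing, is often quantified in the Tesla-turbine/pump literature by matching the gap to the laminar boundary-layer thickness. A hedged sketch of that common heuristic follows; the formula, the blood kinematic viscosity, and the rotor speed are assumptions from outside the abstract, not the authors' design values:

```python
import math

def tesla_disc_gap(nu, omega):
    """Heuristic disc spacing for a Tesla rotor, taken from the Tesla-turbine
    literature (an assumption, not from this abstract):
        gap ~ pi * sqrt(nu / omega)
    nu: kinematic viscosity [m^2/s], omega: angular speed [rad/s]."""
    return math.pi * math.sqrt(nu / omega)

# Illustrative values: blood nu ~ 3.3e-6 m^2/s, rotor at ~3000 rpm (~314 rad/s).
gap = tesla_disc_gap(3.3e-6, 314.0)  # on the order of a few tenths of a millimetre
```

The point of the sketch is only that a more viscous fluid (blood versus water) calls for a proportionally wider gap at the same speed, consistent with the abstract's remark.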

Keywords: lubrication, temperature, tesla-pump, viscosity

Procedia PDF Downloads 179
5073 Fatigue-Induced Debonding Propagation in FM300 Adhesive

Authors: Reza Hedayati, Meysam Jahanbakhshi

Abstract:

Fracture mechanics is used to predict debonding propagation in an adhesive joint between aluminum and composite plates. Three types of loading and two glass-epoxy stacking sequences, [0/90]2s and [0/45/-45/90]s, are considered for the composite plate, and their results are compared. It was seen that, in general, cases with the [0/45/-45/90]s stacking sequence have much shorter lives than cases with [0/90]2s. It was also seen that for λ=0 the ends of the debonding front propagate forward more than its middle, while for λ=0.5 or λ=1 the opposite holds. Moreover, regardless of the value of λ, the difference between the debonding propagation at the ends and at the middle of the front is very similar for λ=0.5 and λ=1. Another main conclusion was that the non-dimensionalized debonding front profile is almost independent of the stacking sequence and the applied load value.

Keywords: adhesive joint, debonding, fracture, LEFM, APDL

Procedia PDF Downloads 362
5072 Compact, Lightweight, Low Cost, Rectangular Core Power Transformers

Authors: Abidin Tortum, Kubra Kocabey

Abstract:

The transformer sector is one of the most competitive in the world, and sales are made with a limited profit margin. For this reason, manufacturers must develop cost-cutting designs to achieve higher profits. The use of rectangular cores and coils in transformer design is one such method. To the best of our knowledge, we are the first company in our country to produce rectangular-core power transformers. To reduce costs, make the product more compact, and increase its competitiveness, BETA will carry out the design and production of rectangular-core-and-coil power transformers for the first time in Turkey. The transformer to be designed is rated 16 MVA at the 33/11 kV voltage level. With a rectangular design of the core and windings, no-load losses can be reduced; the rectangular type is also the least costly. However, short-circuit forces on rectangular windings do not act on every point of the windings in the same way: more force is applied inwards at the mid-points of the low-voltage winding, while the opposite occurs in the high-voltage winding, so the windings tend to deform in the event of a short circuit. These design difficulties must be overcome to reach the project objectives. Rectangular-core transformers offer a more compact structure than conventional transformers: both height and width are smaller, so the unit takes up less space in the substation. Because the transformer tank is smaller, less oil is used and the weight is lower. Biotemp natural ester fluid is used in the rectangular transformer, and its cooling performance is analyzed. The cost was also reduced by the reduction in dimensions. The decrease in the amount of oil used also increases the environmental friendliness of the developed product. Transportation costs are reduced by the lower total weight, and the carbon emissions generated during transportation are reduced accordingly. Since the low-voltage winding is wound with a foil-winding technique, a structure more resistant to short-circuit forces is obtained. No-load losses were lower due to the use of a rectangular core. The project was handled in three phases: in the first stage, preliminary research and designs were carried out; in the second, prototype manufacturing of the completed design began; in the last, the developed prototype was subjected to routine, type, and special tests.

Keywords: rectangular core, power transformer, transformer, productivity

Procedia PDF Downloads 121
5071 Iron Yoke Dipole with High Quality Field for Collector Ring FAIR

Authors: Tatyana Rybitskaya, Alexandr Starostenko, Kseniya Ryabchenko

Abstract:

The Collector Ring (CR) of the FAIR project is a large-acceptance storage ring, so field quality plays a major role in the magnet design. The CR will use normal-conducting dipole magnets: 24 H-type sector magnets with a maximum field of 1.6 T. The field quality, integrated over the length of the magnet as a function of radius, is ΔB·l/B·l = ±1×10⁻⁴. Below 1.6 T, ΔB·l/B·l may be larger, increasing linearly up to ±2.5×10⁻⁴ at a field level of 0.8 T. An iron-dominated magnet with the required field quality can be produced with standard technology, as the quality is dominated by the yoke geometry.
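The linear relaxation of the tolerance between the two field levels quoted above can be sketched as a small interpolation (the two endpoints, ±2.5×10⁻⁴ at 0.8 T and ±1×10⁻⁴ at 1.6 T, are from the abstract; the function itself is our illustration):

```python
def field_quality_tolerance(b):
    """Allowed |dB.l/B.l| at field level b [T], interpolated linearly
    between the abstract's endpoints (0.8 T, 2.5e-4) and (1.6 T, 1.0e-4)."""
    b_lo, tol_lo = 0.8, 2.5e-4
    b_hi, tol_hi = 1.6, 1.0e-4
    slope = (tol_hi - tol_lo) / (b_hi - b_lo)
    return tol_lo + (b - b_lo) * slope

# At the maximum field the tight tolerance applies; midway it relaxes:
field_quality_tolerance(1.6)  # -> 1.0e-4
field_quality_tolerance(1.2)  # -> 1.75e-4
```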

Keywords: conventional magnet, iron yoke dipole, harmonic terms, particle accelerators

Procedia PDF Downloads 146
5070 Intensity Modulated Radiotherapy of Nasopharyngeal Carcinomas: Patterns of Loco Regional Relapse

Authors: Omar Nouri, Wafa Mnejja, Nejla Fourati, Fatma Dhouib, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Jamel Daoud

Abstract:

Background and objective: Induction chemotherapy (IC) followed by concomitant chemoradiotherapy with the intensity-modulated radiation therapy (IMRT) technique is currently the recommended treatment modality for locally advanced nasopharyngeal carcinoma (NPC). The aim of this study was to evaluate the prognostic factors predicting locoregional relapse with this new treatment protocol. Patients and methods: A retrospective study of 52 patients with NPC treated between June 2016 and July 2019. All patients received IC according to the protocol of the Head and Neck Radiotherapy Oncology Group (Gortec) NPC 2006 (3 TPF courses), followed by concomitant chemoradiotherapy with weekly cisplatin (40 mg/m²). Patients received IMRT with a simultaneous integrated boost (SIB) of 33 daily fractions, at doses of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume, and 54 Gy for the low-risk volume. Median age was 49 years (19-69), with a sex ratio of 3.3. Forty-five tumors (86.5%) were classified as stage III-IV according to the 2017 UICC TNM classification. Locoregional relapse (LRR) was defined as local and/or regional progression occurring at least 6 months after the end of treatment. Survival analysis was performed with the Kaplan-Meier method, and the log-rank test was used to compare anatomical, clinical, and therapeutic factors that may influence locoregional relapse-free survival (LRFS). Results: After a median follow-up of 42 months, 6 patients (11.5%) experienced LRR. A metastatic relapse was also noted in 3 of these patients (50%). Target volume coverage was optimal for all patients with LRR. Four relapses (66.6%) were within the high-risk target volume, and two (33.3%) were borderline. Three-year LRFS was 85.9%. Four factors predicted locoregional relapse: histologic type other than undifferentiated carcinoma (UCNT) (p=0.027), a macroscopic pre-chemotherapy tumor volume exceeding 100 cm³ (p=0.005), a reduction in IC doses exceeding 20% (p=0.016), and a total cumulative cisplatin dose of less than 380 mg/m² (p=0.034). TNM classification and response to IC did not impact locoregional relapse. Conclusion: In nasopharyngeal carcinoma, tumors with a high initial volume and/or a histologic type other than UCNT have a higher risk of locoregional relapse. They therefore require a more aggressive therapeutic approach and a suitable monitoring protocol.

Keywords: loco regional relapse, modulation intensity radiotherapy, nasopharyngeal carcinoma, prognostic factors

Procedia PDF Downloads 128
5069 “Voiceless Memory” and Holodomor (Great Famine): The Power of Oral History to Challenge Official Historical Discourse

Authors: Tetiana Boriak

Abstract:

This study tests the correlation between official sources preserved in the archives and "unofficial" oral history regarding the Great Famine of 1932-1933 in Ukraine. The research shows poor preservation of the sources, which were deliberately destroyed by the totalitarian regime. It includes an analysis of five stages in the development of Holodomor oral history. It is oral history that reveals the mechanism of the mass killing. The research proves that using only one type of historical source leads to a one-sided reading of the history of the Holodomor, while using both types provides in-depth insight into the history of the famine.

Keywords: the Holodomor (the Great Famine), oral history, historical source, historical memory, totalitarianism

Procedia PDF Downloads 108
5068 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 originated in Germany. This is unsurprising, as Industry 4.0 originated as a German strategy, with strong supporting policy instruments used in Germany for its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: there exists an unresolved gap between the data science experts and the manufacturing process experts in industry. Data analytics expertise is not useful unless the manufacturing process information is utilized. A legitimate understanding of the data is crucial to performing accurate analytics and gaining true, valuable insights into the manufacturing process. There is a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researcher initiated an industrial case study, embedding themselves between the subject-matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory; only three contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 111
5067 Diet and Exercise Intervention and Bio–Atherogenic Markers for Obesity Classes of Black South Africans with Type 2 Diabetes Mellitus Using Discriminant Analysis

Authors: Oladele V. Adeniyi, B. Longo-Mbenza, Daniel T. Goon

Abstract:

Background: Lipid levels are often low or within normal ranges among black Africans, and their role in atherogenesis is controversial. The effect of the severity of obesity on some traditional and novel cardiovascular disease risk factors, before and after a diet and exercise maintenance programme, is unclear among obese black South Africans with type 2 diabetes mellitus (T2DM). This study therefore aimed to identify the risk factors that discriminate obesity classes among patients with T2DM before and after a diet and exercise programme. Methods: This interventional cohort of black South Africans with T2DM followed a very-low-calorie diet and exercise programme in Mthatha between August and November 2013. Gender, age, and the levels of body mass index (BMI), blood pressure, monthly income, daily frequency of meals, random plasma glucose (RPG), serum creatinine, total cholesterol (TC), triglycerides (TG), LDL-C, HDL-C, non-HDL cholesterol, and the TC/HDL, TG/HDL, and LDL/HDL ratios were recorded. Univariate analysis (ANOVA) and multivariate discriminant analysis were performed to separate the obesity classes: normal weight (BMI = 18.5-24.9 kg/m²), overweight (BMI = 25-29.9 kg/m²), obesity class 1 (BMI = 30-34.9 kg/m²), obesity class 2 (BMI = 35-39.9 kg/m²), and obesity class 3 (BMI ≥ 40 kg/m²). Results: At baseline (Month 1, September), all 327 patients were overweight or obese: 19.6% overweight, 42.8% obese class 1, 22.3% obese class 2, and 15.3% obese class 3. In discriminant analysis, only systolic blood pressure (SBP, positive association) and the LDL/HDL ratio (negative association) significantly separated increasing obesity classes. At the post-intervention evaluation (Month 3, November), 19.9%, 19.3%, 37.6%, 15%, and 8.3% of the 327 patients had normal weight, overweight, obesity class 1, obesity class 2, and obesity class 3, respectively. There was a significant negative association between serum creatinine and increase in BMI. In discriminant analysis, only age (positive association), SBP (U-shaped relationship), monthly income (inverted U-shaped association), daily frequency of meals (positive association), and the LDL/HDL ratio (positive association) significantly classified increasing obesity classes. Conclusion: There is an epidemic of diabesity (obesity + T2DM) among these black South Africans, despite some weight loss. Further studies are needed to understand the positive and negative linear correlations, and the paradoxical curvilinear correlations, between these markers and increases in BMI among black South African T2DM patients.
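The BMI cutoffs that define the obesity classes in this abstract can be sketched as a small classifier (the cutoffs are the ones stated in the abstract; the function name and the "underweight" label below the study's lowest class are our additions):

```python
def bmi_class(bmi):
    """Map a BMI value (kg/m^2) to the study's weight classes.
    Cutoffs taken from the abstract: 18.5-24.9 normal, 25-29.9 overweight,
    30-34.9 class 1, 35-39.9 class 2, >= 40 class 3."""
    if bmi < 18.5:
        return "underweight"   # below the study's classification range
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "obesity class 1"
    if bmi < 40.0:
        return "obesity class 2"
    return "obesity class 3"

bmi_class(27.0)  # -> "overweight"
bmi_class(42.0)  # -> "obesity class 3"
```

In the study, the discriminant analysis is run against these class labels as the grouping variable, with SBP, LDL/HDL ratio, and the other recorded markers as predictors.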

Keywords: atherogenic dyslipidaemia, dietary interventions, obesity, south africans

Procedia PDF Downloads 367
5066 Modeling of Landslide-Generated Tsunamis in Georgia Strait, Southern British Columbia

Authors: Fatemeh Nemati, Lucinda Leonard, Gwyn Lintern, Richard Thomson

Abstract:

In this study, we use modern numerical modeling approaches to estimate tsunami risks to the southern coast of British Columbia from landslides. Wave generation is simulated using the NHWAVE model, which solves the Navier-Stokes equations, because of the more complex flow behavior near the landslide source; far-field wave propagation is simulated using the simpler FUNWAVE-TVD model, based on high-order Boussinesq-type wave equations, with a focus on the accurate simulation of wave propagation and regional- or coastal-scale inundation predictions.

Keywords: FUNWAVE-TVD, landslide-generated tsunami, NHWAVE, tsunami risk

Procedia PDF Downloads 155