Search results for: multiple input multiple output (MIMO)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7868

398 Energy Efficient Refrigerator

Authors: Jagannath Koravadi, Archith Gupta

Abstract:

In a world with constantly growing energy prices and growing concerns about global climate change caused by increased energy consumption, it is becoming more and more essential to save energy wherever possible. Refrigeration systems are among the major, bulk energy-consuming systems nowadays in the industrial, residential and household sectors. Refrigeration systems with considerable cooling requirements consume a large amount of electricity and thereby contribute greatly to running costs. Therefore, a great deal of attention is being paid throughout the world to improving the performance of refrigeration systems. The Coefficient of Performance (COP) of a refrigeration system is used for determining the system's overall efficiency. The operating cost to the consumer and the overall environmental impact of a refrigeration system in turn depend on the COP, or efficiency, of the system. The COP of a refrigeration system should therefore be as high as possible. Slight modifications in the technical elements of modern refrigeration systems have the potential to reduce energy consumption, and improvements in simple operational practices with minimal expense can have a beneficial impact on the COP of the system. Thus, the challenge is to determine the changes that can be made in a refrigeration system in order to improve its performance, reduce operating costs and power requirements, improve environmental outcomes, and achieve a higher COP. The opportunity here, and a better solution to this challenge, is to incorporate modifications in conventional refrigeration systems for saving energy. Energy efficiency, in addition to improvement of the COP, can deliver a range of savings such as reduced operation and maintenance costs, improved system reliability, improved safety, increased productivity, better matching of refrigeration load and equipment capacity, reduced resource consumption and greenhouse gas emissions, a better working environment, and reduced energy costs. The present work aims at fabricating a working model of a refrigerator that provides effective heat recovery from the superheated refrigerant with the help of an efficient de-superheater. The temperatures of the refrigerant and the water in the de-superheater are measured at different intervals of time to determine the quantity of waste heat recovered. It is found that the COP of the system improves by about 6% with the de-superheater, the power input to the compressor decreases by 4%, and the refrigeration capacity increases by 4%.
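
As an illustration of the COP calculation underlying these results, the following minimal Python sketch computes the COP from refrigeration capacity and compressor power and expresses the change as a percentage; the numerical inputs are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch of the COP calculation: COP = useful cooling effect / compressor work.
# The capacity and power figures below are illustrative placeholders, not measured values.

def cop(refrigeration_capacity_kw: float, compressor_power_kw: float) -> float:
    """Coefficient of Performance = refrigeration capacity / compressor work input."""
    return refrigeration_capacity_kw / compressor_power_kw

# Hypothetical readings without and with the de-superheater
cop_base = cop(refrigeration_capacity_kw=1.00, compressor_power_kw=0.36)
cop_dsh = cop(refrigeration_capacity_kw=1.04, compressor_power_kw=0.35)

improvement_pct = (cop_dsh - cop_base) / cop_base * 100
print(f"COP without de-superheater: {cop_base:.2f}")
print(f"COP with de-superheater:    {cop_dsh:.2f} ({improvement_pct:+.1f} %)")
```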

Keywords: coefficient of performance, de-superheater, refrigerant, refrigeration capacity, heat recovery

Procedia PDF Downloads 303
397 Experiments to Study the Vapor Bubble Dynamics in Nucleate Pool Boiling

Authors: Parul Goel, Jyeshtharaj B. Joshi, Arun K. Nayak

Abstract:

Nucleate boiling is characterized by the nucleation, growth and departure of the tiny individual vapor bubbles that originate in the cavities or imperfections present in the heating surface. It finds a wide range of applications, e.g. in heat exchangers or steam generators, core cooling in power reactors or rockets, cooling of electronic circuits, owing to its highly efficient transfer of large amount of heat flux over small temperature differences. Hence, it is important to be able to predict the rate of heat transfer and the safety limit heat flux (critical heat flux, heat flux higher than this can lead to damage of the heating surface) applicable for any given system. A large number of experimental and analytical works exist in the literature, and are based on the idea that the knowledge of the bubble dynamics on the microscopic scale can lead to the understanding of the full picture of the boiling heat transfer. However, the existing data in the literature are scattered over various sets of conditions and often in disagreement with each other. The correlations obtained from such data are also limited to the range of conditions they were established for and no single correlation is applicable over a wide range of parameters. More recently, a number of researchers have been trying to remove empiricism in the heat transfer models to arrive at more phenomenological models using extensive numerical simulations; these models require state-of-the-art experimental data for a wide range of conditions, first for input and later, for their validation. With this idea in mind, experiments with sub-cooled and saturated demineralized water have been carried out under atmospheric pressure to study the bubble dynamics- growth rate, departure size and frequencies for nucleate pool boiling. A number of heating elements have been used to study the dependence of vapor bubble dynamics on the heater surface finish and heater geometry along with the experimental conditions like the degree of sub-cooling, super heat and the heat flux. An attempt has been made to compare the data obtained with the existing data and the correlations in the literature to generate an exhaustive database for the pool boiling conditions.

Keywords: experiment, boiling, bubbles, bubble dynamics, pool boiling

Procedia PDF Downloads 274
396 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria

Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene

Abstract:

As the drive towards clean, renewable and sustainable energy generation is gradually being reshaped by renewable penetration over time, energy storage has become an optimal solution for utilities looking to reduce transmission and capacity costs; capacity resources therefore need to be adjusted accordingly so that renewable energy storage may have the opportunity to substitute for retiring conventional energy systems with higher capacity factors. Considering the Nigerian scenario, where over 80% of current primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydro power, solar and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast the future energy storage capacity rating and thus evaluate the load and resource scenario in Nigeria. In doing so, we use scenario-based International Energy Agency models, and the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity. The general takeaway from the current study is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources prior to a substantial reduction in capacity ratings. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analytical work in the current survey will help to determine whether and when long-duration storage becomes an integral component of the capacity mix expected in Nigeria by 2030.

Keywords: capacity, energy, power system, storage

Procedia PDF Downloads 11
395 Scrutinizing the Effective Parameters on Cuttings Movement in Deviated Wells: Experimental Study

Authors: Siyamak Sarafraz, Reza Esmaeil Pour, Saeed Jamshidi, Asghar Molaei Dehkordi

Abstract:

Cuttings transport is one of the major problems in directional and extended-reach oil and gas wells. Lack of sufficient attention to this issue may bring troubles such as difficulty in casing running, stuck pipe, excessive torque and drag, hole pack-off, bit wear, decreased rate of penetration (ROP), increased equivalent circulating density (ECD) and logging problems. Since it is practically impossible to directly observe the behavior of deep wells, a test setup was designed to investigate cuttings transport phenomena. This experimental work was carried out to scrutinize the behavior of the effective variables in cuttings transport. The test setup contained a 17-foot-long test section incorporating a 3.28-foot-long transparent glass pipe with a 3-inch diameter, a storage tank with 100 liters capacity, a rotating stainless steel drill pipe with a 1.25-inch diameter, a pump to circulate the drilling fluid, a valve to adjust the flow rate, a bit, and a camera to record all events, which were then converted to RGB images via the Image Processing Toolbox. After preparation of the test process, each test was performed separately, and the weights of the output particles were measured and compared with each other. Observation charts were plotted to assess the behavior of viscosity, flow rate and RPM at inclinations of 0°, 30°, 60° and 90°. RPM was explored together with other variables such as flow rate and viscosity at different angles. Also, the effect of different flow rates was investigated under directional conditions. To obtain precise results, the captured images were analyzed to determine bed thickening and particle behavior in the annulus. The results of this experimental study demonstrate that drill string rotation helps keep particles in suspension and reduces particle deposition, so that cuttings movement increased significantly. By raising the fluid velocity, laminar flow was converted to turbulent flow in the annulus. Increasing the flow rate in the horizontal section, combined with a lower range of viscosity, is more effective and improves cuttings transport performance.

Keywords: cutting transport, directional drilling, flow rate, hole cleaning, pipe rotation

Procedia PDF Downloads 259
394 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear to be stable in their natural state, normally dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large scale sub-surface soil exploration purposes, the spatial sub-surface soil dielectric data over wide areas and high depths of weak (collapsible) soil deposits can be obtained using non-destructive high frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small or large scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions and a good match between the model prediction and experimental results is obtained.

Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence

Procedia PDF Downloads 334
393 Productivity and Household Welfare Impact of Technology Adoption: A Microeconometric Analysis

Authors: Tigist Mekonnen Melesse

Abstract:

Since rural households are basically entitled to food through their own production, improving productivity might enhance the welfare of the rural population through higher food availability at the household level and a lower price of agricultural products. Increasing agricultural productivity through the use of improved technology is one of the desired outcomes of sensible food security and agricultural policy. The ultimate objective of this study was to evaluate the potential impact of improved agricultural technology adoption on smallholders' crop productivity and welfare. The study is conducted in Ethiopia, covering 1500 rural households drawn from four regions and 15 rural villages, based on data collected by the Ethiopian Rural Household Survey. An endogenous treatment effect model is employed in order to account for the selection bias in the adoption decision that is expected from the self-selection of households into technology adoption. The treatment indicator, technology adoption, is a binary variable indicating whether the household used improved seeds and chemical fertilizer or not. The outcome variables were cereal crop productivity, measured as the real value of production, and household welfare, measured as real per capita consumption expenditure. Results of the analysis indicate that there is a positive and significant effect of improved technology use on rural households' crop productivity and welfare in Ethiopia. Adoption of improved seeds and chemical fertilizer alone will increase crop productivity by 7.38 and 6.32 percent per year, respectively. Adoption of such technologies is also found to improve households' welfare by 1.17 and 0.25 percent per month, respectively. The combined effect of both technologies when adopted jointly is an increase in crop productivity of 5.82 percent and an improvement in welfare of 0.42 percent. Besides, the educational level of the household head, farm size, labor use, participation in extension programs, expenditure on inputs and number of oxen positively affect crop productivity and household welfare, while large household size negatively affects household welfare. In our estimation, the average treatment effect of technology adoption on the treated (ATET) is the same as the average treatment effect (ATE). This implies that the average predicted outcome for the treatment group is similar to the average predicted outcome for the whole population.
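
To make the idea of correcting for self-selection concrete, the sketch below shows one common control-function approach to a binary endogenous treatment (a first-stage adoption model whose residual information enters the outcome equation). It is an illustration of the general estimation strategy, not the authors' exact specification; the file name and variable names (adopt, log_output, educ, farm_size, extension, oxen, labor) are hypothetical.

```python
# Control-function sketch for an endogenous binary treatment (technology adoption).
# Illustrative only: not the authors' exact model; all variable names are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("household_survey.csv")          # hypothetical survey file

# Stage 1: probit for the adoption decision (improved seed / fertilizer use)
Z = sm.add_constant(df[["educ", "farm_size", "extension", "oxen"]])
probit = sm.Probit(df["adopt"], Z).fit()
xb = probit.fittedvalues                          # linear predictor of the probit

# Inverse Mills ratios act as the control term for selection on unobservables
imr = np.where(df["adopt"] == 1,
               norm.pdf(xb) / norm.cdf(xb),
               -norm.pdf(xb) / (1 - norm.cdf(xb)))

# Stage 2: outcome equation (e.g., log crop output) with the control term added
X = sm.add_constant(pd.concat([df[["adopt", "educ", "farm_size", "labor"]],
                               pd.Series(imr, name="imr")], axis=1))
outcome = sm.OLS(df["log_output"], X).fit()
print(outcome.summary())                          # coefficient on 'adopt' ~ treatment effect
```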

Keywords: endogenous treatment effect, technologies, productivity, welfare, Ethiopia

Procedia PDF Downloads 618
392 An Overview of the Wind and Wave Climate in the Romanian Nearshore

Authors: Liliana Rusu

Abstract:

The goal of the proposed work is to provide a more comprehensive picture of the wind and wave climate in the Romanian nearshore, using the results provided by numerical models. The Romanian coastal environment is located in the western side of the Black Sea, the more energetic part of the sea, an area with heavy maritime traffic and various offshore operations. Information about the wind and wave climate in the Romanian waters is mainly based on observations at Gloria drilling platform (70 km from the coast). As regards the waves, the measurements of the wave characteristics are not so accurate due to the method used, being also available for a limited period. For this reason, the wave simulations that cover large temporal and spatial scales represent an option to describe better the wave climate. To assess the wind climate in the target area spanning 1992–2016, data provided by the NCEP-CFSR (U.S. National Centers for Environmental Prediction - Climate Forecast System Reanalysis) and consisting in wind fields at 10m above the sea level are used. The high spatial and temporal resolution of the wind fields is good enough to represent the wind variability over the area. For the same 25-year period, as considered for the wind climate, this study characterizes the wave climate from a wave hindcast data set that uses NCEP-CFSR winds as input for a model system SWAN (Simulating WAves Nearshore) based. The wave simulation results with a two-level modelling scale have been validated against both in situ measurements and remotely sensed data. The second level of the system, with a higher resolution in the geographical space (0.02°×0.02°), is focused on the Romanian coastal environment. The main wave parameters simulated at this level are used to analyse the wave climate. The spatial distributions of the wind speed, wind direction and the mean significant wave height have been computed as the average of the total data. As resulted from the amount of data, the target area presents a generally moderate wave climate that is affected by the storm events developed in the Black Sea basin. Both wind and wave climate presents high seasonal variability. All the results are computed as maps that help to find the more dangerous areas. A local analysis has been also employed in some key locations corresponding to highly sensitive areas, as for example the main Romanian harbors.

Keywords: numerical simulations, Romanian nearshore, waves, wind

Procedia PDF Downloads 315
391 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique

Authors: Pavana Basavakumar, Devadas Bhat

Abstract:

Diabetes is a disease which often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences which could lead to the death of the patient. Conventional invasive tests for screening of diabetes are mostly painful, time-consuming and expensive. There is also a risk of infection involved; therefore, it is essential to develop non-invasive methods to screen for diabetes and estimate the level of blood glucose. Extensive research is going on with this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition based on the measured impedance. The prototype developed passes a negligible constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz – 4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects, with permission from the institutional ethics committee; along with the impedance, the subjects' blood glucose values were also noted using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were depicted on Clarke's Error Grid, only 58% of the predicted values were clinically acceptable. Since the objective of the project was to screen for diabetes and not the actual estimation of blood glucose, the data were classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL' and 'HIGH', using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype was economical, fast and pain-free. Thus, it can be used for mass screening of diabetes.
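
A minimal sketch of the final classification step described above: impedance-derived features mapped to the three screening classes with a linear SVM. The CSV layout and feature column names are assumptions for illustration; the actual feature extraction is the one described in the abstract.

```python
# Sketch of classifying impedance features into three screening classes with a linear SVM.
# File name and feature columns are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

data = pd.read_csv("impedance_features.csv")                 # hypothetical file
X = data[["z_10kHz", "z_100kHz", "z_1MHz", "phase_slope"]]   # assumed feature columns
y = data["label"]                 # NORMAL FASTING / NORMAL POSTPRANDIAL / HIGH

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X_tr, y_tr)
print("classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```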

Keywords: Clarke’s error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes

Procedia PDF Downloads 306
390 Semantic-Based Collaborative Filtering to Improve Visitor Cold Start in Recommender Systems

Authors: Baba Mbaye

Abstract:

In collaborative filtering recommendation systems, a user receives suggested items based on the opinions and evaluations of a community of users. This type of recommendation system uses only the information (ratings as numerical values) contained in a usage matrix as input data. This matrix can be constructed based on users' behaviors or by asking users to declare their opinions on the items they know. The cold start problem leads to very poor performance for new users. It is a phenomenon that occurs at the beginning of use, in the situation where the system lacks data to make recommendations. There are three types of cold start problems: cold start for a new item, for a new system, and for a new user. In this article, we are interested in the cold start for a new user. When the system welcomes a new user, the profile exists but does not have enough data, and its community relationships with other user profiles are still unknown. This leads to recommendations not adapted to the profile of the new user. In this paper, we propose an approach that improves cold start by using the notions of similarity and semantic proximity between user profiles during cold start. We use the available cold metadata (metadata extracted from the new user's data), which is useful in positioning the new user within a community. The aim is to look for similarities and semantic proximities with the old and current user profiles of the system. Proximity is represented by close concepts considered to belong to the same group, while similarity groups together elements that appear similar. Similarity and proximity are two close but distinct notions. This leads us to a construction of similarity based on: a) the concepts (properties, terms, instances) independent of the ontology structure and b) the simultaneous representation of the two concepts (relations, presence of terms in a document, simultaneous presence of the authorities). We propose an ontology, OIVCSRS (Ontology of Improvement Visitor Cold Start in Recommender Systems), in order to structure the terms and concepts representing the meaning of an information field, whether through the metadata of a namespace or the elements of a knowledge domain. This approach allows us to automatically attach the new user to a user community, partially compensate for the data that was not initially provided, and ultimately associate a better first profile with the cold start. Thus, the aim of this paper is to propose an approach to improving cold start using semantic technologies.
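
As a toy illustration of positioning a new user by similarity to existing profiles, the sketch below reduces profiles to concept-weight vectors and attaches the new user to the closest one; the real approach uses semantic proximity through the OIVCSRS ontology, which this simplified cosine measure does not capture, and all vectors are invented.

```python
# Toy sketch: attach a new (cold-start) user to the most similar existing profile.
# Concept vectors are invented; semantic proximity via the ontology is not modelled here.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical concept space: each axis is a concept/term from the user metadata
existing_profiles = {
    "user_A": np.array([0.9, 0.1, 0.0, 0.3]),
    "user_B": np.array([0.1, 0.8, 0.6, 0.0]),
    "user_C": np.array([0.2, 0.7, 0.5, 0.1]),
}
new_user = np.array([0.15, 0.75, 0.55, 0.05])     # built from cold metadata only

scores = {name: cosine(new_user, vec) for name, vec in existing_profiles.items()}
community = max(scores, key=scores.get)
print(scores, "->", community)                    # attach the new user to this community
```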

Keywords: visitor cold start, recommender systems, collaborative filtering, semantic filtering

Procedia PDF Downloads 196
389 Marketing and Pharmaceutical Analysis of Medical Cosmetics in Bulgaria and Japan

Authors: V. Petkova, V. Valchanova, D. Grekova, K. Andreevska, S. T. Geurguiev, V. Madgarov, D. Grekov

Abstract:

Introduction: The production, distribution and sale of cosmetics is a global industry that plays a key role in the European Union (EU), the US and Japan. The EU is a major participant, whose cosmetics market is greater than that of the US and two times greater than that of Japan. The output value of the cosmetics industry in the EU was estimated at about €35 billion in 2001. Nearly 5 billion cosmetic products (number of packages) are sold annually in the EU, and the main markets are France, Germany, Italy, Spain and the UK. The aim of the study is a legal and marketing analysis of cosmetic products dispensed in pharmacies. Materials and methodology: Historical legislative analysis - this method is applied in the analysis of changes in the legislative regulation of the activities of cosmetic products in Japan and Bulgaria. Comparative legislative analysis - this method is applied when comparing the legislative requirements for cosmetic products in the two countries. Both methods are applied to the following regulations: 1) the Japanese Pharmaceutical Affairs Law, Tokyo, Japan, Ministry of Health, Labour and Welfare; 2) the Law on Medicinal Products for Human Use, effective from 3.01.2014. Results: The legislative frameworks for cosmetic products in Bulgaria and Japan are close and generally include common guidelines: definition of a medicinal product; categorization of drugs (with differences in sub-categories); pre-registration and marketing approval by the competent authorities; compulsory compliance with GMP (unlike cosmetics); regulatory focus on product quality, efficacy and safety; obligations for labeling of such products; established pharmacovigilance systems and commitment of all parties - industry and health professionals. The main similarities in the regulation of products classified as cosmetics are in the following segments: full producer responsibility for product safety; market surveillance by regulatory authorities; no need for pre-registration or pre-marketing approval (a basic requirement for notification); no restrictions on sales channels; GMP manuals for cosmetics; regulatory focus on product safety (rather than efficacy); general requirements for labeling. The main differences in the regulation of products classified as cosmetics lie in the details of the regulation of cosmetic products. Future convergence of the regulatory frameworks can contribute to the removal of barriers to trade and encourage innovation, while simultaneously ensuring a high level of protection of consumer safety.

Keywords: cosmetics, legislation, comparative analysis, Bulgaria, Japan

Procedia PDF Downloads 573
388 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame

Authors: Ardalan Sabamehr, Ashutosh Bagchi

Abstract:

Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields such as civil, aeronautical and mechanical engineering. Operational Modal Analysis (OMA) has been developed to identify the modal properties of infrastructure such as bridges, buildings and so on. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI) and the Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI, and CWT operate in the frequency domain, the time domain, and the time-frequency plane, respectively. Thus, neither FDD nor SSI is able to display time and frequency at the same time. Moreover, FDD and SSI have some difficulties in noisy environments and in finding closely spaced modes. The CWT technique, which is currently being developed, works on the time-frequency plane and shows reasonable performance under such conditions. Another advantage of the wavelet transform over other current techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare the three most common modal identification techniques in finding the modal properties (such as natural frequency, mode shape, and damping ratio) of a three-story steel frame, built in the Concordia University lab, using ambient vibration. The frame is made of galvanized steel with a 60 cm length, 27 cm width and 133 cm height, with no bracing along the long or short span. Three uniaxial wired accelerometers (MicroStrain, with 100 mV/g accuracy) were attached to the middle of each floor, and a gateway receives the data and sends them to the PC using Node Commander software. Real-time monitoring was performed for 20 seconds with a 512 Hz sampling rate. The test was repeated five times in each direction using hand shaking and an impact hammer. CWT is able to detect the instantaneous frequency by use of a ridge detection method. In this paper, a partial-derivative ridge detection technique has been applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The extracted results from all three methods have been compared, demonstrating that CWT has better performance in terms of accuracy in a noisy environment. The modal parameters such as natural frequency, damping ratio and mode shapes are identified from all three methods.
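
A minimal sketch of the CWT-based instantaneous-frequency extraction discussed above, using PyWavelets and a simple maximum-modulus ridge search: the file name, the Morlet wavelet choice and the scale range are assumptions, and the simplified argmax ridge stands in for the partial-derivative ridge detector used in the paper.

```python
# Sketch: instantaneous frequency from an ambient-vibration record via CWT ridge search.
# Assumptions: file name, 'morl' wavelet, scale range; the paper's ridge detector differs.
import numpy as np
import pywt

fs = 512.0                                   # Hz, sampling rate as in the test setup
signal = np.loadtxt("floor3_accel.txt")      # hypothetical acceleration record

scales = np.arange(2, 256)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)

# Ridge: at each time instant, take the scale with maximum |CWT| magnitude
ridge_idx = np.argmax(np.abs(coefs), axis=0)
inst_freq = freqs[ridge_idx]                 # instantaneous frequency estimate, Hz

print("median identified frequency: %.2f Hz" % np.median(inst_freq))
```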

Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform

Procedia PDF Downloads 272
387 Wind Generator Control in Isolated Site

Authors: Glaoui Hachemi

Abstract:

Wind has been proven to be a cost-effective and reliable energy source. Technological advancements over the last years have placed wind energy in a firm position to compete with conventional power generation technologies. Algeria has a vast uninhabited land area in which the south (desert) represents the greatest part, with a considerable wind regime. In this paper, an analysis of wind energy utilization as a viable energy substitute at six selected sites widely distributed over the south of Algeria is presented. Wind speed frequency distribution data obtained from the Algerian Meteorological Office are used to calculate the average wind speed and the available wind power. The annual energy produced by the Fuhrlander FL 30 wind machine is obtained using two methods. The analysis shows that in southern Algeria, at 10 m height, the available wind power varies between 160 and 280 W/m2, except for Tamanrasset. The highest potential wind power was found at Adrar, where the wind speed is above 3 m/s for 88% of the time. Besides, it is found that the annual wind energy generated by that machine lies between 33 and 61 MWh, except for Tamanrasset, with only 17 MWh. Since wind turbines are usually installed at a height greater than 10 m, an increased output of wind energy can be expected. The wind resource therefore appears to be suitable for power production in the south, and it could provide a viable substitute for diesel oil for irrigation pumps and electricity generation. In this paper, a model of the wind turbine (WT) with a permanent magnet synchronous generator (PMSG) and its associated controllers is also presented. The increase of wind power penetration in power systems has meant that conventional power plants are gradually being replaced by wind farms, and today wind farms are required to participate actively in power system operation in the same way as conventional power plants. Power system operators have revised the grid connection requirements for wind turbines and wind farms, and now demand that these installations be able to carry out more or less the same control tasks as conventional power plants. For dynamic power system simulations, the PMSG wind turbine model includes an aerodynamic rotor model, a lumped-mass representation of the drive train system and a generator model. We propose a model, with an implementation in MATLAB/Simulink, of each of the system components of off-grid small wind turbines.
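
To illustrate the two resource quantities quoted above (available wind power density and annual energy of a ~30 kW class machine), the following sketch works from a wind-speed record; the record file, the air density, and the very rough placeholder power curve are all assumptions, not the FL 30 specification or the Algerian data.

```python
# Sketch: wind power density and annual turbine energy from an hourly wind-speed record.
# The file, air density and placeholder power curve are assumptions for illustration.
import numpy as np

rho = 1.225                                   # kg/m^3, standard air density (assumed)
v = np.loadtxt("hourly_wind_speed_10m.txt")   # hypothetical hourly wind speeds, m/s

power_density = 0.5 * rho * np.mean(v ** 3)   # W/m^2, available wind power
print("mean available wind power: %.0f W/m^2" % power_density)

def turbine_power_kw(speed):
    """Very rough placeholder power curve for a ~30 kW class machine (assumed)."""
    if speed < 3.0 or speed > 25.0:           # below cut-in or above cut-out
        return 0.0
    return min(30.0, 30.0 * ((speed - 3.0) / (12.0 - 3.0)) ** 3)  # rated near 12 m/s

annual_energy_mwh = sum(turbine_power_kw(s) for s in v) / 1000.0  # kWh -> MWh (hourly data)
print("estimated annual energy: %.1f MWh" % annual_energy_mwh)
```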

Keywords: windgenerator systems, permanent magnet synchronous generator (PMSG), wind turbine (WT) modeling, MATLAB simulink environment

Procedia PDF Downloads 317
386 Test Method Development for Evaluation of Process and Design Effect on Reinforced Tube

Authors: Cathal Merz, Gareth O’Donnell

Abstract:

Coil reinforced thin-walled (CRTW) tubes are used in medicine to treat problems affecting blood vessels within the body through minimally invasive procedures. The CRTW tube considered in this research makes up part of such a device and is inserted into the patient via their femoral or brachial arteries and manually navigated to the site in need of treatment. This procedure replaces the requirement to perform open surgery but is limited by reduction of blood vessel lumen diameter and increase in tortuosity of blood vessels deep in the brain. In order to maximize the capability of these procedures, CRTW tube devices are being manufactured with decreasing wall thicknesses in order to deliver treatment deeper into the body and to allow passage of other devices through its inner diameter. This introduces significant stresses to the device materials which have resulted in an observed increase in the breaking of the proximal segment of the device into two separate pieces after it has failed by buckling. As there is currently no international standard for measuring the mechanical properties of these CRTW tube devices, it is difficult to accurately analyze this problem. The aim of the current work is to address this discrepancy in the biomedical device industry by developing a measurement system that can be used to quantify the effect of process and design changes on CRTW tube performance, aiding in the development of better performing, next generation devices. Using materials testing frames, micro-computed tomography (micro-CT) imaging, experiment planning, analysis of variance (ANOVA), T-tests and regression analysis, test methods have been developed for assessing the impact of process and design changes on the device. The major findings of this study have been an insight into the suitability of buckle and three-point bend tests for the measurement of the effect of varying processing factors on the device’s performance, and guidelines for interpreting the output data from the test methods. The findings of this study are of significant interest with respect to verifying and validating key process and design changes associated with the device structure and material condition. Test method integrity evaluation is explored throughout.
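
As a small illustration of the statistical comparisons named above (ANOVA, t-tests) applied to buckle-test output, the sketch below compares peak buckle loads across hypothetical process settings; the group labels and load values are invented placeholders, not study data.

```python
# Sketch: compare buckle-test results across process settings with one-way ANOVA and a t-test.
# All values and group names are invented placeholders.
from scipy import stats

buckle_load_n = {                     # peak buckle load per device, N (hypothetical)
    "process_A": [4.1, 4.3, 4.0, 4.4, 4.2],
    "process_B": [3.6, 3.8, 3.7, 3.9, 3.6],
    "process_C": [4.0, 4.1, 3.9, 4.2, 4.0],
}

f_stat, p_anova = stats.f_oneway(*buckle_load_n.values())
t_stat, p_ttest = stats.ttest_ind(buckle_load_n["process_A"], buckle_load_n["process_B"])

print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"t-test A vs B: t = {t_stat:.2f}, p = {p_ttest:.4f}")
```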

Keywords: neurovascular catheter, coil reinforced tube, buckling, three-point bend, tensile

Procedia PDF Downloads 95
385 Pragmatic Development of Chinese Sentence Final Particles via Computer-Mediated Communication

Authors: Qiong Li

Abstract:

This study investigated in which condition computer-mediated communication (CMC) could promote pragmatic development. The focal feature included four Chinese sentence final particles (SFPs), a, ya, ba, and ne. They occur frequently in Chinese, and function as mitigators to soften the tone of speech. However, L2 acquisition of SFPs is difficult, suggesting the necessity of additional exposure to or explicit instruction on Chinese SFPs. This study follows this line and aims to explore two research questions: (1) Is CMC combined with data-driven instruction more effective than CMC alone in promoting L2 Chinese learners’ SFP use? (2) How does L2 Chinese learners’ SFP use change over time, as compared to the production of native Chinese speakers? The study involved 19 intermediate-level learners of Chinese enrolled at a private American university. They were randomly assigned to two groups: (1) the control group (N = 10), which was exposed to SFPs through CMC alone, (2) the treatment group (N = 9), which was exposed to SFPs via CMC and data-driven instruction. Learners interacted with native speakers on given topics through text-based CMC over Skype. Both groups went through six 30-minute CMC sessions on a weekly basis, with a one-week interval after the first two CMC sessions and a two-week interval after the second two CMC sessions (nine weeks in total). The treatment group additionally received a data-driven instruction after the first two sessions. Data analysis focused on three indices: token frequency, type frequency, and acceptability of SFP use. Token frequency was operationalized as the raw occurrence of SFPs per clause. Type frequency was the range of SFPs. Acceptability was rated by two native speakers using a rating rubric. The results showed that the treatment group made noticeable progress over time on the three indices. The production of SFPs approximated the native-like level. In contrast, the control group only slightly improved on token frequency. Only certain SFPs (a and ya) reached the native-like use. Potential explanations for the group differences were discussed in two aspects: the property of Chinese SFPs and the role of CMC and data-driven instruction. Though CMC provided the learners with opportunities to notice and observe SFP use, as a feature with low saliency, SFPs were not easily noticed in input. Data-driven instruction in the treatment group directed the learners’ attention to these particles, which facilitated the development.

Keywords: computer-mediated communication, data-driven instruction, pragmatic development, second language Chinese, sentence final particles

Procedia PDF Downloads 393
384 Prosodic Realization of Focus in the Public Speeches Delivered by Spanish Learners of English and English Native Speakers

Authors: Raúl Jiménez Vilches

Abstract:

Native (L1) speakers can mark prosodically one part of an utterance and make it more relevant as opposed to the rest of the constituents. Conversely, non-native (L2) speakers encounter problems when it comes to marking prosodically information structure in English. In fact, the L2 speaker’s choice for the prosodic realization of focus is not so clear and often obscures the intended pragmatic meaning and the communicative value in general. This paper reports some of the findings obtained in an L2 prosodic training course for Spanish learners of English within the context of public speaking. More specifically, it analyses the effects of the course experiment in relation to the non-native production of the tonic syllable to mark focus and compares it with the public speeches delivered by native English speakers. The whole experimental training was executed throughout eighteen input sessions (1,440 minutes total time) and all the sessions took place in the classroom. In particular, the first part of the course provided explicit instruction on the recognition and production of the tonic syllable and how the tonic syllable is used to express focus. The non-native and native oral presentations were acoustically analyzed using Praat software for speech analysis (7,356 words in total). The investigation adopted mixed and embedded methodologies. Quantitative information is needed when measuring acoustically the phonetic realization of focus. Qualitative data such as questionnaires, interviews, and observations were also used to interpret the quantitative data. The embedded experiment design was implemented through the analysis of the public speeches before and after the intervention. Results indicate that, even after the L2 prosodic training course, Spanish learners of English still show some major inconsistencies in marking focus effectively. Although there was occasional improvement regarding the choice for location and word classes, Spanish learners were, in general, far from achieving similar results to the ones obtained by the English native speakers in the two types of focus. The prosodic realization of focus seems to be one of the hardest areas of the English prosodic system to be mastered by Spanish learners. A funded research project is in the process of moving the present classroom-based experiment to an online environment (mobile app) and determining whether there is a more effective focus usage through CAPT (Computer-Assisted Pronunciation) tools.

Keywords: focus, prosody, public speaking, Spanish learners of English

Procedia PDF Downloads 71
383 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, which seriously hampers processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured over time at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time it takes the sample to attain a peak temperature and the time duration over a fixed diffusion distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water. Error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/m°C and thermal diffusivity values of 1.053×10⁻⁷, 1.086×10⁻⁷ and 0.959×10⁻⁷ m²/s were obtained for banana, orange and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, as it measured the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p>0.05) between the literature standards and the estimated averages of each sample investigated with the developed instrument.
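
A minimal sketch of the line-heat-source analysis described above: thermal conductivity from the slope of temperature rise versus ln(time), k = q' / (4π·slope), with the stated power per unit length q' = 16.33 W/m. The data file and the choice of the late-time fitting window are illustrative assumptions.

```python
# Sketch: thermal conductivity from a line-heat-source (transient probe) record.
# k = q' / (4*pi*slope), with slope from dT vs ln(t); file and fit window are assumed.
import numpy as np

q_line = 16.33                                  # W/m, power input per unit probe length (stated)
t = np.arange(4, 244, 4, dtype=float)           # s, readings every 4 s for 240 s
dT = np.loadtxt("heater_probe_rise.txt")        # hypothetical temperature-rise record, K

# Fit only the late-time, straight-line portion of dT versus ln(t)
mask = t > 60.0
slope, intercept = np.polyfit(np.log(t[mask]), dT[mask], 1)

k = q_line / (4.0 * np.pi * slope)              # W/(m*K)
print("thermal conductivity: %.3f W/m.degC" % k)
```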

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 329
382 Simulation of Optimum Sculling Angle for Adaptive Rowing

Authors: Pornthep Rachnavy

Abstract:

The purpose of this paper is twofold. First, we believe that there is a significant relationship between sculling angle and sculling style in adaptive rowing. Second, we introduce a methodology, namely simulation, to identify the effectiveness of adaptive rowing. For our study we simulate the arms-only single scull of adaptive rowing. The method for rowing fastest over the 1000 meter distance was investigated by studying the sculling angle using simulation modeling. A simulation model of a rowing system was developed using the Matlab software package, based on equations of motion that include many variables for moving the boat, such as oar length, blade velocity and sculling style. The boat speed, power and energy consumption of the system were computed. This simulation model can predict the forces acting on the boat. The optimum sculling angle was obtained by computer simulation. Inputs to the model are the sculling style of each rower and the sculling angle; the output of the model is the boat velocity over 1000 meters. The present study suggests that an optimum sculling angle exists and that it depends on the sculling style. The optimum angles for blade entry and release with respect to the perpendicular through the pin are -57.00 and 22.0 degrees for the first style, -57.00 and 22.0 degrees for the second style, -51.57 and 28.65 degrees for the third style, and -45.84 and 34.38 degrees for the fourth style. A theoretical simulation of rowing has been developed and presented. The results suggest that it may be advantageous for rowers to select sculling angles appropriate to their sculling styles, since the optimum sculling angles depend on the sculling style used by each rower. The findings of this paper can be summarized in three points: 1. There is an optimum sculling angle in the arms-only single scull of adaptive rowing. 2. The optimum sculling angles depend on the sculling styles. 3. Computer simulation of rowing can identify opportunities for improving rowing performance by utilizing the kinematic description of rowing. The freedom to explore alternatives in speed, thrust and timing with the computer simulation will provide the coach with a tool for systematic assessment of rowing technique. In addition, the ability to use the computer to examine the very complex movements during rowing will help both the rower and the coach to conceptualize the components of movement that may have been previously unclear or even undefined.

Keywords: simulation, sculling, adaptive, rowing

Procedia PDF Downloads 443
381 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers already appeared in the literature which measured the impact of public services (e.g., hospitals, schools, ...) on the citizens’ needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where they live and the location on the territory of the services. Those studies assume that the citizens' dwelling coincides with the centroid of the polygon that expresses the boundary of the administrative district, within the city, they belong to. Such an assumption “introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district.”. The case study, this abstract reports about, investigates the implications descending from the adoption of such an approach but at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, it needs to be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundary of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution and, eventually, b.4) archiving the results in a permanent support. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit well to the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry) province(id, name, regionId, geometry) region(id, name, geometry). The adoption of the DBMS technology allows us to implement the steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and spatial built-in User Defined Functions within SQL queries against the Geo DB. The major findings coming from our experiments can be summarized as follows. The approximation that, on the average, descends from assimilating the residence of the citizens with the centroid of the administrative unit of reference is of few kilometers (4.9) at the municipalities level, while it becomes conspicuous at the other two levels (28.9 and 36.1, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
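
Given the schema above, the worst-case "centroid error" per administrative unit can be obtained directly in the database; the sketch below runs such a query from Python against the municipality table using ST_Centroid, ST_Boundary and ST_MaxDistance. The connection parameters are placeholders, and the query assumes the geometry column is stored in a projected CRS with metric units.

```python
# Sketch: maximum centroid-to-border distance per municipality from the Geo DataBase.
# Connection details are placeholders; a metric, projected CRS is assumed for the geometry.
import psycopg2

conn = psycopg2.connect(dbname="italy_admin", user="gis", password="***", host="localhost")
cur = conn.cursor()

cur.execute("""
    SELECT name,
           ST_MaxDistance(ST_Centroid(geometry), ST_Boundary(geometry)) / 1000.0 AS max_err_km
    FROM   municipality
    ORDER  BY max_err_km DESC
    LIMIT  10;
""")
for name, max_err_km in cur.fetchall():
    print(f"{name}: worst-case centroid error {max_err_km:.1f} km")

cur.close()
conn.close()
```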

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 348
380 Integrated Performance Management System a Conceptual Design for PT. XYZ

Authors: Henrie Yunianto, Dermawan Wibisono

Abstract:

PT. XYZ is a family business (private company) in Indonesia that provides educational programs and consultation services. Since its establishment in 2011, the company has run without any strategic management system implemented, though it has survived until now. The management of PT. XYZ sees that the business opportunity for such a product is huge: even though the targeted market is very specific (a niche), the volume is large (due to the large population of Indonesia) and the number of competitors is currently low. It can be said that the product life cycle is between the 'introduction' and 'growth' stages. It is observed that the number of new entrants (competitors) is now increasing; thus, PT. XYZ is considering how to face the intense business rivalry by conducting the business in an appropriate manner. A performance management system is important to implement for business sustainability and growth. The framework chosen is the Integrated Performance Management System (IPMS). The IPMS framework has the advantages of simplicity and of linkage between its business variables and indicators, so the company can see the connections between all factors measured. The IPMS framework consists of the perspectives: (1) Business Result, (2) Internal Processes, (3) Resource Availability. Variables and indicators were examined through deep analysis of the business external and internal environments, Strength-Weakness-Opportunity-Threat (SWOT) analysis, and Porter's five forces analysis. Analytical Hierarchy Process (AHP) analysis was then used to quantify the weight of each variable/indicator. AHP is needed since, in this study of PT. XYZ, data on existing performance indicators were not available. Later, once the IPMS is implemented, the real measured data can be examined to determine the weight factor of each indicator using correlation analysis (or other methods). In this IPMS design study for PT. XYZ, the analysis shows that, with the current company goals and the AHP methodology, the critical indicators for each perspective are: (1) Business Results: customer satisfaction and employee satisfaction; (2) Internal Processes: marketing performance, supplier quality, production quality, and continuous improvement; (3) Resource Availability: leadership and company culture and values, personal competences, and productivity. Companies and organizations require a performance management system to help them achieve their vision and mission. Company strategy will be effectively defined and addressed by using a performance management system. The Integrated Performance Management System (IPMS) framework and AHP analysis help us quantify the factors which influence the expected business output.
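
The AHP weighting step mentioned above can be illustrated with a small sketch: priority weights are the principal eigenvector of a pairwise comparison matrix, with a consistency check. The 3x3 judgment matrix below (e.g., comparing the three IPMS perspectives) is an invented example, not PT. XYZ data.

```python
# Sketch of AHP weighting: priority vector from the principal eigenvector of a pairwise
# comparison matrix, plus a consistency ratio. The judgments below are invented.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],        # pairwise judgments on Saaty's 1-9 scale
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()              # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), " CR = %.3f" % cr)
```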

Keywords: analytical hierarchy process, business strategy, differentiation strategy, integrated performance management system

Procedia PDF Downloads 286
379 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept

Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani

Abstract:

Particle size distribution, the most important characteristics of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of electric field in electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by flow conditions, geometry, electric field and particle charging process, therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain particle trajectories in the device and therefore to calculate the reported signal by each electrometer. According to the output signals (resulted from bombardment of particles and transferring their charges as currents), we proposed a modification to the size of detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information contents about size distribution of the injected particles, we proposed a benchmark for the assessment of optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, according to the Shannon entropy, is the ''average amount of information contained in an event, sample or character extracted from a data stream''. Evaluating the responses (signals) which were obtained via various configurations of detecting rings, the best configuration which gave the best predictions about the size distributions of injected particles, was the modified configuration. It was also the one that had the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced to the simulations and predicted size distributions were compared to the exact size distributions.
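
The entropy-based figure of merit described above can be sketched as follows: the instrument's transfer matrix (size bin to electrometer channel) is normalized and its Shannon entropy computed, and repeating this for each candidate ring layout gives the ranking used as the benchmark. The small matrix below is an invented placeholder, not simulated EMS data, and the Von Neumann variant is not reproduced here.

```python
# Sketch: Shannon entropy of a normalized transfer matrix as a configuration figure of merit.
# The matrix is an invented placeholder; the paper also uses a Von Neumann entropy analogue.
import numpy as np

def shannon_entropy(T):
    """Shannon entropy (bits) of a non-negative matrix normalized to sum to 1."""
    p = np.asarray(T, dtype=float).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

T_config = np.array([[0.70, 0.20, 0.05],      # hypothetical transfer matrix of one
                     [0.25, 0.60, 0.25],      # detecting-ring configuration
                     [0.05, 0.20, 0.70]])

print("entropy of this configuration: %.3f bits" % shannon_entropy(T_config))
# Repeat for each candidate ring layout and compare, as in the proposed benchmark.
```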

Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von neumann entropy

Procedia PDF Downloads 314
378 The Development and Change of Settlement in Tainan County (1904-2015) Using Historical Geographic Information System

Authors: Wei Ting Han, Shiann-Far Kung

Abstract:

In early times, most of the arable land in Tainan County was dry-farmed, relying on rainfall as the water source for irrigation. After the Chia-nan Irrigation System (CIS) was completed in 1930, the Chia-nan Plain achieved a more efficient allocation of limited water sources for irrigation, owing to the benefits of irrigation systems, drainage systems, and land improvement projects. The long-standing problems of drought, flooding and salt damage were also alleviated by the CIS. The canal greatly increased the paddy field area and agricultural output, and Tainan County has become one of the important agricultural production areas in Taiwan. With the development of water conservancy facilities, and influenced by national policies and other factors, many agricultural communities and settlements were formed indirectly, which also promoted changes in settlement patterns and internal structures. With the development of historical geographic information systems (HGIS), Academia Sinica developed a WebGIS theme with the century-old maps of Taiwan, which is the most complete historical map database in Taiwan. It can be used to overlay historical maps of different periods, present the timeline of settlement change, grasp the changes in the natural environment or in the social sciences and humanities, and visualize the changes in the settlements over the areas concerned. This study explores the historical development and spatial characteristics of the settlements in various areas of Tainan County. Large-scale areas are used to explore the settlement changes and spatial patterns of the entire county, through the dynamic spatio-temporal evolution from the period of Japanese rule to the present day. Settlements of different periods are then digitized to perform overlay analysis using the Taiwan historical topographic maps of 1904, 1921, 1956 and 1989. Moreover, document analysis is used to analyze the temporal and spatial changes of the regional environment and settlement structure. In addition, a comparative analysis method is used to classify the spatial characteristics and differences between the settlements. The influence of external environments in different temporal and spatial contexts, such as government policies, major construction, and industrial development, is also explored. This paper helps to understand the evolution of settlement space and the internal structural changes in Tainan County.

Keywords: historical geographic information system, overlay analysis, settlement change, Tainan County

Procedia PDF Downloads 105
377 Effects of a Head Mounted Display Adaptation on Reaching Behaviour: Implications for a Therapeutic Approach in Unilateral Neglect

Authors: Taku Numao, Kazu Amimoto, Tomoko Shimada, Kyohei Ichikawa

Abstract:

Background: Unilateral spatial neglect (USN) is a common syndrome following damage to one hemisphere of the brain (usually the right side), in which a patient fails to report or respond to stimulation from the contralesional side. These symptoms are not due to primary sensory or motor deficits; rather, they reflect an inability to process input from that side of the environment. Prism adaptation (PA) is a therapeutic treatment for USN, wherein a patient's visual field is artificially shifted laterally, resulting in a sensory-motor adaptation. However, patients with USN also tend to perceive a left-leaning subjective vertical in the frontal plane. Traditional PA cannot be used to correct a tilt in the subjective vertical, because a prism can only displace, not rotate, the visual surroundings. This can, however, be accomplished with a head mounted display (HMD) and a web camera. This study therefore investigated whether an HMD system could be used to correct the spatial perception of USN patients in the frontal as well as the horizontal plane. We recruited healthy subjects in order to collect data for the refinement of USN patient therapy. Methods: Eight healthy subjects sat on a chair wearing an HMD (Oculus Rift DK2), with a web camera (Ovrvision) displaying a 10-degree leftward rotation and a 10-degree counter-clockwise rotation in the frontal plane. Subjects attempted to point a finger at one of four targets, assigned randomly, a total of 48 times. Before and after the intervention, each subject's body-centre judgment (BCJ) was tested by asking them to point a finger at a touch panel directly in front of their xiphisternum, 10 times, without sight. Results: The intervention caused the location pointed to during the BCJ to shift by 35 ± 17 mm (mean ± SD) leftward in the horizontal plane and 46 ± 29 mm downward in the frontal plane. The results in both planes were significant by paired t-test (p < .01). Conclusions: The results in the horizontal plane are consistent with those observed after PA. Furthermore, the HMD and web camera were able to elicit adaptation effects in both the horizontal and frontal planes. Future work will focus on applying this method to patients with and without USN, and on investigating whether subject posture is also affected by the HMD system.
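For readers interested in the statistical step, the sketch below runs a paired t-test on hypothetical pre- and post-intervention BCJ coordinates for eight subjects; all numbers are invented for illustration and are not the study's measurements.

```python
# Hedged sketch: paired t-test on pre- vs post-intervention body-centre
# judgment (BCJ) pointing positions. The data are invented (eight subjects,
# horizontal-plane coordinate in mm, rightward positive).
import numpy as np
from scipy import stats

pre_x  = np.array([  2,  -5,   4,   0,   3,  -2,   1,   5])   # before adaptation
post_x = np.array([-30, -45, -28, -40, -25, -38, -33, -41])   # after HMD adaptation

shift = post_x - pre_x
t, p = stats.ttest_rel(post_x, pre_x)
print(f"mean shift {shift.mean():.1f} ± {shift.std(ddof=1):.1f} mm, t={t:.2f}, p={p:.4f}")
```

The same test applied to the vertical (frontal-plane) coordinate would mirror the downward-shift comparison reported above.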

Keywords: head mounted display, posture, prism adaptation, unilateral spatial neglect

Procedia PDF Downloads 257
376 Learners’ Perceptions of Tertiary Level Teachers’ Code Switching: A Vietnamese Perspective

Authors: Hoa Pham

Abstract:

The literature on language teaching and second language acquisition has been largely driven by a monolingual ideology, with the common assumption that a second language (L2) is best taught and learned in the L2 only. The current study challenges this assumption by reporting learners' positive perceptions of tertiary level teachers' code switching practices in Vietnam. The findings contribute to our understanding of code switching practices in language classrooms from a learners' perspective. Data were collected through focus group interviews with student participants working towards a Bachelor's degree in English within the English for Business Communication stream. The literature has documented that this method of interviewing has a number of distinct advantages over individual student interviews. For instance, the group interactions generated by focus groups create a more natural environment than that of an individual interview because they include a range of communicative processes in which each individual may influence or be influenced by others, as they do in real life. The process of interaction provides the opportunity to obtain meanings and answers to a problem that are "socially constructed rather than individually created", leading to the capture of real-life data. This distinct feature of group interaction makes the technique a powerful means of obtaining deeper and richer data than individual interviews. The data generated through this study were analysed using a constant comparative approach. Overall, the students expressed positive views of the practice, indicating that it is a useful teaching strategy. Teacher code switching was seen as a learning resource and a source of support for language output. The practice was perceived to promote student comprehension and to aid the learning of content and target language knowledge, and it was believed to scaffold the students' language production in different contexts. However, the students indicated a preference for teacher code switching to be constrained, as extensive use was believed to negatively impact their L2 learning and to trigger cognitive reliance on the L1 for L2 learning. The students also perceived that when the L1 was used to a great extent, their ability to develop as autonomous learners was negatively affected. This study found that teacher code switching was supported by learners in certain contexts, suggesting that the widespread assumption behind the monolingual teaching approach needs to be reconsidered.

Keywords: codeswitching, L1 use, L2 teaching, learners’ perception

Procedia PDF Downloads 290
375 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies and low-power electronic devices has led to a considerable amount of electromagnetic energy being available in the environment and to the expansion of smart application technologies. These applications are used in Internet of Things devices and in 4G and 5G solutions, and their main feature is the use of wireless sensors. Although these sensors are low-power loads, powering them efficiently and reliably without traditional batteries poses major challenges. Radio-frequency energy harvesting is especially suitable for powering wireless sensors by means of a rectenna, since the rectenna can be completely integrated into the structure hosting the distributed sensors, reducing cost, maintenance and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The function of the antenna is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into a direct current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used as an inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line, designed and optimized using the CST simulation software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). To evaluate the EW performance, it was positioned at varying distances from an internet router and used to feed a 27 kΩ resistive load. The results showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very low consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which increased from 0.2 mW to 0.6 mW.
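As a rough companion to the antenna design described above, the sketch below computes first-cut rectangular patch dimensions at 2.45 GHz from the standard transmission-line design equations. The assumed paper permittivity and thickness are illustrative guesses; the actual CST-optimized geometry is not reproduced here.

```python
# Hedged sketch: first-cut patch dimensions for 2.45 GHz from the classical
# transmission-line design equations. Substrate properties are assumed values.
from math import sqrt

c  = 3e8          # speed of light, m/s
f  = 2.45e9       # design frequency, Hz
er = 2.9          # assumed relative permittivity of the paper substrate
h  = 0.44e-3      # assumed substrate thickness, m

W = c / (2 * f) * sqrt(2 / (er + 1))                        # patch width
e_eff = (er + 1) / 2 + (er - 1) / 2 / sqrt(1 + 12 * h / W)  # effective permittivity
dL = 0.412 * h * (e_eff + 0.3) * (W / h + 0.264) / ((e_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f * sqrt(e_eff)) - 2 * dL                      # patch length

print(f"W = {W*1e3:.1f} mm, L = {L*1e3:.1f} mm, eps_eff = {e_eff:.2f}")
```

Dimensions obtained this way are typically only a starting point; the final geometry and feed-line matching would still come from full-wave optimization, as in the work above.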

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 132
374 The Reasons for Failure in Writing Essays: Teaching Writing as a Project-Based Enterprise

Authors: Ewa Toloczko

Abstract:

Studies show that developing writing skills throughout years of formal foreign language instruction does not necessarily result in rewarding accomplishments among learners, nor in a positive attitude towards written assignments. This apparently widespread bias against writing may be caused by the diminished relevance students attach to it compared with the other productive skill, speaking, by the insufficient resources available for them to succeed, or by the ways writing is approached by instructors, that is, inapt teaching techniques that discourage rather than ignite learners' engagement. The assumption underlying this presentation is that psychological and psycholinguistic factors constitute a key dimension of every writing process, and hence should be seriously considered in both material design and lesson planning. The author intends to present research in which writing tasks were conceived of as attitudinal rather than technical operations, and consequently turned into meaningful and socially oriented events that students could relate to and take an active hand in. The instrument employed to achieve this purpose and to make writing more interactive was the format of a project, a carefully devised series of tasks that involved students as human beings, not only as language learners. The projects rested upon the premise that the presence of peers and the teacher in class could be drawn on in a supportive rather than evaluative mode. In fact, the research showed that collaborative work and constant negotiation of meaning reinforced not only the bonds between learners, but also the language form and structure of the output. Accordingly, the role of the teacher shifted from assessor to problem barometer, always ready to accept the slightest improvements in students' language performance. In this way, written verbal communication, which usually aims merely to demonstrate accuracy and coherent content for assessment, became part of an enterprise meant to emphasise its social aspect: the writer in a real-life setting. The sample projects show the spectrum of possibilities teachers have when exploring the domain of writing within the school curriculum. The ideas are easy to modify and adjust to all proficiency levels and ages. Initially, however, they were meant to suit teenage and young adult learners of English as a foreign language in both European and Asian contexts.

Keywords: projects, psycholinguistic/ psychological dimension of writing, writing as a social enterprise, writing skills, written assignments

Procedia PDF Downloads 215
373 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and identifying the resistant bacteria and treating the patient accordingly takes a long time. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study uses whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data for the ML models. Samples from different countries were included in order to generalize over the large group of TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. The model training considered three kinds of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and the resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as two independent feature datasets, and the third kind of information was used as the class label for both. Five machine learning algorithms were considered to train the models: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, i.e. the F1 and F2 datasets merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were run through the gradient boosting algorithm, the outputs were combined into a single dataset, called the F1F2 ensemble dataset, and models were then trained on this dataset with the five algorithms. As the experiments show, the ensemble model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, i.e. the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
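A minimal sketch of the ensemble idea follows, under the assumption that it amounts to stacking the per-block gradient boosting outputs and training a final classifier on them. The synthetic features stand in for the WGS-derived F1 and F2 variant blocks and are not the study's data; hyperparameters are illustrative.

```python
# Hedged sketch: gradient boosting on the F1 and F2 feature blocks separately,
# stacking their predicted probabilities into an "F1F2 ensemble dataset", and
# fitting a final classifier on it. Synthetic data only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 600
F1 = rng.normal(size=(n, 40))   # stand-in for candidate-gene variant features
F2 = rng.normal(size=(n, 15))   # stand-in for known resistance-associated variants
y  = (F1[:, 0] + F2[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # resistant / susceptible

F1_tr, F1_te, F2_tr, F2_te, y_tr, y_te = train_test_split(F1, F2, y, random_state=0)

gb1 = GradientBoostingClassifier(random_state=0).fit(F1_tr, y_tr)
gb2 = GradientBoostingClassifier(random_state=0).fit(F2_tr, y_tr)

# Ensemble dataset: per-sample probabilities from the two block-level models
Z_tr = np.column_stack([gb1.predict_proba(F1_tr)[:, 1], gb2.predict_proba(F2_tr)[:, 1]])
Z_te = np.column_stack([gb1.predict_proba(F1_te)[:, 1], gb2.predict_proba(F2_te)[:, 1]])

final = RandomForestClassifier(random_state=0).fit(Z_tr, y_tr)
print("ensemble accuracy:", accuracy_score(y_te, final.predict(Z_te)))
```

Swapping the final estimator for the other four algorithms reproduces, in miniature, the comparison across SVM, LR, Gradient Boosting and AdaBoost described above.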

Keywords: machine learning, MTB, WGS, drug-resistant TB

Procedia PDF Downloads 25
372 Hydrogeochemical Investigation of Lead-Zinc Deposits in Oshiri and Ishiagu Areas, South Eastern Nigeria

Authors: Christian Ogubuchi Ede, Moses Oghenenyoreme Eyankware

Abstract:

This study assessed the concentration of heavy metals (HMs) in soil, rock, mine dump piles, and water from the Oshiri and Ishiagu areas of Ebonyi State. Investigations of the mobile fraction also evaluated the geochemical condition of the different HMs, using a UV spectrophotometer for mineralized and unmineralized rocks, dumps, and soil, while AAS was used to determine the geochemistry of the water system. The analysis revealed very high Cd pollution, mostly in the Ishiagu (Ihetutu and Amaonye) active mine zones, with subordinate enrichments of Pb, Cu, As, and Zn in Amagu and Umungbala. Oshiri recorded moderate to high contamination by Cd and Mn but outright high anthropogenic input. Observations showed that most of the contamination levels were extreme relative to the control and decreased with increasing distance from the mine vicinity. The potential heavy metal risk of these environments was evaluated using risk indices such as the enrichment factor, the index of geoaccumulation (Igeo), the contamination factor, and the effect range median (ERM). Cadmium and Zn showed moderate to extreme contamination according to the Igeo, while Pb, Cd, and As indicated moderate to strong pollution according to the ERM. When compared with the allowable limits and standards, the metal concentrations followed the order Cd>Zn>Pb>As>Cu>Ni (rocks), Cd>As>Pb>Zn>Cu>Ni (soil) and Cd>Zn>As>Pb>Cu (mine dump piles). High concentrations of Zn and As were recorded mainly in mine ponds and salt line/drain channels along active mine zones; the threat is heightened during the rainy season, when these metals settle into river courses, leaving widespread contamination for inhabitants who depend on the rivers for domestic use. Moderate Pb and Cu pollution was recorded in surface/stream water sources, as their mobility is relatively low. Results from the Ishiagu crush rock sites and the Fedeco metallurgical and auto workshop, where groundwater contamination was seen infiltrating some of the well points, gave values four times higher than the allowable limits. According to WHO (2015), some of these metal concentrations, if left unmitigated, pose adverse effects on the soil and the human community.
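For reference, the sketch below computes the geoaccumulation index, contamination factor and enrichment factor for a few hypothetical concentrations; the background values and the choice of Fe as the reference element are assumptions for illustration, not the study's measured data.

```python
# Hedged sketch of the pollution indices named above, on invented values (mg/kg).
from math import log2

background = {"Cd": 0.3, "Zn": 70.0, "Pb": 20.0, "Fe": 47000.0}   # assumed Bn
sample     = {"Cd": 6.1, "Zn": 520.0, "Pb": 95.0, "Fe": 39000.0}  # assumed Cn

for metal in ("Cd", "Zn", "Pb"):
    Cn, Bn = sample[metal], background[metal]
    igeo = log2(Cn / (1.5 * Bn))                          # index of geoaccumulation
    cf   = Cn / Bn                                        # contamination factor
    ef   = (Cn / sample["Fe"]) / (Bn / background["Fe"])  # enrichment factor (Fe-normalized)
    print(f"{metal}: Igeo={igeo:.2f}, CF={cf:.1f}, EF={ef:.1f}")
```

Igeo values above 5 are conventionally read as extreme contamination, which is the class the Cd results above fall into.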

Keywords: water, geo-accumulation, heavy metals, mine, Nigeria

Procedia PDF Downloads 144
371 Development of Tutorial Courseware on Selected Topics in Mathematics, Science and the English Language

Authors: Alice D. Dioquino, Olivia N. Buzon, Emilio F. Aguinaldo, Ruel Avila, Erwin R. Callo, Cristy Ocampo, Malvin R. Tabajen, Marla C. Papango, Marilou M. Ubina, Josephine Tondo, Cromwell L. Valeriano

Abstract:

The main purpose of this study was to develop, evaluate and validate courseware on selected topics in Mathematics, Science, and the English Language. Specifically, it aimed to: 1. identify the appropriate Instructional Systems Design (ISD) model for the development of the courseware material; 2. assess the courseware material according to its: a. content characteristics; b. instructional characteristics; and c. technical characteristics; and 3. find out whether there is a significant difference in the performance of students before and after using the tutorial CAI. This research is developmental and uses a one-group pretest-posttest design. The study had two phases. Phase I included the needs analysis and the writing of lessons and storyboards by the respective experts in each field. Phase II included the digitization, or actual development, of the courseware by the faculty of the ICT department. This phase adopted an instructional systems design (ISD) model, the ADDIE model; ADDIE stands for Analysis, Design, Development, Implementation and Evaluation. Formative evaluation was conducted alongside the different phases to detect and remedy any bugs in the courseware in the areas of content, instructional and technical characteristics. The expected outputs are the digitized lessons in Algebra, Biology, Chemistry, Physics and Communication Arts in English. Students and some IT experts validated the CAI material using the Evaluation Form by Wong & Wong. They rated the CAI materials as Highly Acceptable, with an overall mean rating of 4.527 and a standard deviation of 0, which means the evaluators were unanimous in their ratings of the CAI materials. A mean gain was recorded, and the t-test for dependent samples showed that there were significant differences in the mean achievement of the students before and after the treatment (use of the CAI). The ISD model identified for the development of the tutorial courseware was the ADDIE model. The quantitative analyses of the data, based on the ratings given by the respondents, show that the tutorial courseware possesses the characteristics and qualities of a very good computer-based courseware. The ratings given by the different evaluators with regard to the content, instructional, and technical aspects of the tutorial courseware converge towards an excellent rating. Students performed better in mathematics, biology, chemistry, physics and English Communication Arts after they were exposed to the tutorial courseware.

Keywords: CAI, tutorial courseware, Instructional Systems Design (ISD) Model, education

Procedia PDF Downloads 318
370 Ionometallurgy for Recycling Silver in Silicon Solar Panel

Authors: Emmanuel Billy

Abstract:

This work is part of the CABRISS project (an H2020 project), which aims at developing innovative, cost-effective methods for the extraction of materials from the different sources of PV waste: Si-based panels, thin-film panels and water-diluted Si slurries. Aluminum, silicon, indium, and silver in particular will be extracted from these wastes in order to constitute a materials feedstock that can later be used in a closed-loop process. The extraction of metals from silicon solar cells is often an energy-intensive process. It requires either smelting or leaching at elevated temperature, or the use of large quantities of strong acids or bases that require energy to produce. The energy input equates to a significant cost and an associated CO2 footprint, both of which it would be desirable to reduce, so there is a need to develop more energy-efficient and environmentally compatible processes. 'Ionometallurgy' could offer such a set of environmentally benign processes for metallurgy. This work demonstrates that ionic liquids provide one such method, since they can be used to dissolve and recover silver. The overall process combines leaching, recovery and the possibility of re-using the solution in a closed-loop process. This study aims to evaluate and compare different ionic liquids for leaching and recovering silver. An electrochemical analysis is first carried out to define the best system for Ag dissolution; the effects of temperature, concentration and oxidizing agent are evaluated by this approach. Further, a comparative study of the leaching efficiency of the conventional approaches (nitric acid, thiourea) and the ionic liquids (Cu and Al) is conducted. Specific attention has been paid to the selection of the ionic liquids. Electrolytes composed of chelating anions (Cl, Br, I) are used to facilitate the lixiviation and to avoid solubility problems of the metallic species and of the classical additional ligands. This approach reduces the cost of the process and facilitates the re-use of the leaching medium. To define the most suitable ionic liquids, electrochemical experiments have been carried out to evaluate the oxidation potential of the silver included in the crystalline solar cells. Chemical dissolution of the metals from crystalline solar cells has then been performed with the most promising ionic liquids. After the chemical dissolution, electrodeposition has been performed to recover the silver in metallic form.
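As a back-of-the-envelope companion to the electrodeposition step, the sketch below applies Faraday's law to estimate the silver mass recoverable for an assumed current, time and current efficiency; none of these values come from the present work.

```python
# Hedged sketch: Faraday's-law estimate of silver recovered by electrodeposition.
# Current, duration and efficiency are illustrative assumptions.
F = 96485.0       # Faraday constant, C/mol
M_Ag = 107.87     # molar mass of silver, g/mol
z = 1             # electrons transferred per Ag+ ion reduced

current = 0.050   # A, assumed cell current
t = 3600.0        # s, assumed deposition time (1 h)
efficiency = 1.0  # assumed current efficiency

mass = efficiency * current * t * M_Ag / (z * F)   # deposited mass, g
print(f"deposited Ag ~ {mass*1000:.1f} mg")
```

In practice the current efficiency in an ionic-liquid electrolyte would be below unity, so such a figure is an upper bound on the recoverable mass.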

Keywords: electrodeposition, ionometallurgy, leaching, recycling, silver

Procedia PDF Downloads 224
369 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study

Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero

Abstract:

Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (e.g. bacteria or yeasts, as in antibiotics production). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property for the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be discarded due to microbial contamination. Indeed, a robust schedule should change only locally and in small proportions, and the overall performance measure (e.g. makespan, lateness) should change little, if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints; in particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e. the completion times of a culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e. the completion times of a backup culture tasks) and by changing at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
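To give a concrete flavour of the kind of constraint model involved, the sketch below encodes a toy three-task recipe with a precedence chain, a single shared resource and a makespan objective using OR-tools CP-SAT. The tasks and durations are invented, and the lexicographic multi-criteria objective and (a, b) super-solution machinery of the actual study are not reproduced.

```python
# Hedged sketch: a toy recipe scheduled with CP-SAT (precedence + one resource
# + makespan). Task names and durations are illustrative assumptions.
from ortools.sat.python import cp_model

durations = {"culture": 5, "fermentation": 8, "purification": 3}
horizon = sum(durations.values())

model = cp_model.CpModel()
starts, ends, intervals = {}, {}, []
for name, d in durations.items():
    starts[name] = model.NewIntVar(0, horizon, f"start_{name}")
    ends[name] = model.NewIntVar(0, horizon, f"end_{name}")
    intervals.append(model.NewIntervalVar(starts[name], d, ends[name], f"iv_{name}"))

# Recipe ordering: culture -> fermentation -> purification
model.Add(ends["culture"] <= starts["fermentation"])
model.Add(ends["fermentation"] <= starts["purification"])

# All tasks share a single unit-capacity resource
model.AddNoOverlap(intervals)

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, list(ends.values()))
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in durations:
        print(name, solver.Value(starts[name]), "->", solver.Value(ends[name]))
    print("makespan:", solver.Value(makespan))
```

Robustness in the sense described above would be layered on top of such a model, for example by reserving backup culture tasks whose start times can absorb a contaminated batch with limited disruption to the other tasks.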

Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries

Procedia PDF Downloads 592