Search results for: distribution system and optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23287


10567 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection

Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten

Abstract:

Purpose: Applying solder paste to printed circuit boards (PCBs) with stencils has been the method of choice over the past years. A newer method uses a jet printer to deposit tiny droplets of solder paste onto the board through an ejector mechanism. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge, which can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process through immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips were employed to capture images of the printed circuit board under four different illuminations (white, red, green, and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher-quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening.
For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. The overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for a well-segmented solder joint and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
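The evaluation metric described above, the Dice coefficient, together with a three-class threshold split of a corrected grayscale image, can be sketched as follows. This is a minimal illustration only; the threshold values used in any example are hypothetical, not the authors' fitted histogram intersections.

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    total = seg.sum() + ref.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def segment_by_thresholds(image, t_low, t_high):
    """Three-class split of a grayscale image: background below t_low,
    solder joints between the two thresholds, bright reflections above t_high."""
    background = image < t_low
    joints = (image >= t_low) & (image < t_high)
    reflections = image >= t_high
    return background, joints, reflections
```

In practice the two thresholds would come from the intersections of the three fitted probability density functions, and the masks would be cleaned with a morphological opening before scoring.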

Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection

Procedia PDF Downloads 332
10566 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design

Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian

Abstract:

Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations, since it has several adverse effects on human health. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasonic-assisted, ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated ultrasonically for 5 min, the two phases were then separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters on the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using response surface methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The linear range of the calibration curve for the FAAS determination of lead was 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg.mL-1, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The lead levels in pomegranate, zucchini, and lettuce were calculated as 2.88 μg.g-1, 1.54 μg.g-1, and 2.18 μg.g-1, respectively.
Therefore, this method has been successfully applied for the analysis of the lead content in different food samples by FAAS.
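The central composite design behind the RSM optimization can be illustrated in coded units. The sketch below generates the points of a generic rotatable CCD for three factors (here: ionic liquid volume, sonication time, and pH); the number of center replicates is an assumed default, not a value taken from the paper.

```python
from itertools import product

def central_composite_design(k, n_center=6):
    """Coded points of a rotatable central composite design with k factors:
    2^k factorial corners, 2k axial (star) points at +/- alpha, and
    n_center replicated center points."""
    alpha = (2 ** k) ** 0.25          # rotatability criterion: alpha = (2^k)^(1/4)
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            point = [0.0] * k
            point[i] = sign
            axial.append(point)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center
```

Each coded point is then decoded to real factor levels (e.g., -1/+1 mapped to the low/high volumes, times, and pH values) before running the experiments and fitting the response surface.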

Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry

Procedia PDF Downloads 278
10565 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors

Authors: Saeed Vahedikamal, Ian Hepburn

Abstract:

Microwave kinetic inductance detectors (MKIDs) are considered among the most promising photon detectors of the future in many astronomical applications, such as exoplanet detection. The advantages of MKIDs stem from their single-photon sensitivity (ranging from the UV to the optical and near infrared), photon energy resolution, and high temporal capability (~microseconds). There has been substantial progress in the development of these detectors, and megapixel MKID arrays are now possible. The unique capability of recording an incident photon and its energy (or wavelength) while also registering its time of arrival to within a microsecond enables an array of MKIDs to produce a four-dimensional data block comprising x and y spatial axes, a per-pixel spectral axis z, and a per-pixel temporal axis t. This offers the possibility that the spectrum and brightness variation of any detected piece of space debris as a function of time might offer a unique identifier or fingerprint. Such a fingerprint signal from any object identified in multiple detections by different observers has the potential to determine the orbital features of the object and be used for tracking. Modelling performed so far shows that with a 20 cm telescope located at an astronomical observatory (e.g., La Palma, Canary Islands) we could detect sub-centimetre objects at GEO. By considering a Lambertian sphere with a 10% reflectivity (the albedo of the Moon), we anticipate the following for a GEO object: a 10 cm object imaged in a 1-second image capture; a 1.2 cm object for a 70-second image integration; or a 0.65 cm object for a 4-minute image integration. We present details of our modelling and the potential instrument for a dedicated GEO surveillance system.
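The quoted integration times are consistent with a simple inverse-square scaling of reflected photon flux with object diameter (flux scales with cross-sectional area, so the time to collect a fixed photon count scales as 1/d^2). The sketch below is an illustrative model only; the reference point of a 10 cm object in 1 s is taken from the abstract.

```python
def required_integration_time(diameter_cm, ref_diameter_cm=10.0, ref_time_s=1.0):
    """Integration time needed to collect the same photon count from a smaller
    Lambertian sphere, assuming flux proportional to cross-section (~d^2)."""
    return ref_time_s * (ref_diameter_cm / diameter_cm) ** 2
```

With this scaling, a 1.2 cm object needs roughly 70 s and a 0.65 cm object roughly 4 minutes, matching the figures quoted above.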

Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID

Procedia PDF Downloads 90
10564 Estimation of the Dynamic Fragility of Padre Jacinto Zamora Bridge Due to Traffic Loads

Authors: Kimuel Suyat, Francis Aldrine Uy, John Paul Carreon

Abstract:

The Philippines, composed of many islands, is connected by approximately 8,030 bridges. Continuous evaluation of the structural condition of these bridges is needed to safeguard the safety of the general public. With most bridges reaching their design life, retrofitting and replacement may be needed. Concerned government agencies allocate huge costs for the periodic monitoring and maintenance of these structures. The rising volume of traffic and the aging of these infrastructures are challenging structural engineers to develop structural health monitoring techniques. Numerous techniques have already been proposed, and some are now being employed in other countries; vibration analysis is one of them. The natural frequency and vibration of a bridge are design criteria for ensuring the stability, safety, and economy of the structure. The natural frequency must not be so low as to cause discomfort, nor so high that the structure is overly stiff, making it both costly and heavy; it is well known that the stiffer a member is, the more load it attracts. The frequency must also not match the vibration caused by the traffic loads; if this happens, resonance occurs. Vibration that matches a system's frequency will generate excitation, and when this exceeds the member's limit, structural failure will happen. This study presents a method for calculating dynamic fragility through the use of a vibration-based monitoring system. Dynamic fragility is the probability that a structural system exceeds a limit state when subjected to dynamic loads. The bridge is modeled in SAP2000 based on the available construction drawings provided by the Department of Public Works and Highways. The model was verified and adjusted based on the actual condition of the bridge, and the bridge design specifications were also checked using nondestructive tests. The approach used in this method properly accounts for the uncertainty of observed values and code-based structural assumptions.
The vibration response of the structure due to actual loads is monitored using sensors installed on the bridge. From these measured dynamic characteristics of the system, threshold criteria can be established and fragility curves can be estimated. This study, conducted in relation to the research project between the Department of Science and Technology, Mapúa Institute of Technology, and the Department of Public Works and Highways, also known as the Mapúa-DOST Smart Bridge Project, deploys structural health monitoring sensors at Zamora Bridge. The bridge was selected in coordination with the Department of Public Works and Highways, and the structural plans for the bridge are readily available.
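Fragility, the probability of exceeding a limit state at a given demand level, is commonly expressed with a lognormal fragility curve. The sketch below assumes that standard form; the median capacity and dispersion in any usage are illustrative values, not the bridge's estimated parameters.

```python
import math

def fragility(demand, median_capacity, beta):
    """Lognormal fragility curve: probability that the response exceeds the
    limit state, P = Phi(ln(demand / median_capacity) / beta), where Phi is
    the standard normal CDF and beta is the lognormal dispersion."""
    z = math.log(demand / median_capacity) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Evaluating this over a range of demand levels (e.g., measured vibration amplitudes) traces out the fragility curve from which threshold criteria can be read off.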

Keywords: structural health monitoring, dynamic characteristic, threshold criteria, traffic loads

Procedia PDF Downloads 263
10563 Analysis of Enhanced Built-up and Bare Land Index in the Urban Area of Yangon, Myanmar

Authors: Su Nandar Tin, Wutjanun Muttitanon

Abstract:

The availability of free global and historical satellite imagery provides a valuable opportunity for mapping and monitoring built-up areas year by year, consistently and effectively. Land distribution guidelines and the identification of changes are important in preparing and reviewing changes in the ground overview data. This study utilizes thirty years of Landsat imagery to acquire significant land cover data that are extremely valuable for urban planning. The paper focuses on extracting the built-up area of the city development zone from LANDSAT 5, 7, and 8 and Sentinel-2A satellite images from USGS at five-year intervals. The purpose is to analyse the year-by-year changes of the urban built-up area and to assess the accuracy of mapping built-up and bare land areas, studying the trend of urban built-up changes over the period from 1990 to 2020. GIS tools such as the raster calculator and built-up area modelling are used in this study, and several indices, including the enhanced built-up and bareness index (EBBI), normalized difference built-up index (NDBI), urban index (UI), built-up index (BUI), and normalized difference bareness index (NDBAI), are calculated to obtain a high-accuracy urban built-up area. The study thereby points out a viable approach to automatically mapping enhanced built-up and bare land changes (EBBI) with simple indices. The EBBI output of Sentinel-2A achieves 48.4% accuracy, compared with 15.6% for the Landsat-based index in 1990, while the urban expansion area of the study site increased from 43.6% in 1990 to 92.5% in 2020 over the last thirty years.
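The two headline indices can be computed per pixel from the relevant bands. The sketch below assumes the commonly published formulas (NDBI from the SWIR and NIR bands; EBBI in the As-syakur et al. form from SWIR, NIR, and thermal bands); the paper's exact band choices and scaling are not restated here, and the sample band values are illustrative.

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized difference built-up index: (SWIR - NIR) / (SWIR + NIR)."""
    return (swir - nir) / (swir + nir)

def ebbi(swir, nir, tir):
    """Enhanced built-up and bareness index (As-syakur et al. form):
    (SWIR - NIR) / (10 * sqrt(SWIR + TIR))."""
    return (swir - nir) / (10.0 * np.sqrt(swir + tir))
```

In a GIS workflow these would be applied band-wise to whole rasters (e.g., via a raster calculator), after which thresholding the index separates built-up/bare pixels from vegetation and water.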

Keywords: built-up area, EBBI, NDBI, NDBAI, urban index

Procedia PDF Downloads 162
10562 The Effects of Cost-Sharing Contracts on the Costs and Operations of E-Commerce Supply Chains

Authors: Sahani Rathnasiri, Pritee Ray, Sardar M. N. Islam, Carlos A. Vega-Mejia

Abstract:

This study develops a cooperative game theory-based cost-sharing contract model for a business-to-consumer (B2C) e-commerce supply chain to minimize the overall supply chain costs and the individual costs within an information asymmetry scenario. The objective of this study is to address the issues of strategic interactions among the key players of the e-commerce supply chain operation, which impede optimal operational outcomes. Game theory has been applied in the field of supply chain management to resolve strategic decision-making issues; however, most studies are limited to two echelons of the supply chain. Multi-echelon supply chain optimization based on game-theoretic models is less explored in the previous literature. This study adopts a cooperative game model to focus on the common payoff of operations and addresses the issues of information asymmetry and coordination in a three-echelon e-commerce supply chain. The cost-sharing contract model integrates operational features such as production, inventory management, and distribution with the contract-related constraints. The outcomes of the model highlight the importance of maintaining lower operational costs by all players to obtain benefits from the cost-sharing contract. Further, the cost-sharing contract ensures true cost revelation and hence eliminates the information asymmetry issues among the players. Comparing the results of the contract model with the decentralized e-commerce supply chain operation further emphasizes that the cost-sharing contract derives Pareto-improved outcomes and minimizes the costs of the overall e-commerce supply chain operation.
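One standard cooperative allocation of a joint cost is the Shapley value, which averages each player's marginal cost over all orders in which the coalition could form. The sketch below is a generic illustration for a small three-player game; the player names and coalition costs in any usage are hypothetical, and the paper's own contract model is considerably richer than this.

```python
from itertools import permutations

def shapley_costs(players, coalition_cost):
    """Shapley cost allocation: average each player's marginal cost over all
    join orders. coalition_cost maps a frozenset of players to the total cost
    of that coalition (the empty frozenset must map to 0)."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = frozenset()
        for p in order:
            shares[p] += coalition_cost[seen | {p}] - coalition_cost[seen]
            seen = seen | {p}
    return {p: s / len(orders) for p, s in shares.items()}
```

The allocation is efficient by construction (the shares always sum to the grand-coalition cost), which is one reason Shapley-type rules appear in cost-sharing contract analyses.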

Keywords: cooperative game theory, cost-sharing contract, e-commerce supply chain, information asymmetry

Procedia PDF Downloads 118
10561 Mutational and Evolutionary Analysis of Interleukin-2 Gene in Four Pakistani Goat Breeds

Authors: Tanveer Hussain, Misbah Hussain, Masroor Ellahi Babar, Muhammad Traiq Pervez, Fiaz Hussain, Sana Zahoor, Rashid Saif

Abstract:

Interleukin 2 (IL-2) is a cytokine produced by activated T cells that plays an important role in the immune response against antigens. It acts in both an autocrine and a paracrine manner. It can stimulate B cells and various phagocytic cells such as monocytes, lymphokine-activated killer cells, and natural killer cells. Acting in an autocrine fashion, the IL-2 protein plays a crucial role in the proliferation of T cells. IL-2 triggers the release of pro- and anti-inflammatory cytokines by activating several pathways. In the present study, exon 1 of the IL-2 gene of four local Pakistani breeds (Dera Din Panah, Beetal, Nachi, and Kamori) from two provinces was amplified using reported ovine IL-2 primers, yielding a PCR product of 501 bp. All samples were sequenced to identify polymorphisms in the amplified region of the IL-2 gene. Analysis of the sequencing data identified one novel nucleotide substitution (T→A) in the amplified non-coding region of the IL-2 gene. Comparison of the IL-2 gene sequence of all four breeds with other goat breeds showed high sequence similarity, while phylogenetic analysis of our local breeds with other mammals showed that IL-2 is a variable gene which has undergone many substitutions. This high substitution rate may be due to changed selective pressure, and such rapid changes can also lead to changes in the function of the immune system. This pioneering study of Pakistani goat breeds urges further studies on the immune system of each targeted breed in order to fully understand the functional role of IL-2 in goat immunity.
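The substitution-detection step, locating mismatches such as the reported T→A change between a reference and a sample sequence, can be sketched as follows. This is a minimal illustration assuming a gap-free alignment of equal-length sequences; real pipelines align first and handle indels.

```python
def find_substitutions(reference, sample):
    """Return (position, reference_base, sample_base) for every mismatch
    between two aligned, equal-length, gap-free sequences."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned to equal length")
    return [(i, ref_base, alt_base)
            for i, (ref_base, alt_base) in enumerate(zip(reference, sample))
            if ref_base != alt_base]
```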

Keywords: interleukin 2, mutational analysis, phylogeny, goat breeds, Pakistan

Procedia PDF Downloads 601
10560 Phonology and Syntax of Article Incorporation in Mauritian Creole: Evidence from Bantou Languages

Authors: Emmanuel Nikiema

Abstract:

This paper examines article incorporation in Mauritian Creole, a French Lexifier Creole which exhibits three forms of article incorporation, as illustrated in (1-3). While various analyses of article incorporation have been proposed in the literature, fewer studies have explored the motivation of this widespread phenomenon in Mauritian Creole (MC) as opposed to other French Lexifier Creoles spoken in the Caribbean. For example, Mauritian Creole exhibits 4 times more CV incorporation than Haitian Creole, and 40 times more than Reunion Creole. (1) Consonantal type (C): loraz ‘thunder storm’, lete ‘summer’, zwazo ‘bird’, nide ‘idea’. (2) Syllabic type (CV): lapo ‘skin’, liku ‘neck’, ledo ‘back’, leker ‘heart’, diber ‘butter’. (3) Bi-consonantal type (CVC): delo ‘water’, dizef ‘egg’, lizye ‘eye’, dilwil ‘oil’. The goal of this study is twofold: 1) to uncover the rules governing the three types of article incorporation in MC, and 2) to account for its remarkable occurrence in MC as opposed to its quasi-absence in Reunion Creole. We have collected a corpus of over 700 cases and organized it into three categories (C, CV, and CVC). For example, there are 471 examples of CV incorporation in MC against 112 in Haitian Creole and only 12 in Reunion Creole. Two questions can be raised: 1) what are the motivation and distribution of the three types of incorporation in MC, and 2) how can one account for the high volume of incorporation in MC as opposed to its quasi-absence in Reunion Creole? We suggest that article incorporation in MC is related to the structure of nouns in Bantou languages. While previous authors have largely used population settlement data in the colonies during the Creole formation period to justify their analyses, we propose an account based on the syntactic structure of Bantou nouns.
This analysis will shed light on the contribution of African languages to the formation of MC, and on why MC exhibits more article incorporation than any other French Lexifier Creole.

Keywords: article incorporation, creole languages, description, phonology

Procedia PDF Downloads 106
10559 Modeling Atmospheric Correction for Global Navigation Satellite System Signal to Improve Urban Cadastre 3D Positional Accuracy Case of: TANA and ADIS IGS Stations

Authors: Asmamaw Yehun

Abstract:

“TANA” is the name of an International GNSS Service (IGS) Global Positioning System (GPS) station located at the Institute of Land Administration of Bahir Dar University. The station is named after Lake Tana, one of the largest lakes in Africa. The Institute of Land Administration (ILA) is part of Bahir Dar University, located in Bahir Dar, the capital of the Amhara National Regional State; the institute is the first of its kind in East Africa. The station was installed through the cooperation of ILA and funding support from the Swedish International Development Agency (SIDA). A Continuously Operating Reference Station (CORS) network provides global navigation satellite system data that support three-dimensional positioning, meteorology, space weather, and geophysical applications throughout the globe. TANA has operated as a CORS since 2013; CORS sites are independently owned and operated by governments, research and education facilities, and others. The data collected by the reference station can be downloaded via the Internet for post-processing by interested parties who carry out GNSS measurements and want to achieve higher accuracy. We made a first observation on the TANA monitor station on May 29, 2013. We used Leica 1200 receivers and AX1202GG antennas and made observations from 11:30 until 15:20, for about 3 h 50 min. The data were processed with the automatic post-processing service CSRS-PPP of Natural Resources Canada (NRCan). Post-processing was done on June 27, 2013, 30 days after observation, so the precise ephemeris was used. We found latitude (ITRF08): 11 34 08.6573 (dms) / 0.008 (m), longitude (ITRF08): 37 19 44.7811 (dms) / 0.018 (m), and ellipsoidal height (ITRF08): 1850.958 (m) / 0.037 (m). We compared this result with GAMIT/GLOBK-processed data, and the agreement was very close and accurate. TANA has been the second IGS station in Ethiopia since 2015. It provides data for civilian users, researchers, and governmental and non-governmental users.
The TANA station is equipped with a very advanced choke-ring antenna and a Leica GR25 receiver, and the site offers very good satellite visibility. In order to test the effect of the hydrostatic and wet zenith delays on positional data quality, we used GAMIT/GLOBK and found that TANA is the most accurate IGS station in East Africa. Owing to the lower tropospheric zenith and ionospheric delays, the TANA and ADIS IGS stations achieve 3D positional accuracies of 2 m and 1.9 m, respectively.
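The hydrostatic part of the tropospheric zenith delay discussed above is commonly modelled with the Saastamoinen formula; a minimal sketch under that assumption follows. The pressure, latitude, and height values in any usage are illustrative, not the station's measured meteorology.

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay (metres) from surface pressure
    (hPa), geodetic latitude (degrees), and ellipsoidal height (metres):
    ZHD = 0.0022768 * P / (1 - 0.00266*cos(2*lat) - 0.00028*H_km)."""
    f = (1.0
         - 0.00266 * math.cos(2.0 * math.radians(lat_deg))
         - 0.00028 * (height_m / 1000.0))
    return 0.0022768 * pressure_hpa / f
```

At sea-level standard pressure the delay is about 2.3 m; a high-altitude site such as Bahir Dar (~1850 m, hence lower surface pressure) has a correspondingly smaller hydrostatic delay, consistent with the argument made above.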

Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour

Procedia PDF Downloads 66
10558 Effect of High Intensity Ultrasonic Treatment on the Microstructure, Corrosion and Mechanical Behavior of AC4C Aluminium Alloy

Authors: A.Farrag Farrag, A. M. El-Aziz Abdel Aziz, W. Khlifa Khlifa

Abstract:

Ultrasonic treatment is a promising process in the engineering field nowadays due to its high efficiency and low cost. It enhances mechanical properties, corrosion resistance, and the homogeneity of the microstructure. In this study, the effect of ultrasonic treatment and several casting conditions on the microstructure, hardness, and corrosion behavior of AC4C aluminum alloy was examined. Various ultrasonic treatments of the AC4C alloy were carried out to prepare billets for the thixocasting process. Treatment started at about 630 °C, and the melt was cooled under the ultrasonic field. Treatment time was about 90 s. A 600 W ultrasonic system operating at 19.5 kHz with an intensity of 170 W/cm2 was used. Billets were reheated to the semi-solid state, held (soaked) for 5 minutes at 582 °C using a high-frequency induction system, and then thixocast using a die-casting machine. Microstructures of the thixocast parts were studied using optical and SEM microscopes. In addition, two samples were conventionally cast and poured at 634 °C and 750 °C. The microstructure showed globular, non-dendritic grains for AC4C with the application of UST at 630-582 °C; less dendritic grains when the sample was conventionally cast without UST and poured at 624 °C; and a fully dendritic microstructure when the sample was cast and poured at 750 °C without UST. Ultrasonic treatment during solidification thus has a positive influence on the microstructure, producing the finest and most globular grains, and is therefore expected to improve the mechanical properties of the alloy. Higher corrosion resistance and hardness values were recorded for the ultrasound-treated sample in comparison to the conventionally cast one.

Keywords: ultrasonic treatment, aluminum alloys, corrosion behaviour, mechanical behaviour, microstructure

Procedia PDF Downloads 350
10557 Electrocatalysts for Lithium-Sulfur Energy Storage Systems

Authors: Mirko Ante, Şeniz Sörgel, Andreas Bund

Abstract:

Li-S (lithium-sulfur) battery systems provide a very high specific gravimetric energy (2,600 Wh/kg) and volumetric energy density (2,800 Wh/L). Hence, Li-S batteries are one of the key technologies for both the upcoming electromobility and stationary applications. Furthermore, the Li-S battery system is potentially cheap and environmentally benign. However, its technical implementation suffers from limited cycling stability, low charge and discharge rates, and an incomplete understanding of the complex polysulfide reaction mechanism. The aim of this work is to develop an effective electrocatalyst for the polysulfide reactions so that the electrode kinetics of the sulfur half-cell are improved. Accordingly, the overvoltage will be decreased and the efficiency of the cell increased. An enlarged electroactive surface additionally improves the charge and discharge rates. To reach this goal, functionalized electrocatalytic coatings are investigated to accelerate the kinetics of the polysulfide reactions. In order to identify a suitable electrocatalyst, apparent exchange current densities of a variety of materials (Ni, Co, Pt, Cr, Al, Cu, ITO, stainless steel) have been evaluated in a polysulfide-containing electrolyte by potentiodynamic measurements and a Butler-Volmer fit including diffusion limitation. The samples have been examined by scanning electron microscopy (SEM) after the potentiodynamic measurements. So far, our work shows that cobalt is a promising material with good electrocatalytic properties for the polysulfide reactions and good chemical stability in the system. Furthermore, an electrodeposit from a modified Watts nickel electrolyte with a sulfur source seems to provide an autocatalytic effect, but the electrocatalytic behavior decreases after several cycles of the current-potential curve.
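The Butler-Volmer relation used to extract apparent exchange current densities can be sketched in its basic form, without the diffusion-limitation term the authors add to their fit. All parameter values here are generic assumptions, not the paper's fitted results.

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def butler_volmer_current(i0, overpotential, alpha_a=0.5, alpha_c=0.5,
                          n=1, temperature_k=298.15):
    """Butler-Volmer current density without mass-transport limitation:
    i = i0 * [exp(alpha_a*n*F*eta/RT) - exp(-alpha_c*n*F*eta/RT)]."""
    f = n * F / (R * temperature_k)
    return i0 * (math.exp(alpha_a * f * overpotential)
                 - math.exp(-alpha_c * f * overpotential))
```

In a fit, i0 (the apparent exchange current density) and the transfer coefficients are the free parameters adjusted to match the measured current-potential curve; a larger i0 indicates faster polysulfide kinetics on that electrode material.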

Keywords: electrocatalyst, energy storage, lithium sulfur battery, sulfur electrode materials

Procedia PDF Downloads 363
10556 Improved Performance of Mn Substituted Ceria Nanospheres for Water Gas Shift Reaction: Influence of Preparation Conditions

Authors: Bhairi Lakshminarayana, Surajit Sarker, Ch. Subrahmanyam

Abstract:

The present study reports the development of noble-metal-free nanocatalysts for low-temperature CO oxidation and the water gas shift reaction. Mn-substituted CeO2 solid solution catalysts were synthesized by co-precipitation, combustion, and hydrothermal methods. The formation of the solid solution was confirmed by XRD with Rietveld refinement, and the percentage of carbon and nitrogen doping was determined with a CHNS analyzer. Raman spectroscopy confirmed the oxygen vacancies. The surface area, pore volume, and pore size distribution were confirmed by N2 physisorption analysis, whereas UV-visible diffuse reflectance spectroscopy and XPS data confirmed the oxidation state of the Mn ion. The particle size and spherical morphology of the material were confirmed using FESEM and HRTEM analysis. Ce0.8Mn0.2O2-δ was calcined at 400 °C, 600 °C, and 800 °C; Raman spectroscopy confirmed that the catalyst calcined at 400 °C has the best redox properties. The activity of the designed catalysts for CO oxidation (0.2 vol%) was evaluated at a GHSV of 21,000 h-1, and co-precipitation produced the most active catalyst towards CO oxidation and the water gas shift reaction, owing to its high surface area, improved reducibility, oxygen mobility, and highest quantity of surface oxygen species. The activation energy of low-temperature CO oxidation on Ce0.8Mn0.2O2-δ (combustion) was 5.5 kcal·mol-1. The designed catalysts were tested for the water gas shift reaction. The present study demonstrates that Mn-substituted ceria prepared by co-precipitation and calcined at 400 °C promises a green, sustainable approach to energy production.
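An activation energy of this kind feeds directly into the Arrhenius law, k = A exp(-Ea/RT), to compare rates at two temperatures. The sketch below reads the reported value as 5.5 kcal per mole; the temperatures in any usage are illustrative, not the study's operating points.

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol K)

def arrhenius_rate_ratio(ea_kcal_per_mol, t1_k, t2_k):
    """Ratio k(T2)/k(T1) from the Arrhenius law k = A * exp(-Ea / (R*T));
    the pre-exponential factor A cancels in the ratio."""
    return math.exp(-ea_kcal_per_mol / R_KCAL * (1.0 / t2_k - 1.0 / t1_k))
```

A low activation energy such as 5.5 kcal/mol means the rate rises only modestly with temperature (a factor of roughly 3-4 between 300 K and 350 K), consistent with good low-temperature activity.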

Keywords: Ce0.8Mn0.2O2-δ, CO oxidation, physicochemical characterization, water gas shift reaction (WGS)

Procedia PDF Downloads 231
10555 The Role of Heat Pumps in the Decarbonization of European Regions

Authors: Domenico M. Mongelli, Michele De Carli, Laura Carnieletto, Filippo Busato

Abstract:

Europe's dependence on imported fossil fuels has been particularly highlighted by the Russian invasion of Ukraine. The goal of this work is to limit this dependency through a massive replacement of fossil fuel boilers with heat pumps for building heating. With the aim of diversifying energy sources and evaluating the potential use of heat pump technologies in residential buildings with a view to decarbonization, the achievable reduction in fossil fuel consumption was investigated for all regions of Europe. First, a general overview of building energy consumption in Europe was compiled, broken down by end use (heating, cooling, DHW, etc.) and by energy source (natural gas, oil, biomass, etc.). This analysis provides a European-level baseline of current and future consumption, with particular attention to the expected growth of cooling demand. A database was then created on the distribution of residential energy consumption for air conditioning among the various energy carriers (electricity, waste heat, gas, solid fossil fuels, liquid fossil fuels, and renewable sources) for each region in Europe. Subsequently, the energy profiles of various European cities representative of the different climates were analyzed in order to evaluate, for each European climatic region, the share of energy demand that heat pumps can cover when replacing natural gas and solid and liquid fossil fuels for the air conditioning of buildings; the environmental and economic assessments of this energy transition were also carried out. This work aims to make an innovative contribution to evaluating the potential of heat pump technology for decarbonizing the air conditioning of buildings across all climates of the different European regions.
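The boiler-to-heat-pump comparison at the core of this argument reduces to simple efficiency arithmetic. The sketch below uses assumed round numbers for boiler efficiency, heat pump COP, and grid conversion efficiency, not the study's figures, which vary by region and climate.

```python
def heating_primary_energy(heat_demand_kwh, boiler_eff=0.92, cop=3.5,
                           grid_eff=0.45):
    """Primary energy to deliver a given heat demand via a gas boiler versus
    a heat pump fed by grid electricity. Returns (boiler_primary_kwh,
    heat_pump_primary_kwh)."""
    boiler_primary = heat_demand_kwh / boiler_eff
    heat_pump_primary = heat_demand_kwh / cop / grid_eff
    return boiler_primary, heat_pump_primary
```

Even with a fairly inefficient fossil-heavy grid (45% assumed here), a COP of 3.5 makes the heat pump the lower-primary-energy option; as the grid decarbonizes, the advantage grows further.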

Keywords: heat pumps, heating, decarbonization, energy policies

Procedia PDF Downloads 121
10554 Effect of Green Roofs to Prevent the Dissipation of Energy in Mountainous Areas

Authors: Mina Ganji Morad, Maziar Azadisoleimanieh, Sina Ganji Morad

Abstract:

A green roof is covered with living plants and has many positive impacts on the regional climate as well as on indoor conditions. By blocking solar radiation, a green roof system helps cool the space below. The cooling is achieved by reducing thermal fluctuations on the exterior of the roof and by increasing the roof's heat capacity, which keeps the space under the roof cool in summer and warmer in winter. A roof garden is one of the recommended ways to reduce energy consumption in large cities. At the scale of the city, green roofs perform effective functions, such as beautifying the urban view, decontaminating the urban landscape, reducing mental stress, and moderating the exchange of energy and heat between outside and inside spaces. This article is based on a review of 20 articles and 10 books, together with valid survey results, on the positive effects of green roofs in preventing energy waste in buildings. Three roof types, a conventional roof, a typical green roof, and a green roof with specific construction details (glass layers) and the use of resistant plants and shrubs, have been analyzed and their heat transfer compared. The results of these studies showed that one of the best green roof systems for a mountainous climate is the tree-and-shrub system, which, in addition to being resistant to the climatic conditions of mountainous regions, retains the other advantages of a green roof. Because of the severity of climate change in mountainous areas, it is essential to prevent the waste of building heating and cooling energy; proper climatic design can greatly help to reduce energy consumption.
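The heat-transfer comparison between roof build-ups can be sketched with a steady-state series-resistance model: the extra substrate and vegetation layers of a green roof add thermal resistance and lower the conducted heat flux. Layer thicknesses and conductivities in any usage are illustrative, not the reviewed studies' values.

```python
def roof_heat_flux(layers, delta_t, r_si=0.10, r_se=0.04):
    """Steady-state conductive heat flux (W/m^2) through a roof build-up.
    layers: list of (thickness_m, conductivity_W_per_mK) tuples; r_si and
    r_se are typical interior/exterior surface film resistances (m^2 K/W).
    Q = U * delta_t, with U = 1 / (r_si + r_se + sum(t_i / k_i))."""
    r_total = r_si + r_se + sum(t / k for t, k in layers)
    u_value = 1.0 / r_total
    return u_value * delta_t
```

Comparing, say, a bare concrete slab to the same slab topped with a growing-medium layer shows the flux reduction directly; this conduction model omits the additional evapotranspiration and shading benefits, so it understates the green roof's advantage.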

Keywords: green roof, heat transfer, reducing energy consumption, mountainous areas, sustainable architecture

Procedia PDF Downloads 392
10553 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Floods and droughts are the two extremes of hydrology that most affect human life. Floods are natural disasters that cause millions of rupees' worth of damage each year in India and across the world, destroying both life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Moreover, the increase in water demand due to population, industrial, and agricultural growth reminds us that water, though a renewable resource, cannot be taken for granted. Its use must be optimized according to circumstances and conditions, and it must be harnessed, which can be done through the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, the flood magnitude and its impact must be predicted. Hydraulic structures play a key role in harnessing flood water, which in turn allows the safe and maximum use of the available water. Hydraulic structures are often constructed at ungauged sites. Floods at such sites can be estimated in two ways: by the generation of unit hydrographs or by flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, such as the index flood method, the Natural Environment Research Council (NERC) methods, and the multiple regression method; however, no single method can be considered universal for every situation and location. The Narmada basin is located in Central India and is drained by many tributaries, most of which are ungauged, so it is very difficult to estimate floods on these tributaries and in the main river. In the present study, Artificial Neural Networks (ANN) and the multiple regression method are used to determine the regional flood frequency relationships.
The annual peak flood data of 20 gauging sites in the Narmada basin are used to derive the regional flood relationships. The homogeneity of the considered sites is assessed using the index flood method. The flood relationships obtained by the two methods are compared with each other, and ANN is found to be more reliable than multiple regression for the present study area.
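The index flood pooling idea behind the homogeneity test can be sketched as follows; the peak-flow records and site names are hypothetical, not the Narmada data:

```python
def index_flood_growth_curve(site_peaks):
    """site_peaks: dict of site name -> list of annual peak flows.
    Returns each site's index flood (mean annual flood) and the
    pooled, dimensionless growth factors from all sites, sorted."""
    index = {s: sum(p) / len(p) for s, p in site_peaks.items()}
    pooled = sorted(q / index[s] for s, p in site_peaks.items() for q in p)
    return index, pooled

# Hypothetical records for two gauged sites (flows in m^3/s).
peaks = {
    "site_A": [120.0, 150.0, 90.0, 180.0],
    "site_B": [40.0, 55.0, 35.0, 70.0],
}
index, growth = index_flood_growth_curve(peaks)
# index["site_A"] is 135.0; a regional quantile at an ungauged site is
# its estimated index flood times the pooled growth factor for the
# target return period.
```

Dividing each record by its site's mean annual flood puts all sites on a common dimensionless scale, which is what makes pooling (and a homogeneity check on the normalized curves) possible.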

Keywords: artificial neural network, index flood method, multi layer perceptrons, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 415
10552 Development of Sustainability Indicators for Marine Ecosystem Management: Initial Research Results in Vietnam

Authors: Tran Dinh Lan, Do Thi Thu Huong

Abstract:

Among the 17 goals of the United Nations 2030 Agenda for Sustainable Development, targets SDG 14.2 and SDG 14.4 under SDG 14 directly address the sustainable management, exploitation, and use of marine ecosystems. Achieving these targets requires quantifying the level of sustainable use of marine ecosystems, a question that has received attention for more than two decades through quantitative approaches based on the construction and analysis of indicators and indices. Using these methods, a number of marine ecosystems in Vietnam have over the past two decades been quantitatively evaluated for sustainable use in support of integrated coastal and marine management. Thirty indicators of the sustainable use of marine ecosystems in the northeast of Vietnam, together with composite indices, have been developed to assess mangrove, coral, and beach ecosystems. The assessment shows the following results. The mangrove ecosystem declined from sustainable to unsustainable use over the period 1989-2007. The coral ecosystem in 2003 was at a sensitive point between sustainable and unsustainable use. The beach ecosystem was evaluated at ten selected beaches over the period 2013-2018, showing that nine beaches were at a sustainable level and one at an unsustainable level. The Thua Thien-Hue coastal lagoon ecosystem, assessed with 21 indicators of environmental vulnerability in 2014, showed less sustainability. The marine ecosystems around the offshore islands of Bach Long Vi, Con Co, and Tho Chu were used to test the assessment of sustainable use by an index of total economic value; the results show that these ecosystems are being used sustainably but are also at risk of falling to an unsustainable level (Tho Chu). However, using an environmental vulnerability index or an economic value index alone reflects only part of the system's function or value and does not fully capture the sustainability of the system.

Keywords: index, indicators, sustainability evaluation, Vietnam marine ecosystems

Procedia PDF Downloads 102
10551 Validation of an Acuity Measurement Tool for Maternity Services

Authors: Cherrie Lowe

Abstract:

The TrendCare Patient Dependency System is currently used by a large number of maternity services across Australia, New Zealand, and Singapore. In 2012, 2013, and 2014, validation studies were initiated in all three countries to validate the acuity tools used for women in labour and for postnatal mothers and babies. This paper presents the findings of those studies. Aim: The aims of this study were to identify whether the care hours provided by the TrendCare acuity system accurately reflect the care required by women and babies, and to obtain evidence of any changes required to acuity indicators and/or category timings to ensure the TrendCare acuity system remains reliable and valid across a range of maternity care models in the three countries. Method: A non-experimental action research methodology was used across four District Health Boards in New Zealand, two large public Australian maternity services, and a large tertiary maternity service in Singapore. Standardised data collection forms and timing devices were used to record midwife contact times with the women and babies included in the study. Rejection processes excluded samples where care was not completed or was rationed. The variances between the actual timed midwife/mother/baby contact and the TrendCare acuity times were identified and investigated. Results: 87.5% (18) of the TrendCare acuity category timings matched the actual timings recorded for midwifery care; 12.5% (3) of the TrendCare night duty categories provided fewer minutes of care than the actual timings. 100% of the labour ward TrendCare categories matched the actual timings for midwifery care. The actual times recorded for assisting New Zealand independent midwives in the labour ward deviated significantly from previous studies, demonstrating the need for additional time allocations in TrendCare.
Conclusion: The results demonstrate the importance of regularly validating the TrendCare category timings against the care hours required, as changes to models of care and length of stay in maternity units have increased midwifery workloads on the night shift. The level of assistance provided by core labour ward staff to independent midwives has increased substantially. Outcomes: As a consequence of this study, changes were made to the night duty TrendCare maternity categories, additional acuity indicators were developed, and the times for assisting independent midwives were increased. The updated TrendCare version was delivered to maternity services in 2014.

Keywords: maternity, acuity, research, nursing workloads

Procedia PDF Downloads 375
10550 Structural Model on Organizational Climate, Leadership Behavior and Organizational Commitment: Work Engagement of Private Secondary School Teachers in Davao City

Authors: Genevaive Melendres

Abstract:

School administrators face the reality of teachers losing their engagement, or of schools losing their teachers. This study was therefore conducted to identify the structural model that best predicts the work engagement of private secondary school teachers in Davao City. Ninety-three teachers from four sectarian schools and 56 teachers from four non-sectarian schools completed four survey instruments: the Organizational Climate Questionnaire, the Leader Behavior Descriptive Questionnaire, the Organizational Commitment Scales, and the Utrecht Work Engagement Scale. Data were analyzed using frequency distribution, mean, standard deviation, the t-test for independent samples, Pearson's r, stepwise multiple regression analysis, and structural equation modeling. Results show that the schools have a high level on the organizational climate dimensions; leaders often display both work-oriented and people-oriented behavior; and the teachers have high normative commitment and are very often engaged in their work. Teachers from non-sectarian schools have higher organizational commitment than those from sectarian schools. Organizational climate and leadership behavior are positively related to and predict work engagement, whereas commitment showed no relationship. This study underscores the relative effects of the three variables on the work engagement of teachers. After testing a network of relationships and evaluating several models, a best-fitting model was found linking leadership behavior and work engagement. These findings suggest that principals should pay attention to and consistently evaluate their own behavior, for it best predicts the work engagement of their teachers. The study provides value to administrators who make decisions and create the conditions in which teachers derive fulfillment.

Keywords: leadership behavior, organizational climate, organizational commitment, private secondary school teachers, structural model on work engagement

Procedia PDF Downloads 257
10549 User Requirements Study in Order to Improve the Quality of Social Robots for Dementia Patients

Authors: Konrad Rejdak

Abstract:

Introduction: Neurodegenerative diseases are frequently accompanied by the loss of, and unwanted changes in, functional independence, social relationships, and economic circumstances. The achievements of social robots to date are projected to improve the multidimensional quality of life of people with cognitive impairment and others. Objectives: To identify particular human needs in the context of the changes occurring in the course of neurodegenerative diseases. Methods: Based on 110 surveys of medical staff, patients, and caregivers performed at the Medical University of Lublin, the users' needs were prioritized as high, medium, or low. The surveys covered four aspects: user acceptance, functional requirements, the design of the robotic assistant, and preferred types of human-robot interaction. Results: Completed questionnaires were received from 50 medical staff, 30 caregivers, and 30 potential users. Over 90% of the respondents in each of the three groups accepted a robotic assistant as a potential caregiver. The highest-priority functional capability of the assistive technology was handling emergencies in a private home, such as recognizing life-threatening situations and reminding the user about medication intake. Regarding the design of the robotic assistant, the majority of respondents preferred an anthropomorphic appearance with a positive, emotionally expressive face. The most important types of human-robot interaction were voice control and a touchscreen. Conclusion: The results of this study may contribute to a better understanding of the system and user requirements for the development of a service robot intended to support patients with dementia.

Keywords: assistant robot, dementia, long term care, patients

Procedia PDF Downloads 149
10548 Structural Health Monitoring of the 9-Story Torre Central Building Using Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

The Torre Central building is a 9-story shear-wall structure located in Santiago, Chile, that has been instrumented since 2009. Events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded, so the building can serve as a full-scale benchmark for evaluating the structural health monitoring method developed here. The first part of this article presents an analysis of inter-story drifts and of changes in the first system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement) as baseline indicators of the occurrence of damage. During the 2010 Chile earthquake, the system frequencies decreased by approximately 24% in the EW motion and 27% in the NS motion; near the end of the shaking, an increase of about 17% was detected in the EW motion. The second part of this article presents a structural health monitoring (SHM) method based on changes in wave travel time (the wave method) within a layered shear beam model of the structure. If structural damage occurs, the velocity of waves propagating through the structure changes. The wave method measures the shear-wave propagation velocities from impulse responses computed from data recorded at various locations inside the building. Our analysis shows that the detected changes in wave velocities are consistent with the observed damage. On this basis, the wave method is shown to be suitable for implementation in structural health monitoring systems.
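The travel-time estimate at the core of the wave method can be sketched as follows; the impulse responses, sampling interval, and building height below are illustrative values, not the Torre Central records:

```python
def travel_time(resp_base, resp_roof, dt):
    """Lag (in seconds) that best aligns the roof impulse response
    with the basement response, found by brute-force
    cross-correlation over all non-negative lags."""
    n = len(resp_base)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(n):
        c = sum(resp_roof[i] * resp_base[i - lag] for i in range(lag, n))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return best_lag * dt

dt = 0.01                 # sampling interval, s (assumed)
base = [0.0] * 50
base[5] = 1.0             # virtual source pulse at the basement
roof = [0.0] * 50
roof[17] = 0.8            # same pulse arriving at the roof 12 samples later
height = 24.0             # basement-to-roof height, m (assumed)
tau = travel_time(base, roof, dt)
velocity = height / tau   # shear-wave velocity estimate, m/s
```

A damage indicator then follows by comparing velocities estimated before and after an event: a drop in velocity signals possible stiffness loss.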

Keywords: Chile earthquake, damage detection, earthquake response, impulse response, layered shear beam, structural health monitoring, Torre Central building, wave method, wave travel time

Procedia PDF Downloads 359
10547 Biomechanical Performance of the Synovial Capsule of the Glenohumeral Joint with a BANKART Lesion through Finite Element Analysis

Authors: Duvert A. Puentes T., Javier A. Maldonado E., Ivan Quintero., Diego F. Villegas

Abstract:

Computational mechanics is a powerful tool for studying the performance of complex models; the structure of the human body is one example. This study used several types of software to build a 3D model of the glenohumeral joint and apply finite element analysis. The main objective was to study the change in the biomechanical properties of the joint when it presents an injury, specifically a Bankart lesion, which consists of the detachment of the anteroinferior labrum from the glenoid. The focus of the study was the stress and strain distributions in the soft tissues. First, a 3D model of a joint without any pathology was built as a control, using segmentation software for the bones supported by medical imagery, and a cadaveric model to represent the soft tissue. The joint was prepared in CAD in the position required to simulate a compression and external rotation test. The healthy model was then submitted to finite element analysis, and the results were validated against experimental data. A mesh sensitivity analysis was performed on the validated model to obtain the best mesh size. Finally, the geometry of the 3D model was modified to imitate a Bankart lesion: the contact zone between the glenoid and the labrum was slightly separated to simulate tissue detachment. The finite element analysis was applied again to this new geometry, and the results were compared with those of the control model. The data gathered in this study can improve the understanding of labrum tears. Nevertheless, it is important to remember that computational analyses are approximations and that the initial data were taken from an in vitro assay.

Keywords: biomechanics, computational model, finite elements, glenohumeral joint, Bankart lesion, labrum

Procedia PDF Downloads 157
10546 Effects of Occupational Therapy on Children with Unilateral Cerebral Palsy

Authors: Sedef Şahin, Meral Huri

Abstract:

Cerebral palsy (CP) is the most frequent cause of physical disability in children, with a rate of 2.9 per 1000 live births. Activity-focused intervention is known to improve function and to reduce activity limitations and barriers to participation in children with disabilities. The aim of the study was to assess the effects of occupational therapy on the level of fatigue, activity performance, and satisfaction in children with unilateral cerebral palsy. Twenty-two children with hemiparetic cerebral palsy (mean age: 9.3 ± 2.1 years; Gross Motor Function Classification System (GMFCS) levels I to V: I = 54%, II = 23%, III = 14%, IV = 9%, V = 0%; Manual Ability Classification System (MACS) levels I to V: I = 40%, II = 32%, III = 14%, IV = 10%, V = 4%) were assigned to a 6-week occupational therapy program. A Visual Analogue Scale (VAS) was used to rate the intensity of the fatigue they experienced at the time on a 10-point Likert scale (1-10). Activity performance and satisfaction were measured with the Canadian Occupational Performance Measure (COPM). A client-centered occupational therapy intervention was designed according to the COPM results. Pre- and post-intervention results were compared with the nonparametric Wilcoxon test. Thirteen of the children were right-handed, and nine were left-handed. Six weeks of intervention showed a statistically significant difference in the level of fatigue compared with the first assessment (p<0.05). The mean first and second activity performance scores were 4.51 ± 1.70 and 7.35 ± 2.51, respectively, a statistically significant difference (p<0.01). The mean first and second activity satisfaction scores were 2.30 ± 1.05 and 5.51 ± 2.26, respectively, also a statistically significant difference (p<0.01).
Occupational therapy is an evidence-based approach, and the occupational therapy interventions implemented by therapists were clinically effective on the severity of fatigue, activity performance, and satisfaction when implemented individually over 6 weeks.
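The paired nonparametric comparison above can be sketched with a minimal Wilcoxon signed-rank statistic; the paired scores below are invented for illustration and are not the study's data:

```python
def wilcoxon_signed_rank(before, after):
    """Rank sums (W+, W-) of the signed paired differences: zero
    differences are dropped and tied absolute differences share
    their average rank."""
    diffs = [a - b for b, a in zip(before, after) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # group indices whose absolute differences are tied
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0   # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus

# Invented paired COPM-style scores (1-10) before and after therapy.
before = [4.0, 5.0, 3.5, 6.0, 4.5]
after = [7.5, 7.0, 6.0, 6.0, 8.0]
w_plus, w_minus = wilcoxon_signed_rank(before, after)
# The smaller of the two rank sums is then compared with the critical
# value for the number of non-zero pairs (or a normal approximation).
```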

Keywords: activity performance, cerebral palsy, fatigue, occupational therapy

Procedia PDF Downloads 232
10545 Investigating the Public’s Perceptions and Factors Contributing to the Management of Household Solid Waste in Rural Communities: A Case Study of Two Contrasting Rural Wards in the Greater Tzaneen Municipality

Authors: Dimakatso Machetele, Clare Kelso, Thea Schoeman

Abstract:

In developing countries such as India, China, and South Africa, the disposal of household solid waste in rural areas is of great concern. Rural communities face numerous challenges, including the absence of waste collection services and sanitation facilities. Inadequate provision of waste collection and sanitation services results in the occurrence of infectious diseases, e.g., malaria. The gap in household solid waste management between rural and urban communities, whereby urban communities have better waste management services than rural areas, is an environmental injustice towards rural communities. The unequal distribution of waste management infrastructure in South Africa stems from the spatial inequalities of the country's apartheid history. The Limpopo province has a higher proportion of households without municipal waste collection services. The present research investigates the public's perceptions of, and the factors contributing to, the management of household solid waste in two contrasting rural wards of the Greater Tzaneen Municipality. There are limited data and few studies on the management of household solid waste in rural areas, and specifically for the Greater Tzaneen Municipality in the Limpopo province, South Africa. The findings of the study will lead to recommendations for the Greater Tzaneen Municipality, for rural municipalities in South Africa, and more widely, to explore sustainable methods of managing household solid waste and economic opportunities within the waste management sector that could alleviate poverty in rural communities.

Keywords: rural, household solid waste, perceptions, waste management

Procedia PDF Downloads 109
10544 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, the costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying adequately sized systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated with battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is: how much unserved (or curtailed) energy is acceptable? Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to cover longer periods of low solar energy production. Each option increases the total cost and provides a benefit that is difficult to quantify accurately. This paper presents an approach to quantifying the cost-benefit of adding resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used in this paper.
These curves are measured at the consumer location under the conditions that exist at the site and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (probably a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
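A minimal sketch of the mismatch calculation described above, walking paired minute-by-minute curves through a simple battery model; the curves and battery size are an invented six-minute example, not measured weekly data:

```python
def unserved_energy(consumption_wh, production_wh, battery_capacity_wh):
    """Walk the curves minute by minute, charging the battery with any
    surplus and discharging it to cover any deficit; returns
    (loss-of-energy probability, expected unserved energy in Wh)."""
    soc = 0.0                     # battery state of charge, Wh
    short_minutes, unserved = 0, 0.0
    for load, gen in zip(consumption_wh, production_wh):
        surplus = gen - load
        if surplus >= 0:
            soc = min(battery_capacity_wh, soc + surplus)   # charge, capped
        else:
            deficit = -surplus
            draw = min(soc, deficit)                        # discharge
            soc -= draw
            if deficit > draw:                              # curtailment
                short_minutes += 1
                unserved += deficit - draw
    loep = short_minutes / len(consumption_wh)
    return loep, unserved

# Tiny example: generation covers only the first half of the window.
load = [10.0] * 6
gen = [20.0, 20.0, 20.0, 0.0, 0.0, 0.0]
loep, eue = unserved_energy(load, gen, battery_capacity_wh=15.0)
# The battery stores 15 Wh of the 30 Wh surplus, so 15 Wh of the
# 30 Wh evening deficit goes unserved.
```

Re-running the same calculation with incrementally larger battery or production values yields the incremental benefit-cost curve the paper describes.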

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 230
10543 Molecular Dynamic Simulation of Cold Spray Process

Authors: Aneesh Joshi, Sagil James

Abstract:

The Cold Spray (CS) process deposits solid particles onto a substrate at impact velocities above a certain critical value. Unlike thermal spray processes, the CS process does not melt the particles, so they retain their original physical and chemical properties. These characteristics make the CS process ideal for various engineering applications involving metals, polymers, ceramics, and composites. The bonding mechanism involved in the CS process is extremely complex given the dynamic nature of the process. Though the CS process offers great promise for several engineering applications, the realization of its full potential is limited by the lack of understanding of the complex mechanisms involved and of the effect of critical process parameters on the deposition efficiency. The goal of this research is to understand the complex nanoscale mechanisms involved in the CS process. The study uses the Molecular Dynamics (MD) simulation technique to understand the material deposition phenomenon during the CS process. The impact of a single-crystal copper nanoparticle on a copper substrate is modeled under varying process conditions. The quantitative results of impacts at different velocities, impact angles, and particle sizes are evaluated using the flattening ratio, the von Mises stress distribution, and the local shear strain. The study finds that the flattening ratio, and hence the quality of deposition, was highest for an impact velocity of 700 m/s, a particle size of 20 Å, and an impact angle of 90°. The stress and strain analysis revealed regions of shear instability at the periphery of the impact, as well as plastic deformation of the particles after the impact. The results of this study can be used to augment our existing knowledge of CS processes.

Keywords: cold spray process, molecular dynamics simulation, nanoparticles, particle impact

Procedia PDF Downloads 362
10542 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes contrast with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and supply water needs by constructing dams. In the optimal utilization of reservoirs, considering multiple important goals simultaneously is of very high importance. For this study, 46 years of data (1955 to 2001) for the Bakhtiari and Roudbar dams were used. First, an appropriate objective function was specified and, using the differential evolution (DE) algorithm, the rule curve was developed. The operation policy based on the rule curves was then compared with the standard operation policy. The proposed method distributed the deficit over the whole year and inflicted the lowest damage on the system. The standard deviation of the monthly shortfall in each year was smaller with the proposed algorithm than with the other two methods. The results show that intermediate values of the coefficients F and Cr give the optimum performance and prevent the DE algorithm from being trapped in local optima; the optimal values are 0.6 for F and 0.5 for Cr. After finding the best combination of F and Cr, the effect of population size was examined independently: populations of 4, 25, 50, 100, 500, and 1000 members were studied for two generation counts (G = 50 and 100). The results indicate that 200 generations are suitable for the optimization. The runtime increases almost linearly with population size, which indicates the effect of the population on the algorithm's runtime; hence, specifying a suitable population size is very important for obtaining optimal results. The standard operation policy had a better reliability percentage but inflicted severe vulnerability on the system. The results obtained in low-rainfall years were very good compared with the other comparative methods.
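A minimal DE/rand/1/bin sketch using the F = 0.6 and Cr = 0.5 values reported above; the sphere function stands in for the reservoir objective, which the abstract does not reproduce:

```python
import random

def differential_evolution(f, bounds, pop_size=25, generations=100,
                           F=0.6, CR=0.5, seed=1):
    """DE/rand/1/bin: mutation with scale F, binomial crossover CR."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donors, all different from the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)   # guarantees one mutated gene
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if rng.random() < CR or j == j_rand else pop[i][j]
                for j in range(dim)
            ]
            trial_cost = f(trial)
            if trial_cost < cost[i]:      # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy objective: 3-variable sphere function, minimum 0 at the origin.
best_x, best_cost = differential_evolution(
    lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

For a reservoir problem, each candidate vector would encode the rule-curve parameters and `f` would penalize supply deficits.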

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 73
10541 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process, and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools for analyzing such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., they do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain, with much attention given to systems that manage the phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high performance computing resources and analysis techniques currently available or in development. It builds these into The Ark, an open-source web-based system designed to manage medical data.
SPARK provides a next-generation biomedical data management solution that is based upon a novel Micro-Service architecture and Big Data technologies. The system serves to demonstrate the applicability of Micro-Service architectures for the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e. importing a GWAS dataset) and the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets, and enabling cutting edge analysis approaches that have previously been out of reach for many medical researchers.

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 263
10540 Performance Evaluation of Solid Lubricant Characteristics at Different Sliding Conditions

Authors: Suresh Kumar Reddy Narala, Rakesh Kumar Gunda

Abstract:

In modern industry, mechanical parts are subjected to friction and wear, leading to heat generation, which affects the reliability, life, and power consumption of machinery. To reduce the tribological losses due to friction and wear, a lubricant with suitable viscous properties allows very smooth relative motion between two sliding surfaces. Advances in modern tribology have facilitated the use of solid lubricants in various industrial applications. Solid lubricant additives that form a viscous thin film between the sliding surfaces can adequately wet and adhere to the work surface. In the present investigation, an attempt has been made to evaluate the tribological behavior of various solid lubricants, namely MoS2, graphite, and boric acid, under different sliding conditions. The base oil used in this study was SAE 40 oil with a viscosity of 220 cSt at 40°C. The tribological properties were measured on a pin-on-disc tribometer, and an experimental set-up was developed for the effective supply of solid lubricants to the pin-disc interface zone. The results show that the friction coefficient increases with increasing applied load for all the environments considered. The MoS2 solid lubricant exhibits a larger load-carrying capacity than graphite and boric acid. The present work also contributes to the understanding of the solid lubricant film thickness distribution, measured using the potential contact technique, under different sliding conditions. The results presented here are expected to form a scientific basis for selecting the best solid lubricant in various industrial applications to minimize friction and wear.

Keywords: friction, wear, temperature, solid lubricant

Procedia PDF Downloads 346
10539 Microencapsulation of Phenobarbital by Ethyl Cellulose Matrix

Authors: S. Bouameur, S. Chirani

Abstract:

The aim of this study was to evaluate the potential use of ethyl cellulose in the preparation of microspheres as a drug delivery system for the sustained release of phenobarbital. The microspheres were prepared by the solvent evaporation technique using ethyl cellulose as the polymer matrix at a 1:2 ratio, dichloromethane as the solvent, and 1% polyvinyl alcohol as the processing medium to solidify the microspheres. Size, shape, drug loading capacity, and entrapment efficiency were studied.
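Drug loading capacity and entrapment efficiency are conventionally computed from the assayed drug content of the recovered microspheres. The following sketch uses the standard definitions; the batch masses are hypothetical illustration values, not results from this study.

```python
def drug_loading_pct(drug_in_microspheres_mg: float, microsphere_mass_mg: float) -> float:
    """Drug loading (%) = mass of drug in microspheres / total microsphere mass * 100."""
    return 100.0 * drug_in_microspheres_mg / microsphere_mass_mg

def entrapment_efficiency_pct(actual_drug_mg: float, theoretical_drug_mg: float) -> float:
    """Entrapment efficiency (%) = assayed drug content / drug initially added * 100."""
    return 100.0 * actual_drug_mg / theoretical_drug_mg

# Hypothetical batch: 300 mg phenobarbital fed with 600 mg ethyl cellulose (1:2),
# 850 mg of microspheres recovered, assayed to contain 240 mg of drug.
print(f"drug loading         : {drug_loading_pct(240, 850):.1f} %")
print(f"entrapment efficiency: {entrapment_efficiency_pct(240, 300):.1f} %")
```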

Keywords: phenobarbital, microspheres, ethyl cellulose, polyvinyl alcohol

Procedia PDF Downloads 359
10538 Application of RayMan Model in Quantifying the Impacts of the Built Environment and Surface Properties on Surrounding Temperature

Authors: Maryam Karimi, Rouzbeh Nazari

Abstract:

Introduction: Understanding the thermal distribution of the micro-urban climate has become necessary for urban planners and designers due to the impact of complex micro-scale features of the Urban Heat Island (UHI) on the built environment and public health. Understanding the interrelation between urban components and thermal patterns can therefore assist planners in the proper addition of vegetation to the built environment, which can minimize the UHI impact. To characterize the need for urban green infrastructure (UGI) through better urban planning, this study proposes the use of the RayMan model to measure the impact of air quality and increased temperature based on urban morphology in selected metropolitan cities. This project measures the impact of the built environment for urban and regional planning using human-biometeorological evaluations of the mean radiant temperature (Tmrt). Methods: We utilized the RayMan model to estimate Tmrt in an urban environment, incorporating the location and height of buildings and trees, as a supplemental tool in urban planning and street design. The estimated Tmrt values are compared with existing surface and air temperature data to find the actual temperature felt by pedestrians. Results: Our current results suggest a strong relationship between the sky-view factor (SVF) and increased surface temperature in megacities based on current urban morphology. Conclusion: This study helps quantify the impacts of the built environment and surface properties on the surrounding temperature, identify priority urban neighborhoods by analyzing Tmrt and air quality data at the pedestrian level, and characterize the cooling potential of urban green infrastructure.
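In human-biometeorological models of this kind, Tmrt can be derived from six-directional short- and long-wave radiation fluxes using the standard integral-radiation approach (weighting factors for a standing person, short-wave absorption coefficient, and body emissivity). The sketch below follows that general formulation; the coefficient values and flux inputs are typical illustrative assumptions, not parameters or data from this study or from RayMan itself.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ALPHA_K = 0.7     # assumed short-wave absorption coefficient of the human body
EPS_P = 0.97      # assumed long-wave emissivity of the human body

def tmrt_six_directions(shortwave, longwave,
                        weights=(0.22, 0.22, 0.22, 0.22, 0.06, 0.06)):
    """Mean radiant temperature (deg C) from six-directional radiation fluxes
    (E, W, N, S, up, down), using the integral-radiation formulation.
    The weights are those commonly used for a standing person."""
    # Mean radiant flux density absorbed by the body
    s_str = sum(w * (ALPHA_K * k + EPS_P * l)
                for w, k, l in zip(weights, shortwave, longwave))
    # Invert the Stefan-Boltzmann law and convert K -> deg C
    return (s_str / (EPS_P * SIGMA)) ** 0.25 - 273.15

# Hypothetical midday fluxes in W m^-2 (strong downward short-wave component)
k_fluxes = [100, 80, 60, 90, 400, 50]
l_fluxes = [380] * 6
print(f"Tmrt = {tmrt_six_directions(k_fluxes, l_fluxes):.1f} deg C")
```

As a sanity check, with zero short-wave input and long-wave fluxes equal to black-body emission at 300 K, the formula returns 26.85 °C (i.e., 300 K), as expected.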

Keywords: built environment, urban planning, urban cooling, extreme heat

Procedia PDF Downloads 117