Search results for: non uniform utility computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2529

1119 Computational Fluid Dynamic Modeling of Mixing Enhancement by Stimulation of Ferrofluid under Magnetic Field

Authors: Neda Azimi, Masoud Rahimi, Faezeh Mohammadi

Abstract:

Computational fluid dynamics (CFD) simulation was performed to investigate the effect of ferrofluid stimulation on the hydrodynamic and mass transfer characteristics of two immiscible liquid phases in a Y-micromixer. The main purpose of this work was to develop a numerical model able to simulate the hydrodynamics of ferrofluid flow under a magnetic field and determine its effect on mass transfer characteristics. A uniform external magnetic field was applied perpendicular to the flow direction. The volume of fluid (VOF) approach was used for simulating the multiphase flow of the ferrofluid and the two immiscible liquids. The geometric reconstruction scheme (Geo-Reconstruct) based on piecewise linear interpolation (PLIC) was used for reconstruction of the interface in the VOF approach. The mass transfer rate was defined via an equation as a function of the mass concentration gradient of the transported species and added into the phase interaction panel using a user-defined function (UDF). The magnetic field was solved numerically by the Fluent MHD module based on solving the magnetic induction equation. CFD results were validated against experimental data and good agreement was achieved; the maximum relative error for the extraction efficiency was about 7.52%. It was shown that ferrofluid actuation by a magnetic field can be considered an efficient mixing agent for liquid-liquid two-phase mass transfer in microdevices.

Keywords: CFD modeling, hydrodynamic, micromixer, ferrofluid, mixing

Procedia PDF Downloads 186
1118 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study presents an exploration into the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland by using online text extraction and the subsequent mining of this data. Web scraping of the livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found amongst both the poultry and pig sub-forums. Topic modeling appears to be a useful method of unsupervised classification regarding this form of data, as it has produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but the requirement for human validation of said data is crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
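The bi-gram counting step described above can be sketched in a few lines; the forum snippets below are invented for illustration and are not drawn from the scraped data.

```python
# Count bi-grams (adjacent word pairs) across a small set of forum posts.
from collections import Counter

posts = [
    "feeding schedule for weaner pigs",
    "best feeding schedule before slaughter",
    "slaughter and disposal regulations for smallholders",
]

def bigrams(text):
    """Return the list of adjacent word pairs in a lowercased post."""
    words = text.lower().split()
    return list(zip(words, words[1:]))

counts = Counter(bg for post in posts for bg in bigrams(post))
print(counts.most_common(1))  # → [(('feeding', 'schedule'), 2)]
```

Frequent bi-grams such as these are what surface the recurring husbandry themes (feeding, breeding, slaughter, disposal) before topic modelling is applied.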

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP

Procedia PDF Downloads 85
1117 Research on Container Housing: A New Form of Informal Housing on Urban Temporary Land

Authors: Lufei Mao, Hongwei Chen, Zijiao Chai

Abstract:

Informal housing is a widespread phenomenon in developing countries. In many newly emerging cities in China, rapid urbanization has led to an influx of population as well as a shortage of housing. Against this background, container housing, a new form of informal housing, has gradually appeared on a small scale on urban temporary land in recent years. Container housing, as its name implies, transforms containers into small houses for migrant workers to live in. Scholars in other countries have established sound theoretical frameworks for the study of informal housing, but research on this small-scale housing form remains limited. Unlike the cases in developed countries, these houses, which fall outside urban planning, bring about various environmental, economic, social and governance issues. To characterise this new housing form, a survey of two container housing settlements in Hangzhou, China was carried out. Based on this survey, the paper summarises the features and problems of the infrastructure, environment and social communication of container housing settlements. The results show that the containers lacked basic facilities and were confined to small, disordered plots of temporary land. Moreover, because of deficient management, the rental rights of these containers might not be guaranteed. The paper then analyzes the factors affecting the formation and evolution of container housing settlements, finding that institutional and policy factors, market factors and social factors were the three main factors affecting their formation. Finally, the paper proposes some suggestions for the governance of container housing and the utility pattern of urban temporary land.

Keywords: container housing, informal housing, urban temporary land, urban governance

Procedia PDF Downloads 247
1116 Deploying a Platform as a Service Cloud Solution to Support Student Learning

Authors: Jiangping Wang

Abstract:

This presentation describes the design and implementation of PaaS (platform as a service) cloud-based labs that are used in database-related courses to teach students practical skills. Traditionally, all labs are implemented in a desktop-based environment where students have to install heavy client software to access database servers. In order to release students from that burden, we have successfully deployed a cloud-based solution to support database-related courses, through which students and teachers can practice and learn database topics via cloud access. With its development environment, execution runtime, web server, database server, and collaboration capability, it offers a shared pool of configurable computing resources and a comprehensive environment that supports students’ needs without the complexity of maintaining the infrastructure.

Keywords: PaaS, database environment, e-learning, web server

Procedia PDF Downloads 259
1115 Determining Water Quantity from Sprayer Nozzle Using Particle Image Velocimetry (PIV) and Image Processing Techniques

Authors: M. Nadeem, Y. K. Chang, C. Diallo, U. Venkatadri, P. Havard, T. Nguyen-Quang

Abstract:

Uniform distribution of agro-chemicals is highly important because a significant proportion of agro-chemicals, for example pesticides, is lost during spraying due to non-uniform droplet distribution and off-target drift. Improving the efficiency of spray patterns for different cropping systems would reduce energy use and costs and minimize environmental pollution. In this paper, we examine water jet patterns in order to study the performance and uniformity of water distribution during the spraying process. We present a method to quantify the water amount from a sprayer jet by using a Particle Image Velocimetry (PIV) system. The results of the study will be used to optimize sprayer or nozzle design for chemical application. For this study, ten sets of images were acquired using the following PIV system settings: double-frame mode, a trigger rate of 4 Hz, and a time between pulsed signals of 500 µs. Each set contained a different number of double-framed images (10, 20, 30, 40, 50, 60, 70, 80, 90 and 100) at eight different pressures: 25, 50, 75, 100, 125, 150, 175 and 200 kPa. The PIV images obtained were analysed using custom-made image processing software for droplet and volume calculations. The results showed good agreement between manual and PIV measurements and suggested that the PIV technique coupled with image processing can be used for precise quantification of flow through nozzles. The results also revealed that measuring fluid flow through PIV is reliable and accurate for sprayer patterns.
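As a sketch of the volume-calculation step, a droplet's volume can be estimated from its segmented pixel area by assuming spherical droplets and a known pixel-to-mm calibration; the calibration constant and area value below are invented for illustration, and this is not the authors' actual software.

```python
# Estimate droplet volume from a segmented droplet's pixel area,
# assuming spherical droplets and a known image calibration.
import numpy as np

PIXEL_MM = 0.05  # assumed calibration: mm per pixel (illustrative)

def equivalent_diameter_px(area_px):
    """Diameter of the circle with the same pixel area as the droplet."""
    return 2.0 * np.sqrt(area_px / np.pi)

def droplet_volume_mm3(diameter_px):
    """Volume of a spherical droplet from its imaged diameter."""
    d_mm = diameter_px * PIXEL_MM
    return (np.pi / 6.0) * d_mm ** 3

area = 120  # pixels in one segmented droplet (invented example)
vol = droplet_volume_mm3(equivalent_diameter_px(area))
```

Summing such per-droplet volumes over all droplets in all frames gives a flow estimate that can be compared against manual measurements.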

Keywords: image processing, PIV, quantifying the water volume from nozzle, spraying pattern

Procedia PDF Downloads 227
1114 Utility of CT Perfusion Imaging for Diagnosis and Management of Delayed Cerebral Ischaemia Following Subarachnoid Haemorrhage

Authors: Abdalla Mansour, Dan Brown, Adel Helmy, Rikin Trivedi, Mathew Guilfoyle

Abstract:

Introduction: Diagnosing delayed cerebral ischaemia (DCI) following aneurysmal subarachnoid haemorrhage (SAH) can be challenging, particularly in poor-grade patients. Objectives: This study sought to assess the value of routine CTP in identifying (or excluding) DCI and in guiding management. Methods: Retrospective neuroimaging study at a large UK neurosurgical centre. Subjects included a random sample of adult patients with confirmed aneurysmal SAH who had a CTP scan during their inpatient stay over an 8-year period (May 2014 - May 2022). Data were collected through electronic patient records and PACS. Variables included age, WFNS scale, aneurysm site, treatment, timing of CTP, radiologist report, and DCI management. Results: Over eight years, 916 patients were treated for aneurysmal SAH; this study focused on 466 randomly selected patients. Of this sample, 181 (38.84%) had one or more CTP scans following aneurysm treatment (318 scans in total). The first CTP scan in each patient was performed 1-20 days following ictus (median 4 days). There was radiological evidence of DCI in 83 patients, and no reversible ischaemia was found in 80. Findings were equivocal in the remaining 18. Of the 103 patients treated with clipping, 49 had radiological evidence of DCI, compared with 31 of the 69 patients treated with endovascular embolization. The remaining 9 patients had either unsecured aneurysms or non-aneurysmal SAH. Of the patients with radiological evidence of DCI, 65 had a treatment change following the CTP directed at improving cerebral perfusion. In contrast, treatment was not changed for 61 patients without radiological evidence of DCI. Conclusion: CTP is a useful adjunct to clinical assessment in the diagnosis of DCI and is helpful in identifying patients who may benefit from intensive therapy and those in whom it is unlikely to be effective.

Keywords: SAH, vasospasm, aneurysm, delayed cerebral ischemia

Procedia PDF Downloads 61
1113 Early Detection of Type 2 Diabetes Using the K-Nearest Neighbor Algorithm

Authors: Ng Liang Shen, Ngahzaifa Abdul Ghani

Abstract:

This research aimed at developing an early warning system for pre-diabetics and diabetics by analyzing simple and easily determinable signs and symptoms of diabetes among people living in Malaysia using Particle Swarm Optimized Artificial Neural Networks. With the skyrocketing prevalence of Type 2 diabetes in Malaysia, the system can be used to encourage affected people to seek further medical attention to prevent the onset of diabetes or start managing it early enough to avoid the associated complications. The study sought to find the best predictive variables of Type 2 Diabetes Mellitus, developed a system to diagnose diabetes from these variables using Artificial Neural Networks, and tested the system's accuracy in recognising the patterns underlying diabetes diagnosis at both early and advanced stages.
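A minimal from-scratch k-nearest-neighbour classifier of the kind named in the title is sketched below on synthetic data; the two features (fasting glucose, BMI) and all values are illustrative assumptions, not the study's actual variables or data.

```python
# Toy k-nearest-neighbour classifier: majority vote among the k
# closest training points by Euclidean distance.
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Predict a class label for each query row by k-NN majority vote."""
    preds = []
    for x in X_query:
        dist = np.linalg.norm(X_train - x, axis=1)   # distance to each sample
        nearest = y_train[np.argsort(dist)[:k]]      # labels of k nearest
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

# Invented example: [fasting glucose (mmol/L), BMI] -> 1 = at risk, 0 = not.
X_train = np.array([[5.0, 22], [5.2, 24], [5.1, 21],    # healthy
                    [7.5, 31], [8.0, 33], [7.8, 30]])   # at risk
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([[7.6, 32]]), k=3))  # → [1]
```

Because k-NN needs no training phase, it is a common baseline for this kind of symptom-based screening task.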

Keywords: diabetes diagnosis, Artificial Neural Networks, artificial intelligence, soft computing, medical diagnosis

Procedia PDF Downloads 324
1112 Technology Identification, Evaluation and Selection Methodology for Industrial Process Water and Waste Water Treatment Plant of 3x150 MWe Tufanbeyli Lignite-Fired Power Plant

Authors: Cigdem Safak Saglam

Abstract:

Most thermal power plants use steam as the working fluid in their power cycle. Therefore, in addition to fuel, water is the other main input for thermal plants. Water and steam must be highly pure in order to protect the systems from corrosion, scaling and biofouling. Pure process water is produced in water treatment plants employing several treatment methods. The treatment plant design is selected depending on the raw water source and the required water quality. Although the working principles of fossil-fuel-fired thermal power plants are the same, there is no standard design and equipment arrangement valid for all thermal power plant utility systems. Besides that, there are many other technology evaluation and selection criteria for designing optimal water systems, meeting requirements such as local conditions, environmental restrictions, availability and transport of electricity and other consumables, process water sources and scarcity, land-use constraints, etc. The aim of this study is to explain the methodology adopted for technology selection for the process water preparation and industrial waste water treatment plant in a thermal power plant project located in Tufanbeyli, Adana Province, Turkey. The thermal power plant is fired with indigenous lignite coal extracted from adjacent lignite reserves. This paper addresses all the above-mentioned factors affecting the design of the thermal power plant water treatment facilities (demineralization + waste water treatment) and describes the ultimate design of the Tufanbeyli Thermal Power Plant Water Treatment Plant.

Keywords: thermal power plant, lignite coal, pretreatment, demineralization, electrodialysis, recycling, ash dampening

Procedia PDF Downloads 471
1111 Laser Shock Peening of Additively Manufactured Nickel-Based Superalloys

Authors: Michael Munther, Keivan Davami

Abstract:

One significant roadblock for additively manufactured (AM) parts is the buildup of residual tensile stresses during the fabrication process. These residual stresses form due to the intense localized thermal gradients and high cooling rates that cause non-uniform material expansion/contraction and mismatched strain profiles during powder-bed fusion techniques, such as direct metal laser sintering (DMLS). The residual stresses adversely affect the fatigue life of AM parts. Moreover, if the residual stresses exceed the material’s yield strength, they will lead to acute geometric distortion. These issues limit the use and acceptance of AM components in safety-critical applications. Herein, we discuss the laser shock peening method as an advanced technique for the manipulation of residual stresses in AM parts. An X-ray diffraction technique is used to measure the residual stresses before and after the laser shock peening process. The hardness of the structures is also measured using a nanoindentation technique. Maps of nanohardness and modulus are obtained from the nanoindentation, and a correlation is made between the residual stresses and the mechanical properties. The results indicate that laser shock peening is able to induce compressive residual stresses in the structure that mitigate the tensile residual stresses and increase the hardness of AM IN718, a superalloy, by almost 20%. No significant changes were observed in the modulus after laser shock peening. The results strongly suggest that laser shock peening can be used as an advanced post-processing technique to extend the service lives of critical components in various applications.

Keywords: additive manufacturing, Inconel 718, laser shock peening, residual stresses

Procedia PDF Downloads 119
1110 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community

Authors: Mohamed Ghorab

Abstract:

Locally generated and distributed thermal and electrical energy systems are envisioned in the near future to reduce the transmission losses of centralized systems. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and are incorporated in the energy distribution between hubs. The energy generated by each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there is interaction and energy exchange between the hubs. Network energy system and CO2 optimization across six hubs representing a Canadian community are investigated in this study. Three different technology scenarios are studied to meet both the thermal and electrical demand loads of the six hubs. The conventional system is used as the first technology scenario and as the reference case. The conventional system includes a boiler to provide thermal energy, while electrical energy is imported from the utility grid. The second scenario includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates CHP and an Organic Rankine Cycle (ORC), where the thermal waste energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model the DER system optimization based on energy economics and CO2 emission analyses. The results are compared with the conventional energy system. The results show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP and ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, versus 9.3% for scenario 2 (CHP system).

Keywords: distributed energy resources, network energy system, optimization, microgeneration system

Procedia PDF Downloads 186
1109 Cd1−xMnxSe Thin Film Preparation by CBD: Aspects of Optical and Electrical Properties

Authors: Jaiprakash Dargad

Abstract:

CdMnSe dilute magnetic (semimagnetic) semiconductors have become the focus of intense research due to their interesting combination of magnetic and semiconducting properties, and are employed in a variety of devices including solar cells, gas sensors, etc. A series of thin films of this material, Cd1−xMnxSe (0 ≤ x ≤ 0.5), were therefore synthesized onto precleaned amorphous glass substrates using a solution growth technique. The sources of cadmium (Cd2+) and manganese (Mn2+) were aqueous solutions of cadmium sulphate and manganese sulphate, and selenium (Se2−) was extracted from a reflux of sodium selenosulphite. The deposition parameters such as temperature, time of deposition, speed of mechanical churning, pH of the reaction mixture, etc. were optimized to yield good-quality deposits. The as-grown samples were thin, relatively uniform, smooth and tightly adherent to the substrate support. The colour of the deposits changed from deep red-orange to yellowish-orange as the composition parameter x was varied from 0 to 0.5. The terminal layer thickness decreased with increasing x. The optical energy gap decreased from 1.84 eV to 1.34 eV as x changed from 0 to 0.5. The coefficient of optical absorption is of the order of 10⁴-10⁵ cm⁻¹ and the transition (m = 0.5) is of the direct band-to-band type. The dc electrical conductivities were measured at room temperature and in the temperature range 300 K - 500 K. It was observed that the room-temperature electrical conductivity increased with the composition parameter x up to 0.1, gradually decreasing thereafter. Thermopower measurements showed n-type conduction in these films.

Keywords: dilute semiconductor, reflux, CBD, thin film

Procedia PDF Downloads 224
1108 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm

Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan

Abstract:

Radio frequency identification (RFID) technology has been attracting considerable attention with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products they want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. A modified K-Means cluster-based heuristic approach, a hybrid Genetic Algorithm (GA)-Simulated Annealing (SA) approach, a hybrid K-Means cluster-based heuristic-GA, and a hybrid K-Means cluster-based heuristic-GA-SA are proposed for the open loop supply chain network problem. The study incorporated uniform and combined crossover operators in the GAs for solving the open loop supply chain distribution network problem. The algorithms were tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the hybrid K-Means cluster-based heuristic-GA-SA is superior to the other methods for solving the open loop supply chain distribution network problem.
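The clustering stage of such a hybrid can be illustrated with a plain k-means in a few lines, grouping customer sites so that each cluster is served by one distribution centre; the coordinates and value of k are invented, and the paper's GA/SA refinement stage is not shown.

```python
# Plain k-means: alternate nearest-centre assignment and centre update.
import numpy as np

def kmeans(points, k, n_iter=20, seed=0):
    """Cluster 2-D points into k groups; return centres and labels."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), k, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each point to its nearest centre (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return centres, labels

# Invented customer sites in two well-separated regions.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                [10.0, 10.0], [11.0, 10.0], [10.0, 11.0]])
centres, labels = kmeans(pts, k=2)
```

In the hybrid scheme, clusters like these seed candidate depot locations, which the GA and SA stages then refine against the full distribution cost.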

Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing

Procedia PDF Downloads 157
1107 Energy Absorption Capacity of Aluminium Foam Manufactured by Kelvin Model Loaded Under Different Biaxial Combined Compression-Torsion Conditions

Authors: H. Solomon, A. Abdul-Latif, R. Baleh, I. Deiab, K. Khanafer

Abstract:

Aluminum foams were developed and tested due to their high energy absorption abilities for multifunctional applications. The aim of this research work was to investigate experimentally the effect of quasi-static biaxial loading complexity (combined compression-torsion) on the energy absorption capacity of a highly uniform architecture open-cell aluminum foam based on the Kelvin cell model. The two generated aluminum foams have porosities of 80% and 85%, with spherical pores 11 mm in diameter. These foams were tested by means of several square-section specimens. A patented rig called ACTP (Absorption par Compression-Torsion Plastique) was used to investigate the foam response under quasi-static complex loading paths having different torsional components (i.e., 0°, 37° and 53°). The main mechanical responses of the aluminum foams were studied under simple, intermediate and severe loading conditions. The key responses examined were the stress plateau and the energy absorption capacity of the two foams with respect to loading complexity. It was concluded that the higher the loading complexity and the higher the relative density, the greater the energy absorption capacity of the foam. The highest energy absorption was thus recorded under the most complicated loading path (i.e., biaxial-53°) for the denser foam (i.e., 80% porosity).

Keywords: open-cell aluminum foams, biaxial loading complexity, foams porosity, energy absorption capacity, characterization

Procedia PDF Downloads 113
1106 Analytical Determination of Electromechanical Coupling Effects on Interlaminar Stresses of Generally Laminated Piezoelectric Plates

Authors: Atieh Andakhshideh, S. Maleki, Sayed Sadegh Marashi

Abstract:

In this paper, the interlaminar stresses of generally laminated piezoelectric plates are presented. The electromechanical coupling effect of the piezoelectric plate is considered, and the governing equations and boundary conditions are derived using the principle of minimum total potential energy. The solution procedure is a three-dimensional multi-term extended Kantorovich method (3DMTEKM). The objective of this paper is to accurately study the coupling influence on the edge effects of piezolaminated plates with finite dimensions, arbitrary lamination lay-ups and under uniform axial strain. These results can provide a benchmark for checking the accuracy of other numerical methods or two-dimensional laminate theories. To verify the accuracy of the 3DMTEKM, examples are first simplified to special cases such as cross-ply or symmetric laminations and compared with other analytical solutions available in the literature. Excellent agreement is achieved in the validation tests, and further numerical results are presented for general cases. The numerical examples indicate the singular behavior of the interlaminar normal/shear stresses and electric field strength components near the edges of the piezolaminated plates. The coupling influence on the free edge effect with respect to the lamination lay-up of the piezoelectric plate is studied in several examples.

Keywords: electromechanical coupling, generally laminated piezoelectric plates, Kantorovich method, edge effect, interlaminar stresses

Procedia PDF Downloads 141
1105 New Approaches to the Determination of the Time Costs of Movements

Authors: Dana Kristalova

Abstract:

This article deals with terrain conditions and their effect on the movement of vehicles, in particular on the speed and safety of the movement of people and vehicles. Finding optimal routes off the road network is studied here in a military context, but the problem occurs in civilian settings as well, primarily in crisis situations or when providing assistance after natural disasters such as floods, fires and storms. Such movements require route optimization that includes the effects of geographical factors. The most important factor is the terrain surface, which depends on several geographical factors such as slope, soil conditions, micro-relief, surface type and meteorological conditions. Their combined impact is expressed by a coefficient of deceleration, which can support a commander's decision-making. New approaches and methods of terrain testing, mathematical computing, mathematical statistics and cartometric investigation are necessary parts of this evaluation.
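As an illustrative sketch (an assumption for illustration, not the paper's calibrated model), the per-factor speed reductions can be combined into a single deceleration coefficient by multiplication:

```python
# Each factor is the fraction of nominal speed retained under that
# condition; all values below are invented example figures.
slope_factor = 0.80      # moderate uphill slope
soil_factor = 0.90       # soft soil
surface_factor = 0.95    # grass cover
weather_factor = 0.85    # rain

deceleration = slope_factor * soil_factor * surface_factor * weather_factor
nominal_speed_kmh = 60.0
effective_speed_kmh = nominal_speed_kmh * deceleration
print(round(effective_speed_kmh, 1))  # → 34.9
```

A route planner can then weight each terrain cell's traversal time by its coefficient when searching for the fastest off-road path.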

Keywords: surface of a terrain, movement of vehicles, geographical factor, optimization of routes

Procedia PDF Downloads 457
1104 The Utilization of Tea Extract within the Realm of the Food Industry

Authors: Raana Babadi Fathipour

Abstract:

Tea, a beverage widely cherished across the globe, has captured the interest of scholars with its recently acknowledged health advantages, particularly its proven ability to help ward off ailments such as cancer and cardiovascular disease. Moreover, lipid oxidation poses a significant challenge for food product development. In light of these concerns, the present work explores diverse methodologies for extracting polyphenols from various types of tea leaves and examines their utility within the ever-evolving food industry. Based on the findings of this investigation, the fundamental constituents of tea are polyphenols with intrinsic health-enhancing properties, including an assortment of catechins, namely epicatechin, epigallocatechin, epicatechin gallate, and epigallocatechin gallate. Moreover, gallic acid, flavonoids, flavonols and theaflavins have also been detected in this aromatic beverage. Of the components examined in this study's analysis, catechin emerges as particularly beneficial. Multiple techniques have emerged over time to extract key compounds from tea plants, including solvent-based extraction, microwave-assisted water extraction, and ultrasound-assisted extraction. In particular, microwave-assisted water extraction is considered a viable scheme that effectively recovers valuable polyphenols from tea extracts. This methodology appears adaptable for implementation within sectors such as dairy production along with the meat and oil industries.

Keywords: camellia sinensis, extraction, food application, shelf life, tea

Procedia PDF Downloads 60
1103 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have been growing in size and demand continuously in the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
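As a sketch of the underlying idea (with synthetic data and an assumed 0.95 quantile target, not the paper's actual models), quantile regression minimises the pinball loss, and provisioning at an upper quantile of usage rather than the observed peak is what yields the energy saving:

```python
# Quantile-based provisioning on synthetic hourly CPU-usage data.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic hourly CPU usage (%) for 30 days: daily cycle plus noise.
hours = np.arange(24 * 30)
usage = 40 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss: the objective minimised by quantile regression."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Provision at the empirical 0.95 quantile per hour-of-day instead of the peak.
by_hour = usage.reshape(30, 24)
provision = np.quantile(by_hour, 0.95, axis=0)  # conservative hourly budget
peak = by_hour.max(axis=0)

# Energy saved relative to always provisioning the observed peak.
saving = 1 - provision.sum() / peak.sum()
```

The quantile level acts as the SLA knob: a higher quantile means fewer violations but less energy saved.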

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 102
1102 Analyzing Information Management in Science and Technology Institute Libraries in India

Authors: P. M. Naushad Ali

Abstract:

India’s strength in basic research is recognized internationally. Science and Technology research in India is performed by six distinct types of bodies or organizations: Cooperative Research Associations, Autonomous Research Councils, Institutes under Ministries, Industrial R&D Establishments, Universities, and Private Institutions. Almost all of these institutions have a well-established library/information center to cater to the information needs of their users, such as scientists and technologists. Information Management (IM) comprises the disciplines concerned with the study and the effective and efficient management of information, resources, products and services, as well as the understanding of the technologies involved and the people engaged in this activity. It is also observed that libraries and information centers in India are using modern technologies for the management of various activities and services to serve their users in a better way. Science and Technology libraries in the country are usually better equipped because the investment in Science and Technology in the country is much larger than that in other fields. Thus, most Science and Technology libraries are equipped with modern IT-based tools for the handling and management of library services. Despite this, although Science and Technology libraries have all the characteristics of model organizations in which computer applications are found most successful, the adoption of IT-based management tools is not uniform across these libraries. The present study will help establish the level of use of IT-based management tools for information management in Science and Technology libraries in India. Questionnaire, interview, observation and document review techniques were used in data collection. Finally, the author discusses the findings of the study and puts forward some suggestions to improve the quality of Science and Technology institute library services in India.

Keywords: information management, science and technology libraries, India, IT-based tools

Procedia PDF Downloads 388
1101 Kinoform Optimisation Using Gerchberg- Saxton Iterative Algorithm

Authors: M. Al-Shamery, R. Young, P. Birch, C. Chatwin

Abstract:

Computer Generated Holography (CGH) is employed to create digitally defined coherent wavefronts. A CGH can be created using different techniques, such as a detour-phase technique or direct phase modulation to create a kinoform. The detour-phase technique was one of the first techniques used to generate holograms digitally. Its disadvantage is that the reconstructed image often has poor quality due to the limited dynamic range it is possible to record using a medium with reasonable spatial resolution. The kinoform (phase-only hologram) is an alternative technique: the phase of the original wavefront is recorded, but the amplitude is constrained to be constant. The original object does not need to exist physically, and so the kinoform can be used to reconstruct an almost arbitrary wavefront. However, the image reconstructed by this technique contains high levels of noise and is not identical to the reference image. To improve the reconstruction quality of the kinoform, iterative techniques such as the Gerchberg-Saxton (GS) algorithm are employed. In this paper, the GS algorithm is described for the optimisation of a kinoform used for the reconstruction of a complex wavefront. Iterations of the GS algorithm are applied to determine the phase at a plane (with a known amplitude distribution, often taken as uniform) that satisfies given phase and amplitude constraints in a corresponding Fourier plane. The GS algorithm can be used in this way to enhance the reconstruction quality of the kinoform. Different images are employed as the reference object, and their kinoforms are synthesised using the GS algorithm. The quality of the reconstructed images is quantified to demonstrate the enhanced reconstruction quality achieved by this method.
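The GS iteration described above alternates between the hologram and image planes, enforcing a uniform amplitude in one and the target amplitude in the other. A minimal sketch of this loop, assuming simple Fourier-transform propagation between the two planes (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Estimate the phase-only kinoform whose Fourier transform has the
    given target amplitude. The hologram-plane amplitude is held uniform."""
    # Start from a random phase guess in the hologram plane.
    rng = np.random.default_rng(0)
    phase = rng.uniform(-np.pi, np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Hologram plane: unit (uniform) amplitude, current phase estimate.
        hologram = np.exp(1j * phase)
        # Propagate to the image (Fourier) plane.
        image = np.fft.fft2(hologram)
        # Enforce the amplitude constraint; keep the propagated phase.
        image = target_amplitude * np.exp(1j * np.angle(image))
        # Propagate back and keep only the phase (kinoform constraint).
        phase = np.angle(np.fft.ifft2(image))
    return phase
```

Each pass tightens the match between the reconstructed and target amplitudes while the hologram remains phase-only, which is what makes the kinoform recordable.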

Keywords: computer generated holography, digital holography, Gerchberg-Saxton algorithm, kinoform

Procedia PDF Downloads 522
1100 Numerical Simulation of Footing on Reinforced Loose Sand

Authors: M. L. Burnwal, P. Raychowdhury

Abstract:

Earthquakes have adverse effects on buildings resting on soft soils. Mitigating the response of shallow foundations on soft soil reduces settlement and provides foundation stability. A few methods, such as rocking foundations (used in performance-based design), deep foundations, prefabricated drains, grouting, and vibro-compaction, are used to control pore pressure and enhance the strength of loose soils. One problem with these methods is that the settlement is uncontrollable, leading to differential settlement of the footings and, ultimately, to the collapse of buildings. The present study investigates the utility of geosynthetics as a potential improvement of the subsoil to reduce earthquake-induced settlement of structures. A steel moment-resisting frame building resting on loose, liquefiable, dry soil, subjected to the Uttarkashi 1991 and Chamba 1995 earthquakes, is used for the soil-structure interaction (SSI) analysis. The continuum model can simultaneously simulate the structure, soil, interfaces, and geogrids in the OpenSees framework. Soil is modeled with the PressureDependentMultiYield (PDMY) material model using Quad elements, which provide the stress-strain response at Gauss points, calibrated to predict the behavior of Ganga sand. The model, analyzed with tied-degree-of-freedom contact, reveals that the system responses align with shake-table experimental results. An attempt is made to study the responses of the footing, structure, and geosynthetics with unreinforced and reinforced bases under varying parameters. The results show that geogrid reinforcement of the shallow foundation effectively reduces the settlement by 60%.

Keywords: settlement, shallow foundation, SSI, continuum FEM

Procedia PDF Downloads 185
1099 Mapping a Data Governance Framework to the Continuum of Care in the Active Assisted Living Context

Authors: Gaya Bin Noon, Thoko Hanjahanja-Phiri, Laura Xavier Fadrique, Plinio Pelegrini Morita, Hélène Vaillancourt, Jennifer Teague, Tania Donovska

Abstract:

Active Assisted Living (AAL) refers to systems designed to improve the quality of life, aid in independence, and create healthier lifestyles for care recipients. As the population ages, there is a pressing need for non-intrusive, continuous, adaptable, and reliable health monitoring tools to support aging in place. AAL has great potential to support these efforts with the wide variety of solutions currently available, but insufficient efforts have been made to address concerns arising from the integration of AAL into care. The purpose of this research was to (1) explore the integration of AAL technologies and data into the clinical pathway, and (2) map data access and governance for AAL technology in order to develop standards for use by policy-makers, technology manufacturers, and developers of smart communities for seniors. This was done through four successive research phases: (1) literature search to explore existing work in this area and identify lessons learned; (2) modeling of the continuum of care; (3) adapting a framework for data governance into the AAL context; and (4) interviews with stakeholders to explore the applicability of previous work. Opportunities for standards found in these research phases included a need for greater consistency in language and technology requirements, better role definition regarding who can access and who is responsible for taking action based on the gathered data, and understanding of the privacy-utility tradeoff inherent in using AAL technologies in care settings.

Keywords: active assisted living, aging in place, internet of things, standards

Procedia PDF Downloads 125
1098 Computational Analysis on Thermal Performance of Chip Package in Electro-Optical Device

Authors: Long Kim Vu

Abstract:

The central processing unit in electro-optical devices is a field-programmable gate array (FPGA) chip package, which allows flexible, reconfigurable computing at the cost of considerable energy consumption. Because the chip package is placed in sealed devices built to the IP67 waterproof standard, there is no air circulation, and heat dissipation is a challenge. In this paper, the author successfully modeled a chip package with various interposer materials such as silicon, glass, and organics. Computational fluid dynamics (CFD) was utilized to analyze the thermal performance of the chip package while considering all heat transfer modes (conduction, convection, and radiation) to propose an equivalent heat dissipation model. The logic chip temperature over time is compared between simulation and experimental results, showing excellent correlation and validating the chip modeling and simulation method.

Keywords: CFD, FPGA, heat transfer, thermal analysis

Procedia PDF Downloads 175
1097 Optimisation of Pin Fin Heat Sink Using Taguchi Method

Authors: N. K. Chougule, G. V. Parishwad

Abstract:

The pin fin heat sink is a novel heat transfer device that transfers a large amount of heat with very small temperature differences and also possesses largely uniform cooling characteristics. Pin fins are widely used as elements that provide increased cooling for electronic devices, and increasing demands on the performance of such devices can be observed due to the increasing heat production density of electronic components. For this reason, extensive work is being carried out to select and optimize pin fin elements for increased heat transfer. In this paper, the effects of the design parameters and the optimum design parameters for a pin-fin heat sink (PFHS) under a multi-jet impingement case are investigated, with respect to its thermal performance characteristics, using the Taguchi methodology based on the L9 orthogonal array. Various design parameters, such as pin-fin array size, the gap between the nozzle exit and the impingement target surface (Z/d), and air velocity, are explored by numerical experiment. The average convective heat transfer coefficient is taken as the thermal performance characteristic. Analysis of variance (ANOVA) is applied to find the effect of each design parameter on this characteristic. The results of a confirmation test, run at the optimal combination of design parameter levels, show that this approach is effective in optimizing the PFHS for thermal performance. The Taguchi analysis reveals that all the parameters mentioned above contribute equally to the performance of the heat sink. Experimental results are provided to validate the suitability of the proposed approach.
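As a rough illustration of the Taguchi workflow described above, the sketch below computes a larger-the-better signal-to-noise (S/N) ratio for each run of a hypothetical L9 array and averages it per factor level; the array coding and response values are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical L9 orthogonal array: 3 factors (pin-fin array size, Z/d,
# air velocity) at 3 levels each, coded 0/1/2 for the nine runs.
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

# Hypothetical measured responses: average convective heat transfer
# coefficient (W/m^2.K) for each of the nine runs.
h = np.array([120.0, 135.0, 150.0, 128.0, 142.0, 131.0, 155.0, 125.0, 138.0])

# Larger-the-better S/N ratio for a single observation per run:
# S/N = -10 * log10(1 / y^2).
sn = -10.0 * np.log10(1.0 / h**2)

# Mean S/N per level of each factor: the level with the highest mean
# S/N is the preferred setting for that factor.
for factor in range(L9.shape[1]):
    means = [sn[L9[:, factor] == lvl].mean() for lvl in (0, 1, 2)]
    print(f"factor {factor}: mean S/N by level = {np.round(means, 2)}")
```

ANOVA on the same per-level means would then apportion each factor's contribution to the total variation, which is how the equal-contribution finding above would be quantified.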

Keywords: Pin Fin Heat Sink (PFHS), Taguchi method, CFD, thermal performance

Procedia PDF Downloads 240
1096 A Cost-Benefit Analysis of Routinely Performed Transthoracic Echocardiography in the Setting of Acute Ischemic Stroke

Authors: John Rothrock

Abstract:

Background: The role of transthoracic echocardiography (TTE) in the diagnosis and management of patients with acute ischemic stroke remains controversial. While many stroke subspecialists reserve TTE for selected patients, others consider the procedure obligatory for most or all acute stroke patients. This study was undertaken to assess the cost vs. benefit of 'routine' TTE. Methods: We examined a consecutive series of patients who were admitted to a single institution in 2019 for acute ischemic stroke and underwent TTE. We sought to determine the frequency with which the results of TTE led to a new diagnosis of cardioembolism, redirected therapeutic cerebrovascular management, and at least potentially influenced the short- or long-term clinical outcome. We recorded the direct cost associated with TTE. Results: There were 1076 patients in the study group, all of whom underwent TTE. TTE identified an unsuspected source of possible/probable cardioembolism in 62 patients (6%), confirmed an initially suspected source (primarily endocarditis) in an additional 13 (1%), and produced findings that stimulated subsequent testing diagnostic of possible/probable cardioembolism in 7 patients ( < 1%). TTE results potentially influenced the clinical outcome in a total of 48 patients (4%). With a total direct cost of $1.51 million, the mean cost per case wherein TTE results potentially influenced the clinical outcome in a positive manner was $31,375. Diagnostically and therapeutically, TTE was most beneficial in the 67 patients under the age of 55 who presented with 'cryptogenic' stroke, identifying patent foramen ovale in 21 (31%); closure was performed in 19. Conclusions: The utility of TTE in the setting of acute ischemic stroke is modest, with its yield greatest in younger patients with cryptogenic stroke. Given the greater sensitivity of transesophageal echocardiography in detecting PFO and evaluating the aortic arch, TTE’s role in stroke diagnosis would appear to be limited.

Keywords: cardioembolic, cost-benefit, stroke, TTE

Procedia PDF Downloads 112
1095 Understanding the Linkages of Human Development and Fertility Change in Districts of Uttar Pradesh

Authors: Mamta Rajbhar, Sanjay K. Mohanty

Abstract:

India's progress in achieving replacement-level fertility is largely contingent on fertility reduction in the state of Uttar Pradesh, as the state accounts for 17% of India's population with a low level of development. Though the TFR in the state has declined from 5.1 in 1991 to 3.4 by 2011, this conceals large differences in fertility levels across districts. Using data from multiple sources, this paper tests the hypothesis that improvement in human development significantly reduces fertility levels in the districts of Uttar Pradesh. The unit of analysis is the district; fertility estimates are derived using the reverse survival method (RSM), while human development indices (HDI) are estimated for three periods using the uniform methodology adopted by UNDP. Correlation and linear regression models are used to examine the relationship between fertility change and human development indices across districts. Results show large variation and significant change in fertility levels among the districts of Uttar Pradesh. During 1991-2011, eight districts experienced a decline in TFR of 10-20%, 30 districts of 20-30%, and 32 districts of more than 30%. On the human development side, over the same period, 17 districts recorded an increase of more than 0.170 in HDI, 18 districts an increase in the range of 0.150-0.170, 29 districts between 0.125-0.150, and six districts in the range of 0.1-0.125. The study shows a significant negative relationship between HDI and TFR, with HDI alone explaining 70% of the variation in TFR. Also, the regression coefficient of TFR on HDI has strengthened over time: from -0.524 in 1991 to -0.7477 by 2001 and -0.7181 by 2010. The regression analyses indicate that a 0.1-point increase in HDI value will lead to a 0.78-point decline in TFR. Improving HDI will certainly reduce fertility levels in the districts.
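The reported effect size can be sketched numerically: a 0.78-point TFR decline per 0.1-point HDI increase corresponds to a slope of roughly -7.8 in a TFR-on-HDI regression. The example below fits ordinary least squares to hypothetical district data generated to be consistent with that slope and with HDI explaining about 70% of the TFR variation (all data are synthetic, not the study's):

```python
import numpy as np

# Hypothetical district-level data: 70 districts, HDI in a plausible
# range, TFR generated with slope -7.8 plus noise.
rng = np.random.default_rng(1)
hdi = rng.uniform(0.30, 0.55, 70)
tfr = 6.5 - 7.8 * hdi + rng.normal(0.0, 0.35, 70)

# Ordinary least squares: TFR regressed on HDI.
slope, intercept = np.polyfit(hdi, tfr, 1)

# Predicted change in TFR for a 0.1-point improvement in HDI.
delta_tfr = slope * 0.1

# Share of TFR variation explained by HDI (coefficient of determination).
pred = intercept + slope * hdi
r2 = 1.0 - np.sum((tfr - pred) ** 2) / np.sum((tfr - tfr.mean()) ** 2)
print(f"slope = {slope:.2f}, 0.1 HDI increase -> {delta_tfr:.2f} TFR change, R^2 = {r2:.2f}")
```

With synthetic data of this shape, the fitted slope recovers a figure near -7.8 and the implied per-0.1-point change lands near the reported -0.78.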

Keywords: fertility, HDI, Uttar Pradesh

Procedia PDF Downloads 235
1094 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation, within the context of fitting a PDRM for banks' audit fees. Estimation of the model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted under different specified conditions, and the best-fitting model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
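The assessment criteria named above (variance, ABIAS, MSE, RMSE) can be sketched as a small helper applied to the replicated estimates of a Monte Carlo run; the estimator and its sampling distribution below are hypothetical stand-ins, not the paper's simulation design:

```python
import numpy as np

def assess_estimator(estimates, true_value):
    """Summarise a Monte Carlo run: variance, absolute bias (ABIAS),
    mean square error (MSE) and root mean square error (RMSE) of the
    replicated parameter estimates around the true value."""
    estimates = np.asarray(estimates, dtype=float)
    variance = estimates.var()
    abias = abs(estimates.mean() - true_value)
    mse = np.mean((estimates - true_value) ** 2)
    rmse = np.sqrt(mse)
    return {"variance": variance, "ABIAS": abias, "MSE": mse, "RMSE": rmse}

# Illustration: 1000 replications of a hypothetical slope estimator whose
# sampling distribution is centred slightly off the true value.
rng = np.random.default_rng(0)
true_beta = 2.0
draws = true_beta + 0.05 + rng.normal(0.0, 0.1, 1000)  # small bias + noise
summary = assess_estimator(draws, true_beta)
```

Note the identity MSE = variance + bias^2, which is why a low-variance but biased estimator can still lose to a noisier unbiased one on MSE.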

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 134
1093 Knowledge-Attitude-Practice Survey Regarding High Alert Medication in a Teaching Hospital in Eastern India

Authors: D. S. Chakraborty, S. Ghosh, A. Hazra

Abstract:

Objective: Medication errors are a reality in all settings where medicines are prescribed, dispensed, and used. High Alert Medications (HAM) are those that bear a heightened risk of causing significant patient harm when used in error. We conducted a knowledge-attitude-practice survey among residents working in a teaching hospital to assess the ground situation with regard to the handling of HAM. Methods: We plan to approach 242 residents, through purposive sampling, among the approximately 600 currently working in the hospital. Residents in all disciplines (clinical, paraclinical, and preclinical) are being targeted. A structured questionnaire, pretested on 5 volunteer residents, is being used for data collection. The questionnaire is administered to residents individually in face-to-face interviews by two raters while they are on duty, but not during rush hours. Results: Of the 156 residents approached so far, data from 140 have been analyzed, the rest having refused participation. Although background knowledge exists for the majority of respondents, awareness levels regarding HAM are moderate, and attitude is non-uniform. The proportion of respondents able to correctly identify most ( > 80%) HAM in three common settings (accident and emergency, obstetrics, and the intensive care unit) is less than 70%. Several potential errors in practice have been identified. The study is ongoing. Conclusions: The situation requires corrective action. There is an urgent need to improve awareness regarding HAM for the sake of patient safety. The pharmacology department can take the lead in designing an awareness campaign with support from the hospital administration.

Keywords: high alert medication, medication error, questionnaire, resident

Procedia PDF Downloads 120
1092 Learning Grammars for Detection of Disaster-Related Micro Events

Authors: Josef Steinberger, Vanni Zavarella, Hristo Tanev

Abstract:

Natural disasters cause tens of thousands of victims and massive material damage. We refer to all events caused by natural disasters, such as damage to people, infrastructure, vehicles, services, and resource supply, as micro events. This paper addresses the problem of micro-event detection in online media sources. We present a natural language grammar learning algorithm and apply it to online news. The algorithm is based on distributional clustering and the detection of word collocations. We also explore the extraction of micro events from social media and describe a Twitter mining robot, which uses combinations of keywords to detect tweets that talk about the effects of disasters.
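The keyword-combination matching used by the Twitter mining robot can be sketched as follows: a tweet is flagged when every keyword in at least one combination appears in its text. The keyword sets and tweets are invented for illustration and are not the authors' actual lexicon:

```python
# Hypothetical keyword combinations for disaster-effect detection.
COMBINATIONS = [
    {"earthquake", "collapsed"},
    {"flood", "evacuated"},
    {"storm", "power", "outage"},
]

def matches(tweet: str) -> bool:
    """Flag a tweet if all keywords of at least one combination occur in it."""
    words = set(tweet.lower().split())
    return any(combo <= words for combo in COMBINATIONS)

tweets = [
    "Bridge collapsed after the earthquake near the city centre",
    "Lovely sunny day at the beach",
    "Thousands evacuated as flood waters rise",
]
flagged = [t for t in tweets if matches(t)]
```

In practice the combinations would be learned or curated rather than hard-coded, and matching would tolerate inflections and hashtags, but the set-containment test captures the basic idea.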

Keywords: online news, natural language processing, machine learning, event extraction, crisis computing, disaster effects, Twitter

Procedia PDF Downloads 472
1091 Assessing the Legacy Effects of Wildfire on Eucalypt Canopy Structure of South Eastern Australia

Authors: Yogendra K. Karna, Lauren T. Bennett

Abstract:

Fire-tolerant eucalypt forests are one of the major forest ecosystems of south-eastern Australia and are thought to be highly resistant to frequent high-severity wildfires. However, the impact of wildfires of different severities on the canopy structure of this fire-tolerant forest type is under-studied, and there are significant knowledge gaps in the assessment of tree- and stand-level canopy structural dynamics and recovery after fire. Assessment of canopy structure is a complex task involving accurate measurements of the horizontal and vertical arrangement of the canopy in space and time. This study examined the utility of multi-temporal, small-footprint lidar data to describe changes in the horizontal and vertical canopy structure of fire-tolerant eucalypt forests seven years after wildfires of different severities, from the tree to the stand level. Extensive ground measurements were carried out in four severity classes to describe and validate canopy cover and height metrics as they change after wildfire. Several metrics, such as crown height and width, crown base height, and crown clumpiness, were assessed at the tree and stand levels using several individual tree-top detection and measurement algorithms. Persistent effects of high-severity fire on both tree crowns and the stand canopy were observed eight years after fire. High-severity fire increased crown depth but decreased crown projective cover, leading to a more open canopy.

Keywords: canopy gaps, canopy structure, crown architecture, crown projective cover, multi-temporal lidar, wildfire severity

Procedia PDF Downloads 164
1090 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulties in measuring it. The recovery of fECG from signals acquired non-invasively using electrodes placed on the maternal abdomen is a challenging task because abdominal signals are a mixture of several components, of which the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings that exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization, QIO). It aims to improve the performance of the most commonly adopted methods for fECG extraction, which are usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; mECG extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; and fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG from abdominal signals, estimated by principal component analysis (PCA), and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal recordings with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013.
The QIO-based and the ICA-based methods were compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so the comparison between the two methods can only be qualitative. The comparison on NIdb was therefore performed by defining an index of quality for the fetal RR series. On the annotated database ADdb, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, outperforming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The index of quality was higher for the QIO-based method than for the ICA-based one in 35 of the 55 NIdb records. The QIO-based method gave very high performance on both databases. These results support the application of the algorithm in a fully unsupervised way for implementation in wearable devices for self-monitoring of fetal health.
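A note on the reported indexes: F1 is the harmonic mean of sensitivity (Sens) and positive predictive accuracy (PPA), both derived from true positives, false positives, and missed beats. The sketch below uses hypothetical detection counts chosen only to land near the reported scale:

```python
def detection_scores(tp: int, fp: int, fn: int):
    """Sensitivity, positive predictive accuracy and F1 for QRS detection,
    from true positives (tp), false positives (fp) and misses (fn)."""
    sens = tp / (tp + fn)          # fraction of true beats detected
    ppa = tp / (tp + fp)           # fraction of detections that are true
    f1 = 2 * sens * ppa / (sens + ppa)  # harmonic mean of the two
    return sens, ppa, f1

# Hypothetical counts: ~10000 true beats, a handful of false alarms
# and misses, giving indexes on the order of those reported above.
sens, ppa, f1 = detection_scores(tp=9990, fp=9, fn=12)
```

F1 necessarily lies between Sens and PPA, closer to the smaller of the two, which is why all three reported values sit so near one another.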

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 270