Search results for: continuous wavelet analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29669

28799 Mapping the Core Processes and Identifying Actors along with Their Roles, Functions and Linkages in Trout Value Chain in Kashmir, India

Authors: Stanzin Gawa, Nalini Ranjan Kumar, Gohar Bilal Wani, Vinay Maruti Hatte, A. Vinay

Abstract:

Rainbow trout (Oncorhynchus mykiss) and Brown trout (Salmo trutta fario), the two trout species once introduced by the British into the waters of Kashmir, have adapted well to the favorable climatic conditions. Cold water fisheries are one of the emerging sectors in the Kashmir valley, and trout holds an important place in Jammu and Kashmir fisheries. Realizing the immense potential of trout culture in the Kashmir region, the state fisheries department began privatizing trout culture in 2009-10 under the centrally funded RKVY scheme, which provides an 80 percent subsidy for raceway construction and for the supply of feed and seed in the first year; at present there are 362 private trout farms. To cater to the growing demand for trout in the valley, it is important to understand the bottlenecks faced in the propagation of trout culture. Value chain analysis provides a generic framework to understand the various activities and processes, and mapping and studying linkages is the first step in any value chain analysis. In Kashmir, trout hatcheries were found to play a crucial role in ensuring a continuous supply of trout seed in the valley. Feed is the most limiting factor in trout culture, and farmers incur high costs in paying for feed and in transporting it from the feed mill to the farm. The lack of aqua clinics in the Kashmir valley also needs to be addressed. Brood stock maintenance, breeding and seed production, technical assistance to private farmers, and extension services have to be strengthened, and there is a need to develop a healthier environment for new entrepreneurs. It was found that trout farmers do not avail themselves of credit facilities, as there is no well-defined credit scheme for fisheries in the state. The study showed weak institutional linkages. Research and development should focus more on applied science rather than basic science.

Keywords: trout, Kashmir, value chain, linkages, culture

Procedia PDF Downloads 403
28798 Physiochemical Parameters Assessment and Evaluation of the Quality of Drinking Water in Some Parts of Lagos State

Authors: G. T. Mudashiru, Mayowa P. Ibitola

Abstract:

An investigation was carried out in the Ikorodu North local council development area of Lagos State using physicochemical parameters to study the quality of drinking water. Human functions and activities depend on the continuous availability of good drinking water. Six water samples were collected from six different boreholes at various outlets and homes in the Ikorodu North local council development area, Lagos State, Nigeria. Analysis was carried out to determine the purity of the water for domestic use. The physicochemical properties were evaluated using standard chemical methods. Parameters such as pH, turbidity, conductivity, total dissolved solids, color, chloride, sulphate, nitrate, and hardness were determined. Heavy metals such as Zn, Mg, Fe, Pb, Hg, and Mn, as well as total coliform counts, were also measured. The resulting values of each parameter were compared with World Health Organization (WHO) and Lagos State Water Regulatory Commission (LSWRC) standard values. The results reveal that all the water samples had pH values well below the WHO maximum permissible level for potable water. The other physicochemical parameters were within the safe limits of the WHO standard, indicating the potability of the water. It can be concluded that, although the water is potable, it should receive some form of treatment before consumption to prevent outbreaks of disease.

Keywords: drinking water, physiology, boreholes, heavy metals, domestic

Procedia PDF Downloads 221
28797 Rounded-off Measurements and Their Implication on Control Charts

Authors: Ran Etgar

Abstract:

The process of rounding off measurements in continuous variables is commonly encountered. Although it usually has minor effects, it can sometimes lead to poor outcomes in statistical process control using the X̄-chart. The traditional control limits can cause incorrect conclusions if applied carelessly. This study looks into the limitations of classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter (Ȳ) is presented, resulting in a more precise method to establish the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate and only requires the use of two straightforward tables.
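
As a brief, hedged illustration of the rounding effect discussed above (not the authors' method), the following Python sketch compares classical Shewhart X-bar limits with the behavior of subgroup means computed from rounded-off measurements; the process parameters and rounding step are assumed values.

```python
# A minimal sketch: how rounding off individual measurements can distort the
# false-alarm rate of classical X-bar chart limits.  All values are assumed.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 10.0, 0.05      # assumed true process mean and standard deviation
step = 0.1                  # assumed rounding resolution of the gauge
n = 5                       # subgroup size

# Classical Shewhart limits computed from the true (unrounded) process
ucl = mu + 3 * sigma / np.sqrt(n)
lcl = mu - 3 * sigma / np.sqrt(n)

# Simulate subgroup means of rounded-off measurements
samples = rng.normal(mu, sigma, size=(100_000, n))
rounded = np.round(samples / step) * step
rounded_means = rounded.mean(axis=1)

false_alarm = np.mean((rounded_means > ucl) | (rounded_means < lcl))
print(f"classical limits: [{lcl:.4f}, {ucl:.4f}]")
print(f"false-alarm rate with rounded data: {false_alarm:.4%} (nominal ~0.27%)")
```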

Keywords: inaccurate measurement, SPC, statistical process control, rounded-off, control chart

Procedia PDF Downloads 42
28796 Efficient Utilization of Biomass for Bioenergy in Environmental Control

Authors: Subir Kundu, Sukhendra Singh, Sumedha Ojha, Kanika Kundu

Abstract:

The continuous decline of petroleum and natural gas reserves and the nonlinear rise in oil prices have brought about a realisation of the need for a change in our perpetual dependence on fossil fuels. The day-to-day increase in consumption of crude and petroleum products has had a considerable impact on our foreign exchange reserves. Hence, an alternative resource for energy conversion (both liquid and gas) is essential for the substitution of conventional fuels. Biomass is the alternative solution in the present scenario. Biomass can be converted into both liquid and gaseous fuels as well as other feedstocks for industry.

Keywords: bioenergy, biomass conversion, biorefining, efficient utilisation of night soil

Procedia PDF Downloads 407
28795 Sequential Covering Algorithm for Nondifferentiable Global Optimization Problem and Applications

Authors: Mohamed Rahal, Djaouida Guetta

Abstract:

In this paper, the one-dimensional unconstrained global optimization problem for continuous functions satisfying a Hölder condition is considered. We extend the sequential covering algorithm (SCA) for Lipschitz functions to a large class of Hölder functions. The convergence of the method is studied, and the algorithm can be applied to systems of nonlinear equations. Finally, some numerical examples are presented to illustrate the efficiency of the present approach.
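
The following Python sketch illustrates the general idea of a covering scheme for a Hölder-continuous function; it is an illustrative branch-and-prune variant under assumed constants H and alpha, not the authors' exact SCA.

```python
# A minimal covering sketch for minimizing f on [a, b] when
# |f(x) - f(y)| <= H * |x - y|**alpha (Hoelder condition).  Illustrative only.
import heapq

def hoelder_cover_minimize(f, a, b, H, alpha, tol=1e-4, max_evals=2000):
    best_x, best_f = (a + b) / 2, f((a + b) / 2)
    # heap entries: (lower_bound_on_interval, left, right, value_at_centre)
    heap = [(best_f - H * ((b - a) / 2) ** alpha, a, b, best_f)]
    evals = 1
    while heap and evals < max_evals:
        lb, left, right, _ = heapq.heappop(heap)
        if lb > best_f - tol:        # interval cannot contain a better minimum
            continue
        mid = (left + right) / 2
        for lo, hi in ((left, mid), (mid, right)):
            c = (lo + hi) / 2
            fc = f(c); evals += 1
            if fc < best_f:
                best_x, best_f = c, fc
            lower = fc - H * ((hi - lo) / 2) ** alpha   # Hoelder lower bound
            if lower < best_f - tol:
                heapq.heappush(heap, (lower, lo, hi, fc))
    return best_x, best_f

# Example on a Hoelder-continuous test function (assumed H and alpha)
xmin, fmin = hoelder_cover_minimize(lambda x: abs(x - 0.3) ** 0.5 + 0.1 * x,
                                    0.0, 1.0, H=1.2, alpha=0.5)
```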

Keywords: global optimization, Hölder functions, sequential covering method, systems of nonlinear equations

Procedia PDF Downloads 372
28794 Magnetoelastically Induced Perpendicular Magnetic Anisotropy and Perpendicular Exchange Bias of CoO/CoPt Multilayer Films

Authors: Guo Lei, Wang Yue, Nakamura Yoshio, Shi Ji

Abstract:

Recently, perpendicular exchange bias (PEB) has become an active topic attracting continuous research effort. Since its discovery, extrinsic control of PEB has been pursued, owing to its scientific significance for spintronic devices and its potential application in high-density magnetic random access memory with perpendicular magnetic tunneling junctions (p-MTJ). To our knowledge, research aiming to control PEB has so far focused mainly on enhancing the interfacial exchange coupling by adjusting the FM/AFM interface roughness, or on optimizing the crystalline structure of the FM or AFM layer by employing different seed layers. In the present work, the effect of magnetoelastically induced PMA on PEB has been explored in [CoO5nm/CoPt5nm]5 multilayer films. We find that the PMA strength of the FM layer also plays an important role in the PEB at the FM/AFM interface, and that the PEB of [CoO5nm/CoPt5nm]5 multilayer films can be effectively controlled by changing the magnetoelastically induced PMA of the CoPt layer. [CoO5nm/CoPt5nm]5 multilayer films were deposited by magnetron sputtering on fused quartz substrates at room temperature and then annealed at 100 °C, 250 °C, 300 °C and 375 °C for 3 h, respectively. XRD results reveal that all the samples are well crystallized with a preferred fcc CoPt (111) orientation. The continuous multilayer structure, with sharp compositional transitions at the CoO5nm/CoPt5nm interfaces, was clearly identified by transmission electron microscopy (TEM), x-ray reflectivity (XRR) and atomic force microscopy (AFM). The in-plane tensile stress of the CoPt layer was calculated by the sin²φ method, and we find that it increases gradually upon annealing, from 0.99 GPa (as-deposited) up to 3.02 GPa (300 °C-annealed). As to the magnetic properties, a significant enhancement of PMA is achieved in the [CoO5nm/CoPt5nm]5 multilayer films after annealing, due to the increase of the CoPt layer in-plane tensile stress. With the enhancement of the magnetoelastically induced PMA, a great improvement of PEB is also achieved, increasing from 130 Oe (as-deposited) up to 1060 Oe (300 °C-annealed), showing the same trend as the PMA and a strong correlation with the CoPt layer in-plane tensile stress. We consider that it is the increase of the CoPt layer in-plane tensile stress that leads to the enhancement of PMA, and thus the enhancement of the magnetoelastically induced PMA results in the improvement of PEB in [CoO5nm/CoPt5nm]5 multilayer films.

Keywords: perpendicular exchange bias, magnetoelastically induced perpendicular magnetic anisotropy, [CoO5nm/CoPt5nm]5 multilayer film with in-plane stress, perpendicular magnetic tunneling junction

Procedia PDF Downloads 462
28793 The Jurisprudential Evolution of Corruption Offenses in Spain: Before and after the Economic Crisis

Authors: Marta Fernandez Cabrera

Abstract:

The period of economic boom generated by the housing bubble created a climate of social indifference to the problem of corruption. As a result, prosecution and conviction rates for these criminal offenses were low. After the economic recession, social awareness of the problem of corruption has increased. This has led Spanish citizens to demand that the public authorities try to end the problem in the most effective way possible. In order to respond to the continuous social demands for exemplary punishment, the legislator has made changes to the crimes against the public administration in the Spanish Criminal Code. However, from the point of view of criminal law, this social change has modified not only the law but also the jurisprudence. After the recession, judges have been punishing these conducts more severely than in the past. Before the crisis, it was usual for criminal judges to divert relevant behavior to other areas of the legal system, such as administrative law, and to acquit in the criminal field. Criminal judges considered that administrative law already has mechanisms that can effectively deal with this type of behavior, in line with the principle of subsidiarity or ultima ratio. It was also usual for criminal judges to acquit civil servants on the basis of requirements unrelated to the applicable offense; for example, they required economic damage to the public administration even when the offense in the criminal code does not require it. Nevertheless, for some years, these arguments have either partially disappeared or been considerably transformed. Since 2010, a jurisprudential current has been consolidated that aims to provide a more severe response to corruption than it had received until then. This change of opinion, together with greater prosecution of these behaviors by judges and prosecutors, has led to a significant increase in the number of individuals convicted of corruption crimes. This paper has two objectives. The first is to show that, even though judges apply the law impartially, they are responsive to social changes. The second is to identify the erroneous arguments the courts have used up until now. For the present paper, a detailed analysis of Supreme Court judgments before and after 2010 was carried out. The jurisprudential analysis is complemented with the available statistical data on corruption.

Keywords: corruption, public administration, social perception, ultima ratio principle

Procedia PDF Downloads 147
28792 An Experimental Study of the External Thermal Insulation System’s (ETICS) Efficiency in Buildings during Spring Conditions

Authors: Carmen Viñas Arrebola, Antonio Rodriguez Sanchez, Sheila Varela Lujan, Mariano Gonzalez Cortina, Cesar Porras Amores

Abstract:

The research group TEMA from the School of Building (UPM) works on energy efficiency and comfort in buildings. The need to reduce energy consumption in building construction implies designing new constructive systems. These systems help to reduce both consumption and energy losses in order to achieve adequate thermal comfort for people in any type of building. In existing buildings, the best option is rehabilitation focused on thermal insulation. The aim of this paper is to design, monitor and analyze the first results on the thermal behavior of the ETICS system in façades. This retrofitting solution consists of adding thermal insulation on the outside of the building, helping to create a continuous envelope on the façades. The analysis is done by comparing a part of the building rehabilitated with the ETICS system and a part which has not been rehabilitated and is taken as the reference; both have the same characteristics. Temperature measurements were taken with type K thermocouples according to the previously designed monitoring plan and over the same period of time. The pilot building of the study is situated on Benimamet Street, in San Cristobal de Los Ángeles, in the south of Madrid, and was built in the late 1950s. The 51st entrance hall, which was rehabilitated, and the 47th entrance hall, in its original condition, were studied.

Keywords: comfort in building, energy efficiency in building, ETICS, thermal properties

Procedia PDF Downloads 316
28791 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor

Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez

Abstract:

Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, the three-dimensional computational fluid dynamics (CFD) of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process is analyzed numerically using Ansys Fluent. For this, a rheological study of the substrate is carried out, establishing stirrer rotation speeds according to the microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water, butcher, fishmonger, and dairy waste. The rheological behavior curves show that the substrate is a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. The simulation takes the rheological results into account and models the full-scale CSTR biodigester, coupling the second-order continuity equation, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models: k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° pitched-blade impeller. The simulation covers three minutes, since an intermittent mixing strategy with energy-saving benefits is of interest. The results show that the absolute errors of the power number associated with the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from the analytical-experimental equation of Nagata. The generalized Reynolds number indicates that the fluid dynamics are in a transition-turbulent flow regime. The Froude number indicates that there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5. The design velocities within the biodigester are approximately 0.1 m/s, which are suitable for the microbial community, allowing it to coexist and feed on the substrate during co-digestion. It is concluded that the model that most accurately predicts the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with the referenced literature, where the 45° pitched-blade turbine (PBT) impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If continuous 24/7 mixing under stirred agitation is considered, with a plant factor of 80%, 51,840 kWh/year are estimated; in contrast, intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. The model is thus a feasible tool for predicting the energy expenditure of an anaerobic CSTR biodigester. It is recommended to use high mixing intensities at the beginning and end of the joint acetogenesis/methanogenesis phase: high-intensity mixing at the beginning activates the bacteria, and a further increase in agitation near the end of the hydraulic retention time favors the final dispersion of the biogas that may be trapped at the bottom of the biodigester.
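
The dimensionless numbers and the energy comparison mentioned above can be sketched with a short calculation; all numerical values in this Python snippet are assumed/illustrative rather than taken from the paper.

```python
# A back-of-the-envelope sketch for a pseudoplastic (power-law) substrate
# in a stirred tank.  Every number below is an assumption for illustration.
import math

rho = 1000.0      # substrate density, kg/m3 (assumed)
K, n = 2.5, 0.45  # power-law consistency (Pa.s^n) and flow index (assumed)
N = 1.0           # impeller speed, rev/s (assumed)
D = 1.2           # impeller diameter, m (assumed)
ks = 11.0         # Metzner-Otto constant for a pitched-blade turbine (assumed)

# Apparent viscosity from the Metzner-Otto average shear rate, then a
# generalized Reynolds number and the Froude number
gamma_dot = ks * N
mu_app = K * gamma_dot ** (n - 1)
Re = rho * N * D ** 2 / mu_app
Fr = N ** 2 * D / 9.81
print(f"generalized Re = {Re:.0f}, Fr = {Fr:.3f}")

# Energy comparison: continuous 24/7 mixing vs. 3 min of mixing every 15 min
P_shaft_kW = 7.4                       # assumed shaft power, kW
plant_factor = 0.8
E_continuous = P_shaft_kW * 8760 * plant_factor
E_intermittent = E_continuous * (3 / 15)    # 20% duty cycle
print(f"continuous:   {E_continuous:,.0f} kWh/year")
print(f"intermittent: {E_intermittent:,.0f} kWh/year ({1 - 3/15:.0%} saving)")
```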

Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste

Procedia PDF Downloads 115
28790 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing

Authors: Arjun Kumar Rath, Titus Dhanasingh

Abstract:

Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing at larger volumes. As components are integrated, devices are tested for their full functionality using advanced software tools. Benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, these tests are custom built for every product and remain unusable for other variants. A majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each with several unique strengths, but no single tool or framework satisfies all the testing needs of embedded systems; hence an extensible framework integrating a multitude of tools is required. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability, where platform-specific libraries cannot be reused; the need to maintain source infrastructure for individual hardware platforms; and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test environment setup, scalability, and maintenance. A desirable strategy is one focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, across phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed, which can be deployed in embedded-system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing of different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) time-to-market: it accelerates board bring-up with prepackaged test suites supporting all necessary peripherals, which speeds up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from platform-specific hardware initialization and configuration make the adaptation of test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; and (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature. Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.

Keywords: board diagnostics software, embedded system, hardware testing, test frameworks

Procedia PDF Downloads 147
28789 The Quantitative Analysis of the Influence of the Superficial Abrasion on the Lifetime of the Frog Rail

Authors: Dong Jiang

Abstract:

Turnouts are essential railway equipment and are among the most heavily demanded railway infrastructure components, largely on account of the frequency of serious frog rail failures. In cooperation with the German company DB Systemtechnik AG, our research team focuses on the quantitative analysis of frog rails in order to predict their lifetimes. Moreover, suggestions for timely and effective maintenance are made to improve the economy of the frog rails. The lifetime of the frog rail depends strongly on the internal damage of the running surface up to the point where breakages occur. On the basis of the Hertzian theory of contact mechanics, the dynamic loads on the running surface are calculated in the form of the contact pressures on the running surface and the equivalent tensile stress inside the running surface. According to material mechanics, the strength of the frog rail is determined quantitatively in the form of a stress-cycle (S-N) curve. From the interaction between the dynamic loads and the strength, the internal damage of the running surface is calculated by means of the linear damage hypothesis of Miner's rule. The emergence of the first breakage on the running surface is defined as the failure criterion, corresponding to a damage degree of 1.0. From the microscopic perspective, the running surface of the frog rail is divided into numerous segments for detailed analysis. The internal damage of a segment grows slowly in the beginning and disproportionately quickly towards the end, until the breakage emerges. From the macroscopic perspective, the internal damage of the running surface develops essentially linearly over the lifetime. With this linear growth of the internal damage, the lifetime of the frog rail can be predicted simply from the slope of the linear trend. However, the superficial abrasion plays an essential role in the internal damage results from both perspectives. The influence of the superficial abrasion on the lifetime is described in the form of the abrasion rate, which has two opposing effects. On the one hand, an insufficient abrasion rate concentrates the damage accumulation at the same position below the running surface and accelerates rail failure. On the other hand, an excessive abrasion rate causes the premature loss of the head-hardened surface of the frog rail, resulting in untimely breakage at the surface. Thus, as the abrasion rate grows, the relationship between abrasion rate and lifetime divides into an initial phase of increasing lifetime and a subsequent phase of more rapidly decreasing lifetime. By balancing these two effects, the critical abrasion rate that yields the optimal lifetime is discussed.
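
A minimal sketch of the linear damage accumulation step (Miner's rule against a Basquin-type S-N curve) is given below; the S-N constants and the load collective are assumed for illustration only, not values from the study.

```python
# Miner's rule sketch: damage of a running-surface segment grows towards the
# failure criterion D = 1.0.  Constants and load collective are assumed.
# Basquin S-N curve: N(S) = C * S**(-m), cycles to failure at stress amplitude S.
C, m = 2.0e21, 5.0                       # assumed material constants

def cycles_to_failure(stress_mpa):
    return C * stress_mpa ** (-m)

# Assumed annual load collective: (equivalent tensile stress [MPa], cycles/year)
collective = [(650.0, 2.0e5), (500.0, 1.5e6), (350.0, 6.0e6)]

damage_per_year = sum(ni / cycles_to_failure(si) for si, ni in collective)
lifetime_years = 1.0 / damage_per_year   # damage grows linearly, failure at D = 1.0
print(f"damage per year = {damage_per_year:.3f}")
print(f"predicted lifetime = {lifetime_years:.1f} years")
```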

Keywords: breakage, critical abrasion rate, frog rail, internal damage, optimal lifetime

Procedia PDF Downloads 227
28788 Thermo-Mechanical Treatments of Cu-Ti Alloys

Authors: M. M. Morgham, A. A. Hameda, N. A. Zriba, H. A. Jawan

Abstract:

This paper studies the effect of cold work conditions on the microstructure of Cu-1.5wt%Ti and Cu-3.5wt%Ti alloys and hence on their mechanical properties. The samples under investigation were machined and solution heat treated. X-ray diffraction is used to identify the different phases present after cold deformation by compression and after the different heat treatments, and also to measure the relative quantities of the phases present. Metallographic examination is used to study the microstructure of the samples. Hardness measurements are used to indicate the change in mechanical properties. The results are compared with the mechanical properties obtained by previous workers. Experiments on cold compression followed by aging of Cu-Ti alloys indicate that the most effective hardening of the material results from continuous precipitation of very fine particles within the matrix. These particles are reported to be the β`-type Cu4Ti phase. The β`-β transformation and particle coarsening within the matrix, as well as along grain boundaries, are responsible for the overaging of the Cu-1.5wt%Ti and Cu-3.5wt%Ti alloys. It is well known that the plate-like particles are the β-type Cu3Ti phase. Discontinuous precipitation was found to start at the grain boundaries and expand into the grain interior. At the higher aging temperature, a classic Widmanstätten morphology forms, giving rise to a coarse microstructure comprised of α and the equilibrium phase β. These results were confirmed by X-ray analysis, which found that a few percent of Cu3Ti (β) precipitates are formed during aging at high temperature for long times in both Cu-Ti alloys (i.e., Cu-1.5wt%Ti and Cu-3.5wt%Ti).

Keywords: metallographic, hardness, precipitation, aging

Procedia PDF Downloads 406
28787 Field Environment Sensing and Modeling for Pears towards Precision Agriculture

Authors: Tatsuya Yamazaki, Kazuya Miyakawa, Tomohiko Sugiyama, Toshitaka Iwatani

Abstract:

The introduction of sensor technologies into agriculture is a necessary step towards realizing Precision Agriculture. Although sensing methodologies have become widespread owing to the miniaturization and falling cost of sensors, analyzing and understanding the sensing data remains difficult. Targeting the pear cultivar 'Le Lectier', which is particular to Niigata in Japan, cultivation environment data have been collected at pear fields by eight types of sensors: field temperature, field humidity, rain gauge, soil water potential, soil temperature, soil moisture, inner-bag temperature, and inner-bag humidity sensors. The inner-bag temperature and humidity sensors measure the environment inside the fruit bag used for pre-harvest bagging of pears. In this experiment, three kinds of fruit bags were used for the pre-harvest bagging. After over 100 days of continuous measurement, large volumes of sensing data were collected. Firstly, correlation analysis among the data measured by the respective sensors reveals that one sensor can replace another, so that more efficient and cost-saving sensing systems can be proposed to pear farmers. Secondly, differences in the characteristics and performance of the three kinds of fruit bags are clarified by the inner-bag environmental sensing measurements; statistical analysis shows that their characteristics and performance differ significantly from one another. Lastly, a relational model between the sensing data and the pear appearance quality is established by use of a Structural Equation Model (SEM). Here, the pear appearance quality is related to the presence of stains, blobs, scratches, and other defects caused by physiological impairment or disease. Conceptually, SEM is a combination of exploratory factor analysis and multiple regression. By using SEM, a model is constructed that connects independent and dependent variables. The proposed SEM model relates the measured sensing data to the pear appearance quality determined on the basis of farmer judgement. In particular, the inner-bag humidity variable is found to have a relatively strong effect on the pear appearance quality. Therefore, inner-bag humidity sensing might help farmers to control pear appearance quality. These results are supported by a large quantity of inner-bag humidity data measured over the years 2014, 2015, and 2016. The experimental and analytical results of this research contribute to spreading Precision Agriculture technologies among the farmers growing 'Le Lectier'.
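
The first analysis step (the correlation analysis across sensor channels) can be sketched as follows; the file name and column names in this snippet are hypothetical and stand in for the actual data set.

```python
# A minimal sketch of the correlation analysis used to spot sensors that could
# replace one another in a cheaper sensing configuration.  File and column
# names are hypothetical placeholders.
import pandas as pd

# df is assumed to hold the time-aligned sensor readings, one column per sensor
df = pd.read_csv("pear_field_sensing.csv", parse_dates=["timestamp"],
                 index_col="timestamp")

corr = df[["field_temp", "field_humidity", "rain", "soil_water_potential",
           "soil_temp", "soil_moisture",
           "inner_bag_temp", "inner_bag_humidity"]].corr()

# Pairs with |r| above a threshold are candidates for sensor substitution
redundant = (corr.abs().where(lambda c: c < 1.0)   # drop the diagonal
                 .stack().loc[lambda s: s > 0.9])
print(redundant.sort_values(ascending=False))
```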

Keywords: precision agriculture, pre-harvest bagging, sensor fusion, structural equation model

Procedia PDF Downloads 314
28786 System DietAdhoc® - A Fusion of Human-Centred Design and Agile Development for the Explainability of AI Techniques Based on Nutritional and Clinical Data

Authors: Michelangelo Sofo, Giuseppe Labianca

Abstract:

In recent years, the scientific community's interest in the exploratory analysis of biomedical data has increased exponentially. In the field of human nutrition research, the curative process based on the analysis of clinical data is a very delicate operation, since there are multiple solutions for the management of food-related pathologies (for example, intolerances and allergies, management of cholesterol metabolism, diabetic pathologies, arterial hypertension, and even obesity and breathing and sleep problems). In this research work, a system was therefore created that is capable of evaluating various dietary regimes for specific patient pathologies. The system is founded on a mathematical-numerical model and has been tailored to the real working needs of an expert in human nutrition using human-centered design (ISO 9241-210); it therefore keeps in step with continuous scientific progress in the field and evolves through the experience of managed clinical cases (a machine learning process). DietAdhoc® is a decision support system for nutrition specialists treating patients of both sexes (from 18 years of age), developed with an agile methodology. Its task is to draw up the biomedical and clinical profile of the specific patient by applying two algorithmic optimization approaches to the nutritional data, together with a symbolic solution obtained by transforming the relational database underlying the system into a deductive database. For all three solution approaches, particular emphasis has been given to the explainability of the suggested clinical decisions through flexible and customizable user interfaces. Furthermore, the system has multiple software modules based on time series and visual analytics techniques that allow the complete picture of the situation, and the evolution of the diet assigned for specific pathologies, to be evaluated.

Keywords: medical decision support, physiological data extraction, data driven diagnosis, human centered AI, symbiotic AI paradigm

Procedia PDF Downloads 26
28785 Evaluation of Food Services by the Personnel in Hospitals of Athens, Greece

Authors: I. Mentziou, C. Delezos, D. Krikidis, A. Nestoridou, G. Boskou

Abstract:

Introduction: The systems of production and distribution of meals can have a significant impact on the food intake of hospital patients, who are likely to develop malnutrition. In hospitals, the consequences of foodborne infections can range from annoying to life-threatening for a patient, since they can lead to death in vulnerable groups. Aim: The aim of the present study was to evaluate the implementation of food safety management systems, as well as to evaluate the total quality management systems in Greek hospitals in general. Methods: This is a multifocal study on the implementation and evaluation of food safety management systems in Greek hospitals of the Attica region. Eleven hospitals from the city of Athens were chosen for this purpose. The sample was drawn from the high-ranking personnel of the nutrition department (dietician, head chef, food technologist, public health inspector). Tailor-made questionnaires on hygiene regulations were used as tools for the interviews. Results: Overall, 30 employees in the field of hospital nutrition participated. Most of the replies implied that the hygiene regulations are almost always implemented. Nevertheless, only 30% stated that there is a Hazard Analysis and Critical Control Points (HACCP) system in the hospital. In a small number of questionnaires, the staff proposed changes. Conclusion: Measuring the opinion of the personnel about the food services provided within a hospital can lead to further continuous improvement of hospital nutrition.

Keywords: evaluation, food service, HACCP, hospital, personnel

Procedia PDF Downloads 374
28784 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food: it was found that food heated instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material's ability to absorb electromagnetic energy and dissipate this energy in the form of heat. Many energy-intensive industries could benefit from electromagnetic heating, since many of their raw materials are dielectric at high temperatures. Limestone, a sedimentary rock, is a dielectric material used intensively in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study the continuous processing of limestone under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell's equations, as well as the coupling between the heat transfer and chemical interfaces. In addition, a controller was implemented to optimize the overall heating efficiency and to control the stability of the numerical model. This was done by continuously matching the cavity impedance and predicting the energy required by the system, avoiding energy inefficiencies. The controller was developed in MATLAB and successfully fulfilled these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to describe the chosen kinetic equation correctly, and the coupled model successfully solved the equations describing the numerical model. The interaction between the flow of material and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone; the numerical model considered this effect and took advantage of this interaction. The model proved to be highly unstable when solving non-linear temperature distributions. Limestone has a dielectric loss response that increases with temperature and has low thermal conductivity; for this reason, it is prone to thermal runaway under electromagnetic heating, as well as to numerical model instabilities. Five scenarios were tested, considering material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating the tube rotation to enhance mixing proved to be beneficial and crucial for all the loads considered. When a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% for the 80% fill ratio, suggesting that this is an optimal fill of material for the tube. Electric field peak detachment was observed for the case with a 100% fill ratio, explaining the lower efficiency compared to 80%. Microwave technology is thus demonstrated to be an important ally for the decarbonization of the cement industry.
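
The kind of kinetic sub-model that is verified separately from the coupled electromagnetic-thermal model can be sketched as a first-order Arrhenius decomposition; the constants and the temperature ramp below are assumed values, not the ones used in the paper.

```python
# First-order Arrhenius sketch of CaCO3 -> CaO + CO2 under a prescribed
# temperature history.  All kinetic constants and the ramp are assumed.
import numpy as np
from scipy.integrate import solve_ivp

A, Ea, R = 2.0e7, 1.9e5, 8.314     # pre-exponential (1/s), activation energy (J/mol)

def temperature(t):                # assumed linear ramp from 300 K to 1200 K over 30 min
    return 300.0 + (1200.0 - 300.0) * min(t / 1800.0, 1.0)

def dalpha_dt(t, alpha):           # alpha = conversion of CaCO3
    k = A * np.exp(-Ea / (R * temperature(t)))
    return k * (1.0 - alpha)

sol = solve_ivp(dalpha_dt, (0.0, 3600.0), [0.0], max_step=5.0)
print(f"conversion after 1 h: {sol.y[0, -1]:.2%}")
```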

Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing

Procedia PDF Downloads 175
28783 Particle Size Distribution Estimation of a Mixture of Regular and Irregular Sized Particles Using Acoustic Emissions

Authors: Ejay Nsugbe, Andrew Starr, Ian Jennions, Cristobal Ruiz-Carcel

Abstract:

This work investigates the possibility of using Acoustic Emissions (AE) to estimate the Particle Size Distribution (PSD) of a mixture comprising particles of different densities and geometries. The experiments involved a mixture of glass and polyethylene particles, ranging from 150-212 microns and 150-250 microns respectively, and an experimental rig that allowed the free fall of a continuous stream of particles onto a target plate on which the AE sensor was placed. By using a time-domain-based multiple threshold method, it was observed that the PSD of the particles in the mixture could be estimated.
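
A hedged sketch of a time-domain multiple-threshold analysis is shown below (not the authors' exact implementation): impact events are counted against a set of amplitude thresholds, yielding a cumulative amplitude distribution that can be related to the PSD.

```python
# Multiple-threshold event counting on an AE waveform.  Parameters and the
# synthetic signal are placeholders for the recorded data.
import numpy as np
from scipy.signal import find_peaks

def multiple_threshold_counts(ae_signal, thresholds, min_separation=50):
    """Return the number of AE events exceeding each amplitude threshold."""
    env = np.abs(ae_signal)                       # simple amplitude envelope
    peaks, _ = find_peaks(env, distance=min_separation)
    peak_amps = env[peaks]
    return np.array([(peak_amps >= t).sum() for t in thresholds])

# Example with a synthetic signal (replace with the recorded AE waveform)
rng = np.random.default_rng(1)
signal = rng.normal(0, 0.01, 200_000)
signal[rng.integers(0, signal.size, 300)] += rng.uniform(0.05, 1.0, 300)

thresholds = np.linspace(0.05, 1.0, 20)
counts = multiple_threshold_counts(signal, thresholds)
print(dict(zip(np.round(thresholds, 2), counts)))
```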

Keywords: acoustic emissions, particle sizing, process monitoring, signal processing

Procedia PDF Downloads 353
28782 Fishing Waste: A Source of Valuable Products through Anaerobic Treatments

Authors: Luisa Maria Arrechea Fajardo, Luz Stella Cadavid Rodriguez

Abstract:

Fish is one of the most commercialized foods worldwide. However, this industry only takes advantage of about 55% of the product's weight; the rest becomes waste, mainly composed of viscera, gills, scales and spines. Consequently, if these wastes are not used or disposed of properly, they cause serious environmental impacts. This is the case in Tumaco (Colombia), the second largest producer of marine fisheries on the Colombian Pacific coast, where artisanal fishermen process more than 50% of the commercialized volume. There, fishing waste is disposed of primarily in the ocean, causing negative impacts on the environment and society. Therefore, in the present research, a proposal was made to take advantage of fishing waste through anaerobic treatments, through which it is possible to obtain high-added-value products from organic waste. The research was carried out in four stages. First, the production of volatile fatty acids (VFA) in semi-continuous 4 L reactors was studied, evaluating three hydraulic retention times (HRT) (10, 7 and 5 days) with four organic loading rates (OLR) (16, 14, 12 and 10 gVS/L/day); this experiment ran for 150 days. Subsequently, biogas production from the solid digestate generated in the VFA production reactors was evaluated: initially, the biochemical methane potential (BMP) of four total solids concentrations (1, 2, 4 and 6% TS) was evaluated for 40 days, and then, with the optimum concentration (2 gVS/L/day), two HRT (15 and 20 days) were evaluated in semi-continuous reactors for 100 days. Finally, the integration of the processes was carried out with the best conditions found: a first phase of VFA production from fishing waste and a second phase of biogas production from unrecovered VFAs and unprocessed material. Additionally, a VFA membrane extraction system was included. In the first phase, a liquid digestate with a VFA concentration of 59.04 gVFA/L and a production yield of 0.527 gVFA/gVS was obtained under the best condition found (HRT of 7 days and OLR of 16 gVS/L/day), with acetic and isobutyric acids as the predominant acids. In the second phase of biogas production, a BMP of 0.349 Nm3CH4/kgVS was reached, and the best HRT was found to be 20 days. In the integration, isovaleric, butyric and isobutyric acids were the VFAs with the highest extraction percentages, and a 106.67% increase in biogas production was achieved. This research shows that anaerobic treatments are a promising technology for the environmentally safe management of fishing waste and lays the basis for a possible biorefinery.

Keywords: biogas production, fishing waste, VFA membrane extraction, VFA production

Procedia PDF Downloads 117
28781 Enhancement of Higher Order Thinking Skills among Teacher Trainers by Fun Game Learning Approach

Authors: Malathi Balakrishnan, Gananathan M. Nadarajah, Saraswathy Vellasamy, Evelyn Gnanam William George

Abstract:

The purpose of this study is to explore how a fun game-learning approach enhances teacher trainers' higher order thinking skills. A two-day, fun-filled game-learning approach was introduced to teacher trainers as a Continuous Professional Development (CPD) program. Twenty-six teacher trainers participated in this Transformation of Teaching and Learning the Fun Way program, organized by the Institute of Teacher Education Malaysia. A qualitative research technique was adopted, with the researchers observing the participants' higher order thinking skills developing during the program. Data were collected from an observational checklist, interview transcriptions of four participants, and participants' reflection notes. All the data were later analyzed with the NVivo data analysis process. The findings of this study present five main themes: critical thinking, hands-on activities, creating, application, and use of technology. The study showed that the teacher trainers' higher order thinking skills were enhanced after the two-day CPD program. Therefore, the Institute of Teacher Education can make greater use of the fun game-learning approach to develop higher order thinking skills among its teacher trainers, who can in turn pass these skills on to their trainee teachers in the future. This study also adds knowledge to constructivist learning theory, further highlighting the prominence of the fun game-learning approach in enhancing higher order thinking skills.

Keywords: constructivism, game-learning approach, higher order thinking skill, teacher trainer

Procedia PDF Downloads 295
28780 Identification of Nonlinear Systems Structured by Hammerstein-Wiener Model

Authors: A. Brouri, F. Giri, A. Mkhida, A. Elkarkri, M. L. Chhibat

Abstract:

Standard Hammerstein-Wiener models consist of a linear subsystem sandwiched between two memoryless nonlinearities. Presently, the linear subsystem is allowed to be parametric or not, and continuous- or discrete-time. The input and output nonlinearities are polynomial and may be noninvertible. A two-stage identification method is developed such that the parameters of all nonlinear elements are estimated first, using the Kozen-Landau polynomial decomposition algorithm. The obtained estimates are then used in the identification of the linear subsystem, making use of suitable pre- and post-compensators.
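
For illustration, the following Python sketch simulates the Hammerstein-Wiener structure being identified (polynomial input nonlinearity, linear subsystem, polynomial output nonlinearity); the coefficients and transfer function are assumed example values, not the paper's system.

```python
# Hammerstein-Wiener simulation sketch: y = g( G(q) * f(u) ).  All coefficients
# below are illustrative assumptions.
import numpy as np
from scipy.signal import lfilter

def hammerstein_wiener(u, f_coeffs, b, a, g_coeffs):
    """Simulate a Hammerstein-Wiener structure for an input sequence u."""
    v = np.polyval(f_coeffs, u)       # input nonlinearity f(.)
    w = lfilter(b, a, v)              # linear subsystem G(q) = B(q)/A(q)
    return np.polyval(g_coeffs, w)    # output nonlinearity g(.)

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 1000)
y = hammerstein_wiener(u,
                       f_coeffs=[0.5, 1.0, 0.0],      # f(u) = 0.5 u^2 + u
                       b=[0.2, 0.1], a=[1.0, -0.6],   # first-order linear block
                       g_coeffs=[0.3, 1.0, 0.1])      # g(w) = 0.3 w^2 + w + 0.1
# y (plus measurement noise) would be the data fed to the two-stage identification
```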

Keywords: nonlinear system identification, Hammerstein-Wiener systems, frequency identification, polynomial decomposition

Procedia PDF Downloads 512
28779 Derivation of Fragility Functions of Marine Drilling Risers Under Ocean Environment

Authors: Pranjal Srivastava, Piyali Sengupta

Abstract:

The performance of marine drilling risers is crucial in the offshore oil and gas industry to ensure safe drilling operations with minimum downtime. Experimental investigations of marine drilling risers are limited in the literature owing to the expensive and exhaustive test setup required to replicate a realistic riser model and ocean environment in the laboratory. Therefore, this study presents an analytical model of a marine drilling riser for determining its fragility under ocean environmental loading. The marine drilling riser is idealized as a continuous beam with a concentric circular cross-section. The hydrodynamic loading acting on the riser is determined by Morison's equation. By considering the equilibrium of forces on the riser for the connected and normal drilling conditions, the governing partial differential equations in terms of the independent variables z (depth) and t (time) are derived. Subsequently, the Runge-Kutta method and the finite difference method are employed to solve the partial differential equations arising from the analytical model. The proposed analytical approach is successfully validated against experimental results from the literature. From the dynamic analysis results of the proposed approach, the critical design parameters (peak displacements, upper and lower flex joint rotations, and von Mises stresses) of marine drilling risers are determined. An extensive parametric study is conducted to explore the effects of top tension, drilling depth, ocean current speed, and platform drift on these critical design parameters. Thereafter, incremental dynamic analysis is performed to derive the fragility functions of shallow-water and deep-water marine drilling risers under ocean environmental loading. The proposed methodology can also be adopted for downtime estimation of marine drilling risers, incorporating the ranges of uncertainties associated with the ocean environment, especially in deep and ultra-deep water.
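
The hydrodynamic load model named above can be sketched with the Morison equation for the in-line force per unit length on the riser; the coefficients and flow values in this snippet are assumed for illustration.

```python
# Morison equation sketch: distributed in-line force on a cylinder combining
# drag and inertia terms.  Coefficients and flow values are assumed.
import numpy as np

def morison_force(u, du_dt, D, rho=1025.0, Cd=1.0, Cm=2.0):
    """In-line force per unit length [N/m] on a cylinder of diameter D [m]."""
    drag = 0.5 * rho * Cd * D * u * np.abs(u)
    inertia = rho * Cm * (np.pi * D ** 2 / 4.0) * du_dt
    return drag + inertia

# Example: a 0.533 m (21 in) riser in a 1.2 m/s current with wave acceleration
f = morison_force(u=1.2, du_dt=0.3, D=0.533)
print(f"distributed load: {f:.1f} N/m")
```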

Keywords: drilling riser, marine, analytical model, fragility

Procedia PDF Downloads 149
28778 Evaluation of Medication Errors in Outpatient Pharmacies: Electronic Prescription System vs. Paper System

Authors: Mera Ababneh, Sayer Al-Azzam, Karem Alzoubi, Abeer Rababa'h

Abstract:

Background: Medication errors are among the most common medical errors. Their occurrence results in patient mortality, morbidity, and additional healthcare costs. Continuous monitoring and detection are required. Objectives: The aim of this study was to compare medication errors in outpatient prescriptions in two different hospitals (paper system vs. electronic system). Methods: This was a cross-sectional observational study conducted in two major hospitals, King Abdullah University Hospital (KAUH) and Princess Bassma Teaching Hospital (PBTH), over a three-month period. During the study period, medication prescriptions and dispensing procedures were screened for medication errors in both participating centers by two trained pharmacists at each site. Results: In the hospital with electronic prescriptions, 2500 prescriptions were screened, in which 631 medication errors were detected; prescription errors accounted for 231 (36.6%) and dispensing errors for 400 (63.4%) of all errors. In contrast, analysis of 2500 prescriptions in the paper-based hospital revealed 3714 medication errors, of which 288 (7.8%) were prescription errors and 3426 (92.2%) were dispensing errors. A significant number, 2496 (67.2%), were inadequately and/or inappropriately labeled. Conclusion: This study provides insight for healthcare policy makers, professionals, and administrators to invest in advanced technology systems, education, and epidemiological surveillance programs to minimize medication errors.

Keywords: medication errors, prescription errors, dispensing errors, electronic prescription, handwritten prescription

Procedia PDF Downloads 282
28777 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an artificial neural network classifier, trained by applying the multilayer perceptron algorithm, and on a software application called Training Builder, which has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on the data of a single patient retrieved from a publicly available EEG dataset.
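
A minimal sketch of the classification stage (not the Training Builder tool itself) is given below: a multilayer perceptron trained on per-window feature vectors extracted from EEG segments, here reduced to two toy descriptors on synthetic data with placeholder labels.

```python
# Sliding-window feature extraction plus an MLP classifier.  Window length,
# sampling rate, features, and labels are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def window_features(eeg, fs=256, win_s=2.0):
    """Slide a non-overlapping window over one EEG channel and compute features."""
    win = int(fs * win_s)
    segments = eeg[: len(eeg) // win * win].reshape(-1, win)
    rms = np.sqrt((segments ** 2).mean(axis=1))
    line_length = np.abs(np.diff(segments, axis=1)).sum(axis=1)
    return np.column_stack([rms, line_length])

rng = np.random.default_rng(0)
X = window_features(rng.normal(size=256 * 600))   # 10 min of synthetic EEG
y = rng.integers(0, 2, size=len(X))               # placeholder seizure labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```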

Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing

Procedia PDF Downloads 189
28776 Research on the Internal Mechanism of Overseas Market Opportunity Construction of the Emerging-Market Multinational Enterprises

Authors: Jie Zhang, Chaomin Zhang

Abstract:

Based on network theory, this paper selects three Emerging-Market Multinational Enterprises (EMNEs) as the research object and takes the typical overseas market opportunities constructed by them as the unit of analysis to investigate the internal mechanism of overseas market opportunity construction by EMNEs. The results show that: (1) EMNE overseas market opportunity construction is a complex process in which enterprises continuously interact with entities in their internal and external networks to achieve opportunity prototyping, opportunity creation, and opportunity optimization in overseas markets. (2) Governments, foreign institutions and industry associations in the institutional network, and competitors, partners, and customers in the commercial network, are the important external entities in the construction of overseas market opportunities. Through entity perception, relationship construction, and relationship utilization, enterprises can obtain the necessary information, resources, and political protection in the process of opportunity construction. (3) Organizations, project teams, and organizational sub-units within the enterprise are important internal entities for the construction of overseas market opportunities. Through the connections between these entities, enterprises can achieve the circulation of resources within the organization and promote the construction of overseas market opportunities. The conclusions expand the research on international opportunities and offer inspiration and guidance for the expansion of EMNEs into overseas markets.

Keywords: international (overseas) opportunities, opportunity construction, network entities, interaction, resource circulation

Procedia PDF Downloads 20
28775 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance and have contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
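
A hedged sketch of the combination described in the abstract (a loopless SVRG estimator with AdaGrad-style coordinate-wise steps) is given below; the constants and parameter choices are illustrative, not the analysis-backed ones from the paper.

```python
# L-SVRG gradient estimator driven by AdaGrad coordinate-wise step sizes.
# Illustrative sketch; eta, p and the example problem are assumed values.
import numpy as np

def adalvr(grad_i, n, x0, n_iters=5000, p=None, eta=1.0, eps=1e-8, seed=0):
    """grad_i(i, x): gradient of the i-th summand at x; n: number of summands."""
    rng = np.random.default_rng(seed)
    p = 1.0 / n if p is None else p          # probability of refreshing the anchor
    x, w = x0.copy(), x0.copy()              # current iterate and anchor point
    full_grad_w = np.mean([grad_i(j, w) for j in range(n)], axis=0)
    G = np.zeros_like(x0)                    # AdaGrad accumulator
    for _ in range(n_iters):
        i = rng.integers(n)
        g = grad_i(i, x) - grad_i(i, w) + full_grad_w   # L-SVRG estimator
        G += g ** 2
        x -= eta * g / (np.sqrt(G) + eps)               # AdaGrad step
        if rng.random() < p:                            # loopless anchor refresh
            w = x.copy()
            full_grad_w = np.mean([grad_i(j, w) for j in range(n)], axis=0)
    return x

# Example: least squares f(x) = (1/n) sum_i 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(200, 10)), rng.normal(size=200)
sol = adalvr(lambda i, x: (A[i] @ x - b[i]) * A[i], n=200, x0=np.zeros(10))
```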

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 71
28774 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. Existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term labors and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm recordings (time of recording < the 26th week of gestation) was significantly smaller than that of early term recordings. The sample entropy of late preterm recordings (time of recording > the 26th week of gestation) was significantly smaller than that of late term recordings. There was no significant difference between the term and preterm labor groups for the other features. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators for preterm labor.
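
A few of the listed features can be sketched as follows for one preprocessed (0.08-4 Hz band-passed) recording; the sampling frequency and parameters are assumed values.

```python
# Feature sketch for one EHG channel: RMS, zero-crossing count, and mean/median
# frequency of the Welch power spectrum.  fs and window settings are assumed.
import numpy as np
from scipy.signal import welch

def ehg_features(x, fs=20.0):
    rms = np.sqrt(np.mean(x ** 2))
    zero_crossings = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
    f, pxx = welch(x, fs=fs, nperseg=1024)
    mean_freq = np.sum(f * pxx) / np.sum(pxx)
    median_freq = f[np.searchsorted(np.cumsum(pxx), 0.5 * np.sum(pxx))]
    return {"rms": rms, "zero_crossings": zero_crossings,
            "mean_freq_hz": mean_freq, "median_freq_hz": median_freq}

# Example on a synthetic signal standing in for one EHG recording
rng = np.random.default_rng(0)
print(ehg_features(rng.normal(size=20 * 1800)))   # 30 min at an assumed 20 Hz
```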

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 572
28773 Multiscale Analysis of Shale Heterogeneity in Silurian Longmaxi Formation from South China

Authors: Xianglu Tang, Zhenxue Jiang, Zhuo Li

Abstract:

Characterization of shale multi-scale heterogeneity is an important part of evaluating the size and spatial distribution of shale gas reservoirs in sedimentary basins. The origin of shale heterogeneity has always been a hot research topic because it determines the description of shale micro-characteristics and the prediction of macro-scale quality reservoirs. Shale multi-scale heterogeneity was discussed based on thin section observation, FIB-SEM, QEMSCAN, TOC, XRD, mercury intrusion porosimetry (MIP), and nitrogen adsorption analysis of 30 core samples from the Silurian Longmaxi formation. Results show that shale heterogeneity can be characterized by pore structure and mineral composition. The heterogeneity of shale pores is shown by pores of different sizes at the nm-μm scale. Macropores (pore diameter > 50 nm) account for a larger percentage of pore volume than mesopores (pore diameter between 2 and 50 nm) and micropores (pore diameter < 2 nm); however, they have a lower specific surface area than mesopores and micropores. Fractal dimensions of the pores are higher than 2.7 from nitrogen adsorption data and higher than 2.8 from MIP data, indicating an extremely complex pore structure. This complexity is mainly due to the organic matter and clay minerals with complex pore network structures, and diagenesis makes it more complicated. The heterogeneity of shale minerals is shown by mineral grains, laminae, and different lithologies at the nm-km scale across continuously changing horizons. From the analysis of the change in mineral composition at each scale, the random arrangement of minerals in equal proportions, seasonal climate changes, large changes in the sedimentary environment, and provenance supply are considered to be the main causes of shale mineral heterogeneity from the microscopic to the macroscopic scale. Owing to the scale effect, the change in shale multi-scale heterogeneity is a discontinuous process, and there is a transformation boundary between homogeneous and inhomogeneous. Therefore, a shale multi-scale heterogeneity model is established by defining four types of homogeneous units at different scales, which can be used to guide the prediction of shale gas distribution from the micro scale to the macro scale.

Keywords: heterogeneity, homogeneous unit, multiscale, shale

Procedia PDF Downloads 454
28770 Hearing Conservation Aspects of Soldier’s Exposure to Harmful Noise within Military Armored Vehicles

Authors: Fink Nir

Abstract:

Soldiers within armored vehicles are exposed to continuous noise reaching levels as high as 120 dB. The use of hearing protection devices (HPDs) may attenuate noise by as much as 25 dB, but the attenuated noise reaching the ear is still harmful and may result in hearing loss. Military hearing conservation programs suggest methods to manage the harmful effects of noise, including noise absorption within vehicles, evaluating HPD performance, limiting exposure time, and providing guidance.
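
To illustrate the "limiting exposure time" element mentioned above, the following sketch estimates a permissible exposure duration from the noise level reaching the ear, assuming an 85 dBA criterion level for 8 hours and a 3 dB exchange rate (NIOSH-style values used here only as an example; they are not parameters from this paper).

```python
def allowable_exposure_hours(level_db, criterion_db=85.0, criterion_hours=8.0,
                             exchange_rate_db=3.0):
    """Halve the allowed duration for every exchange_rate_db above the criterion level."""
    return criterion_hours / (2.0 ** ((level_db - criterion_db) / exchange_rate_db))

# 120 dB inside the vehicle, reduced by roughly 25 dB with hearing protection:
print(allowable_exposure_hours(120 - 25))   # about 0.8 hours at 95 dBA
```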

Keywords: armored vehicle noise, hearing loss, hearing protection devices, military noise, noise attenuation

Procedia PDF Downloads 146
28771 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface

Authors: Renata Gerhardt, Detlev Belder

Abstract:

Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited to chemical analysis because it is non-destructive and molecules can be identified via the fingerprint region of their spectra. In this work, possibilities for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface, are investigated. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules and provides no structural information. Another often applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio; additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as end-of-line detection because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, the Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main strength of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization, and detection, on a single device. We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics that operates with segmented rather than continuous flow. Segmented flow is created by combining two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase within the carrier phase. The study surveys different chip designs for coupling chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus exploiting the surface enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as the means of detection.

Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS

Procedia PDF Downloads 246
28770 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System

Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya

Abstract:

The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrains. Earthquakes have caused enormous damage in these regions in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate the damage caused by earthquakes. It consists of sensor nodes, distributed over the region, that perform majority voting on the output of the seismic sensors in their vicinity and relay a message to a base station to alert residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor whose first stage continuously tracks seismic events in the incoming three-axis accelerometer signal and, in the presence of a seismic event, triggers the second-stage P-wave detector that detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized to minimize detection time and maximize detection accuracy. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS. In all test cases, the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time decreases as the sampling rate increases: for the test data, the detection time was around 2 seconds for data sampled at 10 Hz and reduced to 0.3 seconds for data sampled at 100 Hz.
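
As a rough illustration of the STA/LTA-style event detection referenced in the keywords, the sketch below computes a short-term/long-term average ratio over an accelerometer trace and flags samples where it exceeds a threshold; the window lengths, threshold, and characteristic function are assumptions and not the optimized parameters of the sensor described here.

```python
import numpy as np

def sta_lta_trigger(trace, fs, sta_sec=1.0, lta_sec=10.0, threshold=3.0):
    """Return sample indices where the STA/LTA ratio exceeds the threshold."""
    x = np.abs(np.asarray(trace, dtype=float))        # simple characteristic function
    nsta = int(sta_sec * fs)
    nlta = int(lta_sec * fs)
    # Trailing moving averages computed via cumulative sums.
    csum = np.cumsum(np.insert(x, 0, 0.0))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta         # short-term average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta         # long-term average
    # Align both averages so they end on the same samples of the trace.
    n = min(len(sta), len(lta))
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    triggered = np.flatnonzero(ratio > threshold)
    return triggered + (len(x) - n)                   # map back to trace sample indices

# Example on a synthetic trace: low-level noise with a burst starting at 30 s.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
trace = 0.01 * np.random.randn(len(t))
trace[int(30 * fs):] += 0.2 * np.sin(2 * np.pi * 5 * t[int(30 * fs):])
hits = sta_lta_trigger(trace, fs)
print("first trigger at", hits[0] / fs, "s" if len(hits) else "no trigger")
```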

Keywords: earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector

Procedia PDF Downloads 177