Search results for: nonlinear Shannon limit
771 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have long faced a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction are conducted manually. Under economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation from such a method is to develop models that minimize the human error strongly associated with manual prediction. Such models will help them understand how track degradation occurs over time under changing conditions (e.g., rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize maintenance cost and time and keep vehicles safe. The rail track characteristics collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, errors are possible, and sometimes these errors make the data unusable for prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks; accurate models increase track safety and decrease maintenance costs in the long term. In this research, a short review of rail track degradation prediction models is given before estimating rail track degradation for the curve sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
Keywords: ANFIS, MGT, prediction modeling, rail track degradation
Procedia PDF Downloads 335
770 Consequences of Transformation of Modern Monetary Policy during the Global Financial Crisis
Authors: Aleksandra Szunke
Abstract:
Monetary policy is an important pillar of the economy, directly affecting the condition of the banking sector. Depending on the strategy, it may both support the functioning of banking institutions and limit their excessively risky activities. The literature includes a large number of publications characterizing the initiatives implemented by central banks during the global financial crisis and the potential effects of non-standard monetary policy instruments. However, the empirical evidence about their effects and real consequences for the financial markets is still not conclusive. Even before the escalation of instability, Bernanke, Reinhart, and Sack (2004) analyzed the effectiveness of various unconventional monetary tools in lowering long-term interest rates in the United States and Japan. The obtained results largely confirmed the effectiveness of the zero-interest-rate policy and Quantitative Easing (QE) in achieving the goal of reducing long-term interest rates. Japan, considered the precursor of QE policy, also conducted research on the consequences of the non-standard instruments implemented to restore the country's financial stability. Although the literature about the effectiveness of Quantitative Easing in Japan is extensive, it does not uniquely specify whether it brought permanent effects. The main aim of the study is to identify the implications of the non-standard monetary policy implemented by selected central banks (the Federal Reserve System, Bank of England and European Central Bank), paying particular attention to the consequences in three areas: the size of the money supply, financial markets, and the real economy.
Keywords: consequences of modern monetary policy, quantitative easing policy, banking sector instability, global financial crisis
Procedia PDF Downloads 478
769 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer
Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe
Abstract:
The work proposes a technique that decreases the detection acquisition time of quantum holography schemes down to one-third, which opens the possibility of imaging moving objects. Since its invention, quantum holography with undetected photons has gained interest in the scientific community, mainly due to its ability to tailor the detected wavelengths according to the needs of the scheme implementation. While this wavelength flexibility grants the scheme a wide range of possible applications, an important matter remained to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the information of the object out of the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires longer acquisition times to produce well-resolved images, and, as a consequence, the measurement of moving objects remains out of reach of the imaging scheme. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. By reducing the number of needed frames, the acquisition time is reduced by a significant factor. This technique opens the door to the implementation of the scheme for moving objects; in particular, the application of this scheme to imaging live specimens comes one step closer.
Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology
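The four-step retrieval the abstract relies on is compact enough to sketch. Below is a minimal numpy version of the standard formula; in the quasi-parallel variant, the four inputs would be sub-sampled from a single chessboard-patterned frame rather than taken as separate exposures, and the synthetic data here are illustrative only.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    # Phase shifts 0, pi/2, pi, 3*pi/2:  phi = atan2(I3 - I1, I0 - I2)
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: a known phase map (within the unwrapped range) is recovered.
x = np.linspace(0, 4 * np.pi, 256)
phi_true = np.sin(x)                                    # test phase in (-pi, pi)
frames = [1 + np.cos(phi_true + k * np.pi / 2) for k in range(4)]
print(np.allclose(four_step_phase(*frames), phi_true))  # True
```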
Procedia PDF Downloads 114
768 Assessment Using Copulas of Simultaneous Damage to Multiple Buildings Due to Tsunamis
Authors: Yo Fukutani, Shuji Moriguchi, Takuma Kotani, Terada Kenjiro
Abstract:
If risk management of company-owned assets, risk assessment of real estate portfolios, and risk identification for entire regions are to be implemented, it is necessary to consider simultaneous damage to multiple buildings. This research focuses on the Sagami Trough earthquake tsunami, which could have a significant effect on the Japanese capital region, and proposes a method for simultaneous damage assessment using copulas that can take into consideration the correlation of tsunami depths and building damage between two sites. First, the tsunami inundation depths at the two sites were simulated by using a nonlinear long-wave equation. The tsunamis were simulated by varying the slip amount (five cases) and the depths (five cases) for each of 10 sources of the Sagami Trough. For each source, the frequency distribution of the tsunami inundation depth was evaluated by using the response surface method. Then, Monte-Carlo simulation was conducted, and frequency distributions of tsunami inundation depth were evaluated at the target sites for all sources of the Sagami Trough; these are the marginal distributions. Kendall's tau for the tsunami inundation simulation at the two sites was 0.83. Based on this value, the Gaussian copula, t-copula, Clayton copula, and Gumbel copula (n = 10,000) were generated. Then, the simultaneous distributions of the damage rate were evaluated using the marginal distributions and the copulas. When the correlation of tsunami inundation depth at the two sites was taken into account, the expected value hardly changed compared with the uncorrelated case, but the ninety-ninth percentile damage rate was approximately 2%, and the maximum value approximately 6%, when using the Gumbel copula.
Keywords: copulas, Monte-Carlo simulation, probabilistic risk assessment, tsunamis
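For readers unfamiliar with the copula step, the sketch below shows how the reported Kendall's tau of 0.83 fixes the Gumbel parameter (theta = 1/(1 - tau)) and how dependent depth pairs can then be drawn. It assumes statsmodels' copula API is available and uses hypothetical lognormal marginals in place of the paper's response-surface distributions.

```python
import numpy as np
from scipy import stats
from statsmodels.distributions.copula.api import GumbelCopula

tau = 0.83                    # Kendall's tau reported for the two sites
theta = 1.0 / (1.0 - tau)     # Gumbel copula relation: tau = 1 - 1/theta

u = GumbelCopula(theta=theta).rvs(10_000, random_state=0)  # dependent uniforms

# Hypothetical lognormal marginals standing in for the site-specific
# inundation-depth distributions from the response surfaces.
depth_a = stats.lognorm(s=0.6, scale=2.0).ppf(u[:, 0])
depth_b = stats.lognorm(s=0.5, scale=1.5).ppf(u[:, 1])
print(stats.kendalltau(depth_a, depth_b)[0])   # recovers ~0.83
```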
Procedia PDF Downloads 143
767 Turbulent Flow Characteristics and Bed Morphology around Circular Bridge Pier
Authors: Pratik Acharya
Abstract:
Scour is a natural phenomenon brought about by the erosive action of the flowing stream in alluvial channels. Frequent scouring around bridge piers may cause damage to the structures. In alluvial channels, a complex interaction between the streamflow and the bed particles results in scouring around piers, so the study of the characteristics of flow around piers can give sound knowledge about the scouring process. The present research investigates the turbulent flow characteristics around bridge piers and the corresponding changes in bed morphology. Laboratory experiments were carried out in a tilting flume with a sand bed, with velocities around the pier measured by an Acoustic Doppler Velocimeter. Measurements show that velocity and Reynolds stresses are negative near the bed upstream of the pier and near the free surface downstream of the pier. Downstream of the pier, Reynolds stresses change rapidly due to the formation of wake vortices. Experimental results show that secondary currents are more predominant downstream of the pier. As the flowing stream hits the pier, the flow separates, forming a downflow along the face of the pier due to a strong pressure gradient, as well as flow along the sides of the pier. Separation of flow around the pier scours the bed material and develops vortices. The downflow hits the bed and removes the bed material, which is then carried forward by the flow circulations along the sides of the pier. Eroded bed material is deposited along the centerline at the rear side of the pier and produces a hump in the downstream region. Initially, the rate of scouring is high, and it reduces gradually with increasing time. After a certain limit, equilibrium is established between the erosive capacity of the flowing stream and the resistance to motion of the bed particles.
Keywords: acoustic Doppler velocimeter, pier, Reynolds stress, scour depth, velocity
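As a pointer to how such ADV records are reduced, here is a minimal sketch of the Reynolds shear stress computation from velocity fluctuations; the synthetic record and flow numbers are placeholders, not the experiment's data.

```python
import numpy as np

def reynolds_stress(u, w, rho=1000.0):
    """-rho * <u'w'> (Pa) from synchronous streamwise (u) and vertical (w)
    ADV velocity records in m/s."""
    up, wp = u - u.mean(), w - w.mean()   # fluctuations about the time-mean
    return -rho * np.mean(up * wp)

# Hypothetical 25 Hz record, 60 s long (placeholder, not the experiment).
rng = np.random.default_rng(1)
n = 25 * 60
u = 0.35 + 0.05 * rng.standard_normal(n)
w = -0.02 + 0.03 * rng.standard_normal(n)
print(f"-rho<u'w'> = {reynolds_stress(u, w):.3f} Pa")
```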
Procedia PDF Downloads 148
766 Electronic Payment Recording with Payment History Retrieval Module: A System Software
Authors: Adrian Forca, Simeon Cainday III
Abstract:
The Electronic Payment Recording with Payment History Retrieval Module was developed specifically for the College of Science and Technology. This system software replaces the manual process of recording payments in the department, shifting from a slow and time-consuming procedure to a quick yet reliable and accurate way of recording payments, immediately generating receipts for every transaction. As an added feature, generation of recorded payment reports is integrated, replacing manual reporting with an easier, consolidated report. Furthermore, all recorded student payments can be retrieved immediately, making the system transparent and reliable payment recording software. Viewed as a whole, the system shifts from a manual process to an organized software technology, because the information is stored in a logically correct and normalized database. Further, the software is developed using a modern programming language and implements strict programming methods to validate all users accessing the system and to evaluate all data entered and information retrieved, ensuring data accuracy and reliability. In addition, the system identifies the user and limits access privileges, establishing boundaries on which information may be stored, modified, and updated, thus securing the information against unauthorized data manipulation. As a result, the system software eliminates the manual procedure and replaces it with modern information technology, making the whole payment recording process fast, secure, accurate, and reliable.
Keywords: collection, information system, manual procedure, payment
Procedia PDF Downloads 164
765 Risk Assessment of Heavy Metals in Soils at Electronic Waste Activity Sites within the Vicinity of Alaba International Market, Nigeria
Authors: A. A. Adebayo, A. O. Ogunkeyede, A. O. Adeigbe
Abstract:
Digital globalisation and the yearning of Nigerian society to overcome the digital divide have resulted in contamination of soil by heavy metals (HMs) from e-waste activities at Alaba international market, Lagos, Nigeria. The aim of this research was to determine the concentrations of various metals (cadmium (Cd), chromium (Cr), copper (Cu), and lead (Pb)) and identify their ecological and health risks for the people within the study area. A total of 60 soil samples were collected in the Alaba market study area, with two types of samples collected at each sampling point: topsoil (0-15 cm) and subsoil (15-30 cm). The metal concentration results showed that the soils were heavily contaminated by HMs in both topsoil and subsoil. The geoaccumulation and ecological risk indices revealed high pollution levels at all studied sites. The health risk assessment results suggested a high possibility of carcinogenic risk to humans, because the carcinogenic risk via the corresponding exposure pathways exceeded the safety limit of 10⁻⁶ (the acceptable level of carcinogenic risk for humans). Furthermore, inhalation of soil particles is the main exposure pathway through which Cr enters the human body for all ages. Children in the vicinity are more exposed to ingestion of Pb, since they tend to eat earth (pica) and repeatedly suck their fingers. This study provides basic information to create awareness of the need to introduce pollution control measures and to protect the ecosystem and human health within the study area at Alaba international market.
Keywords: contaminated soil, ecological risk, hazard index, risk factor, exposure pathways, heavy metals
Procedia PDF Downloads 252
764 Analysis of Pressure Drop in a Concentrated Solar Collector with Direct Steam Production
Authors: Sara Sallam, Mohamed Taqi, Naoual Belouaggadia
Abstract:
Solar thermal power plants using parabolic trough collectors (PTC) are currently a powerful technology for generating electricity. Most of these solar power plants use thermal oils as the heat transfer fluid. The oil is heated in the solar field and transfers the absorbed heat to an oil-water heat exchanger for the production of steam that drives the turbines of the power plant. Currently, we are seeking to develop PTCs with direct steam generation (DSG). This process consists of circulating water under pressure in the receiver tube to generate steam directly in the solar loop. This makes it possible to reduce the investment and maintenance costs of the PTCs (the oil-water exchangers are removed) and to avoid the environmental risks associated with the use of thermal oils. The pressure drops in these systems are an important parameter for ensuring their proper operation. The determination of these losses is complex because of the presence of two phases, and most often they are described by models using empirical correlations. A comparison of these models with experimental data was performed. Our calculations focused on the evolution of the pressure of the liquid-vapor mixture along the receiver tube of a PTC-DSG for pressure values and inlet flow rates ranging from 3 to 10 MPa and from 0.4 to 0.6 kg/s, respectively. The comparison of the numerical results with experiment demonstrates the validity of some models depending on the inlet pressures and flow rates in the PTC-DSG receiver tube. The analysis of the effects of these two parameters on the evolution of the pressure along the receiver tube shows that increasing the inlet pressure and decreasing the flow rate lead to minimal pressure losses.
Keywords: direct steam generation, parabolic trough collectors, pressure drop, empirical models
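The abstract compares several empirical two-phase models without naming them. As one representative of that class, here is a minimal homogeneous-equilibrium sketch of the frictional pressure gradient (Blasius friction factor, liquid-viscosity Reynolds number, one common convention among several); all numbers are illustrative, not the paper's cases.

```python
import numpy as np

def dp_dz_homogeneous(G, x, D, rho_l, rho_v, mu_l):
    """Frictional pressure gradient (Pa/m) from the homogeneous equilibrium
    model. G: mass flux (kg/m^2/s), x: vapor quality, D: tube bore (m)."""
    rho_m = 1.0 / (x / rho_v + (1 - x) / rho_l)   # mixture density
    Re = G * D / mu_l                             # liquid-viscosity basis
    f = 0.079 * Re**-0.25                         # Blasius (Fanning) friction factor
    return 2 * f * G**2 / (D * rho_m)

# Illustrative numbers for a DSG receiver tube (assumed, not from the paper):
G = 0.5 / (np.pi * 0.07**2 / 4)   # 0.5 kg/s through a 70 mm bore
print(dp_dz_homogeneous(G=G, x=0.3, D=0.07, rho_l=740.0, rho_v=50.0, mu_l=9e-5))
```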
Procedia PDF Downloads 139
763 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structure restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions are predicted as a node classification task using graph-based open-source EHR data, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged due to its close representation of real-world data and its volume. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch-Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually performing the node classifications using those embeddings. The model predicts patient conditions ranging from common to rare situations. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
Keywords: electronic health record, graph neural network, heterogeneous data, prediction
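A minimal PyTorch-Geometric node classifier gives the flavor of the training step described. It is a plain two-layer GCN stand-in (the paper additionally uses autoencoder-generated embeddings), and the feature and class sizes are placeholders rather than the Synthea schema.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ConditionClassifier(torch.nn.Module):
    """Two-layer GCN for node classification on a patient graph."""
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)

# x: node feature tensor; edge_index: [2, num_edges], e.g. pulled via pyTigerGraph.
model = ConditionClassifier(in_dim=64, hidden=32, n_classes=10)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
# One training step (data.x / data.edge_index / data.y / data.train_mask
# are assumed prepared upstream):
#   out = model(data.x, data.edge_index)
#   loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
#   loss.backward(); opt.step(); opt.zero_grad()
```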
Procedia PDF Downloads 86
762 An Investigation of the Use of Visible Spectrophotometric Analysis of Lead in an Herbal Tea Supplement
Authors: Salve Alessandria Alcantara, John Armand E. Aquino, Ma. Veronica Aranda, Nikki Francine Balde, Angeli Therese F. Cruz, Elise Danielle Garcia, Antonie Kyna Lim, Divina Gracia Lucero, Nikolai Thadeus Mappatao, Maylan N. Ocat, Jamille Dyanne L. Pajarillo, Jane Mierial A. Pesigan, Grace Kristin Viva, Jasmine Arielle C. Yap, Kathleen Michelle T. Yu, Joanna J. Orejola, Joanna V. Toralba
Abstract:
Lead is a neurotoxic metallic element that slowly accumulates in bones and tissues, especially if present in products taken on a regular basis such as herbal tea supplements. Although sensitive analytical instruments are already available, the USP limit test for lead is still widely used. However, because of its serious shortcomings, Lang Lang and his colleagues developed a spectrophotometric method for the determination of lead in all types of samples; this is the method adopted in this study. The actual procedure performed was divided into three parts: digestion, extraction, and analysis. For digestion, HNO3 and CH3COOH were used. Afterwards, masking agents and 0.003% and 0.001% dithizone in CHCl3 were added and used for the extraction. For the analysis, the standard addition method and colorimetry were performed. This was done in triplicate under two conditions. The first condition, using 25 µg/mL of standard, resulted in very low absorbances, with an r² of 0.551. This led to the use of a higher concentration, 1 mg/mL, for the second condition. Precipitation of lead cyanide was observed, and the absorbance readings were relatively higher but between 0.15-0.25, resulting in a very low r² of 0.429. LOQ and LOD were not computed due to the limitations of the Milton-Roy spectrophotometer. The method performed has a shorter digestion time and uses fewer but more accessible reagents. However, the optimum ratio of the dithizone-lead complex must be observed in order to obtain reliable results while exploring other concentrations of standards.
Keywords: herbal tea supplement, lead-dithizone complex, standard addition, visible spectroscopy
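The standard-addition arithmetic is easy to make concrete: absorbance is regressed on added standard, and the unknown concentration is the magnitude of the x-intercept, C = b/m. The readings below are invented for illustration and are not the study's data.

```python
import numpy as np

# Standard addition: fit absorbance vs. added standard with a line,
# then read the unknown off the x-intercept magnitude, C_x = b / m.
added = np.array([0.0, 0.2, 0.4, 0.6, 0.8])     # mg/L Pb added (hypothetical)
absorb = np.array([0.110, 0.162, 0.215, 0.268, 0.320])

m, b = np.polyfit(added, absorb, 1)
c_unknown = b / m
print(f"lead in sample extract ~ {c_unknown:.3f} mg/L")
```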
Procedia PDF Downloads 387
761 Application of Continuum Damage Concept to Simulation of the Interaction between Hydraulic Fractures and Natural Fractures
Authors: Anny Zambrano, German Gonzalez, Yair Quintero
Abstract:
The continuum damage concept is used to study the interaction between hydraulic fractures and natural fractures. The objective is to represent the path of and relation between these two fracture types and predict their complex behavior without the need to pre-define their direction, as occurs in other finite element applications, providing results more consistent with the physical behavior of the phenomenon. The approach uses finite element simulations in Abaqus to model damage fracturing, i.e., the fracturing process by damage propagation in a rock. The phenomenon is modeled in two dimensions (2D), so that the fracture is represented by a line and the crack front by a point. The model considers nonlinear constitutive behavior, finite strain, time-dependent deformation, complex boundary conditions, strain hardening and softening, and strain-based damage evolution in compression and tension. The complete governing equations are provided, and the method is described in detail to permit readers to replicate all results. The model is compared to published and available models. Comparisons are focused on five interactions between natural fractures (NF) and hydraulic fractures: fracture arrested at a NF, crossing a NF with or without offset, branching at intersecting NFs, branching at the end of a NF, and NF dilation due to shear slippage. The most significant new finding is that it is not necessary to use pre-defined propagation directions, and that the stress condition can be evaluated as a dominant factor in the process. This is important because the model can represent more realistically the complex hydraulic fractures generated, and it can be a valuable tool to predict potential problems and different geometries of the fracture network in the process of fracturing due to fluid injection.
Keywords: continuum damage, hydraulic fractures, natural fractures, complex fracture network, stiffness
Procedia PDF Downloads 343
760 Seismic Vulnerability of Structures Designed in Accordance with the Allowable Stress Design and Load Resistant Factor Design Methods
Authors: Mohammadreza Vafaei, Amirali Moradi, Sophia C. Alih
Abstract:
The method selected for the design of structures can affect not only their seismic vulnerability but also their construction cost. For the design of steel structures, two distinct methods have been introduced by existing codes, namely allowable stress design (ASD) and load resistant factor design (LRFD). This study investigates the effect of using these design methods on the seismic vulnerability and construction cost of steel structures. Specifically, a 20-story building equipped with a special moment-resisting frame and an eccentrically braced system was selected for this study. The building was designed for three different intensities of peak ground acceleration, namely 0.2 g, 0.25 g, and 0.3 g, using the ASD and LRFD methods. The required sizes of beams, columns, and braces were obtained using response spectrum analysis. Then, the designed frames were subjected to nine natural earthquake records scaled to the design response spectrum. For each frame, the base shear, story shears, and inter-story drifts were calculated and compared. Results indicated that the LRFD method led to a more economical design for the frames. In addition, the LRFD method resulted in lower base shears and larger inter-story drifts when compared with the ASD method. It was concluded that the application of the LRFD method not only reduced the weights of structural elements but also provided a higher safety margin against seismic actions when compared with the ASD method.
Keywords: allowable stress design, load resistant factor design, nonlinear time history analysis, seismic vulnerability, steel structures
Procedia PDF Downloads 269
759 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between widely diffused non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying corrective arrangements and adding filters. The application of passive filters is an effective solution for harmonic mitigation, mainly because filters offer high efficiency, simplicity, and economy; additionally, their different possible frequency response characteristics can be exploited to achieve required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single-tuned passive filters that works best in distribution networks, in order to economically limit violations at a given point of common coupling (PCC). This article suggests that a single-tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique minimizes voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: harmonics, passive filter, power factor, power quality
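For orientation, classical single-tuned filter sizing proceeds from the reactive-power requirement and the tuned harmonic: pick C for the fundamental-frequency compensation, then L so the branch resonates at the h-th harmonic. The sketch below shows that textbook calculation; it is not the paper's constrained optimizer, and the network values are assumed.

```python
import math

def single_tuned_filter(V_ll, Q_var, h, f1=50.0, q_factor=40.0):
    """Classical sizing of a single-tuned shunt filter branch.
    V_ll: line-to-line voltage (V), Q_var: fundamental reactive power (var),
    h: tuned harmonic order, q_factor: filter quality factor."""
    w1 = 2 * math.pi * f1
    Xc = V_ll**2 / Q_var            # capacitive reactance at f1
    C = 1.0 / (w1 * Xc)
    L = 1.0 / ((h * w1)**2 * C)     # series L tuned to h * f1
    R = h * w1 * L / q_factor       # damping resistance from the quality factor
    return C, L, R

C, L, R = single_tuned_filter(V_ll=400.0, Q_var=50e3, h=5)
print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.3f} mH, R = {R:.3f} ohm")
```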
Procedia PDF Downloads 306
758 Climate Changes in Albania and Their Effect on Cereal Yield
Authors: Lule Basha, Eralda Gjika
Abstract:
This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to model the relationship between cereal yield and each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have the problem of multicollinearity. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest
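A minimal scikit-learn version of the random-forest step looks like the sketch below; the synthetic series and column names merely stand in for the observed 1960-2021 Albanian data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for 62 annual observations (1960-2021).
rng = np.random.default_rng(0)
n = 62
X = pd.DataFrame({
    "avg_temp": rng.normal(12.5, 0.8, n),
    "avg_rainfall": rng.normal(1100, 150, n),
    "fertilizer": rng.normal(80, 20, n),
    "arable_land": rng.normal(600, 40, n),
    "cereal_land": rng.normal(150, 20, n),
    "n2o_emissions": rng.normal(2.5, 0.4, n),
})
# Toy response: temperature hurts yield, inputs and land help, plus noise.
y = (-0.3 * X["avg_temp"] + 0.01 * X["fertilizer"]
     + 0.02 * X["cereal_land"] + rng.normal(0, 0.3, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("R2:", round(r2_score(y_te, rf.predict(X_te)), 3))
print(dict(zip(X.columns, rf.feature_importances_.round(3))))
```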
Procedia PDF Downloads 91
757 Surface-Enhanced Raman Spectroscopy-Based Detection of SARS-CoV-2 Through In Situ One-pot Electrochemical Synthesis of 3D Au-Lysate Nanocomposite Structures on Plasmonic Au Electrodes
Authors: Ansah Iris Baffour, Dong-Ho Kim, Sung-Gyu Park
Abstract:
The ongoing COVID-19 pandemic, caused by the SARS-CoV-2 virus, is gradually shifting to an endemic phase, which implies the outbreak is far from over and will be difficult to eradicate. Global cooperation has led to unified precautions that aim to suppress epidemiological spread (e.g., through travel restrictions) and reach herd immunity (through vaccinations); however, the primary strategy to restrain the spread of the virus in mass populations relies on screening protocols that enable rapid on-site diagnosis of infections. Herein, we employed surface-enhanced Raman spectroscopy (SERS) for the rapid detection of SARS-CoV-2 lysate on an Au-modified Au nanodimple (AuND) electrode. Through in situ one-pot Au electrodeposition on the AuND electrode, Au-lysate nanocomposites were synthesized, generating 3D internal hotspots for large SERS signal enhancements within 30 s of the deposition. The capture of lysate into newly generated plasmonic nanogaps within the nanocomposite structures enhanced metal-spike protein contact in 3D spaces and served as hotspots for sensitive detection. The limit of detection of SARS-CoV-2 lysate was 5 × 10⁻² PFU/mL. Interestingly, ultrasensitive detection of the lysates of influenza A/H1N1 and respiratory syncytial virus (RSV) was also possible, but the method showed excellent selectivity for SARS-CoV-2 in lysate solution mixtures. We investigated the practical applicability of the approach for rapid on-site diagnosis by detecting SARS-CoV-2 lysate spiked into normal human saliva at ultralow concentrations. The results presented demonstrate the reliability and sensitivity of the assay for rapid diagnosis of COVID-19.
Keywords: label-free detection, nanocomposites, SARS-CoV-2, surface-enhanced Raman spectroscopy
Procedia PDF Downloads 123
756 Chemiluminescent Detection of Microorganisms in Food/Drug Product Using Reducing Agents and Gold Nanoplates
Authors: Minh-Phuong Ngoc Bui, Abdennour Abbas
Abstract:
Microbial spoilage of food and drugs has been a constant nuisance and an unavoidable problem throughout history, affecting food/drug quality and safety in a variety of ways. A simple and rapid test for fungi and bacteria in food/drug and environmental clinical samples is essential for proper management of contamination. A number of different techniques have been developed for the detection and enumeration of foodborne microorganisms, including plate counting, enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), nucleic acid sensors, and electrical and microscopy methods. However, the significant drawbacks of these techniques are the high demand for operating skills and the time and cost involved. In this report, we introduce a rapid method for the detection of bacteria and fungi in food/drug products using a specific interaction between a reducing agent (tris(2-carboxyethyl)phosphine, TCEP) and microbial surface proteins. The chemical reaction was transferred to a transduction system using gold nanoplate-enhanced chemiluminescence. We optimized our nanoplate synthesis conditions, characterized the chemiluminescence parameters, and optimized conditions for the microbial assay. The new detection method was applied to the rapid detection of bacteria (E. coli and Lactobacillus sp.) and fungi (Mucor sp.), with a limit of detection as low as single-digit cells per mL within 10 min using a portable luminometer. We expect our simple and rapid detection method to be a powerful alternative to the conventional plate counting and immunoassay methods for rapid screening of microorganisms in food/drug products.
Keywords: microorganism testing, gold nanoplates, chemiluminescence, reducing agents, luminol
Procedia PDF Downloads 299
755 Evaluation of Water Chemistry and Quality Characteristics of Işıklı Lake (Denizli, Türkiye)
Authors: Abdullah Ay, Şehnaz Şener
Abstract:
For the sustainable use and protection of lakes, which are among the most important water resources for meeting water needs and ensuring ecological balance, it is of great importance to reveal their current status and conduct research in this direction. In this context, the purpose of this study is to determine the hydrogeochemical properties, as well as the water quality and usability characteristics, of Işıklı Lake within the Lakes Region of Turkey. Işıklı Lake is a tectonic lake located in the Aegean Region of Turkey, with a surface area of approximately 36 km². Temperature (T), electrical conductivity (EC), hydrogen ion concentration (pH), dissolved oxygen (%, mg/l), oxidation-reduction potential (ORP; mV), and total dissolved solids (TDS; mg/l) of water samples taken from the lake were determined by in situ analysis. Major ion and heavy metal analyses were carried out under laboratory conditions. Additionally, the relationship between major ion concentrations and TDS values of the Işıklı Lake water samples was determined by correlation analysis. According to the results obtained, Mg, Ca, and HCO₃ ions are dominant in the lake water, and the lake water is in the Ca-Mg-HCO₃ water facies. According to the statistical analysis, a strong, positive relationship was found between the TDS value and bicarbonate and calcium (R² = 0.61 and 0.7, respectively), while no significant relationship was detected between the TDS value and the other chemical elements. Although the waters are generally in water quality class I, they fall in class IV in terms of sulfur and aluminum; this situation is due to the rock-water interaction in the region. When the analysis results of the lake water were compared with the drinking water limit values specified by TSE-266 (2005) and WHO (2017), the water was determined to be unsuitable for drinking water use in terms of Pb, Se, As, and Cr. When the waters were evaluated in terms of pollution, 50% of the samples were found to carry pollution loads in terms of Al, As, Fe, NO₃, and Cu.
Keywords: Işıklı Lake, water chemistry, water quality, pollution, arsenic, Denizli
Procedia PDF Downloads 23
754 Automation of Pneumatic Seed Planter for System of Rice Intensification
Authors: Tukur Daiyabu Abdulkadir, Wan Ishak Wan Ismail, Muhammad Saufi Mohd Kassim
Abstract:
Seed singulation and accuracy in seed spacing are the major challenges associated with the adoption of mechanical seeders for the system of rice intensification. In this research, the metering system of a pneumatic planter was modified and automated for increased precision to meet the demands of the system of rice intensification (SRI). The chain and sprocket mechanism of a conventional vacuum planter was replaced with an electromechanical system made up of a set of servo motors, a limit switch, a microcontroller, and a wheel divided into 10 equal angles. The circumference of the planter wheel was determined, based on which the seed spacing was computed and mapped to the angles of the metering wheel. A program was then written and uploaded to an Arduino microcontroller, which automatically turns the seed plates for seeding once the required distance is covered. The servo motor was calibrated with the aid of LabVIEW. The machine was then calibrated using a grease belt, varying the servo speed through voltage variation from 37 rpm to 47 rpm until an optimum value of 40 rpm was obtained at a forward speed of 5 kilometers per hour. A pressure of 1.5 kPa was found to be optimum, under which no skip or double was recorded. Precision in spacing (coefficient of variation), miss index, multiple index, doubles, and skips were investigated. No skip or double was recorded at either laboratory or field level. The operational parameters under consideration were evaluated both in the laboratory and in the field. Even though there was little variation between the laboratory and field values of precision in spacing, multiple index, and miss index, the difference is not significant, as both laboratory and field values fall within the acceptable range.
Keywords: automation, calibration, pneumatic seed planter, system of rice intensification
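The mapping from forward speed and hill spacing to metering-wheel speed can be written out directly. The sketch below assumes one cell (one 36-degree step of the 10-sector wheel) per hill; the 25 cm SRI spacing is an assumed example, which is why it does not reproduce the empirically calibrated 40 rpm exactly.

```python
def metering_rpm(speed_kmh, spacing_m, cells_per_rev=10):
    """Required metering-wheel speed so that one cell passes the drop
    point per hill. Illustrative kinematics; the paper calibrated the
    servo empirically to ~40 rpm at 5 km/h."""
    v = speed_kmh / 3.6                 # forward speed, m/s
    hills_per_s = v / spacing_m         # seed drops needed per second
    return hills_per_s / cells_per_rev * 60.0

print(f"{metering_rpm(5.0, 0.25):.1f} rpm for 25 cm spacing at 5 km/h")
```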
Procedia PDF Downloads 642
753 Fuzzy Climate Control System for Hydroponic Green Forage Production
Authors: Germán Díaz Flórez, Carlos Alberto Olvera Olvera, Domingo José Gómez Meléndez, Francisco Eneldo López Monteagudo
Abstract:
In recent decades, population growth has exerted great pressure on natural resources. Two of the scarcest and most difficult to obtain resources, arable land and water, are closely interrelated in satisfying the demand for food production. In Mexico, the agricultural sector accounts for more than 70% of water consumption. Therefore, maximizing the efficiency of current production systems is inescapable, and it is essential to utilize techniques and tools that enable significant savings of water, labor, and fertilizer. In this study, we present a production module for hydroponic green forage (HGF), which is a viable alternative for the production of livestock feed in semi-arid and arid zones. In addition to the forage production module, the equipment has a climate and irrigation control system powered by photovoltaics. The climate control, irrigation, and power management are based on fuzzy control techniques. Fuzzy control provides an accurate method for designing controllers for nonlinear dynamic physical phenomena such as temperature and humidity, as well as others such as lighting level, aeration, and irrigation, using heuristic information. This work first covers the production of hydroponic green forage, suitable weather conditions, and fertigation, and subsequently presents the design of the production module and the design of the controller. A simulation of the behavior of the production module and the end results of actual operation of the equipment are presented, demonstrating the easy design, flexibility, robustness, and low cost that this equipment represents for the primary sector.
Keywords: fuzzy, climate control system, hydroponic green forage, forage production module
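A minimal Mamdani-style controller in the spirit of the module can be sketched with scikit-fuzzy; the universes, membership functions, and two-rule base below are assumptions for illustration, not the implemented rule base.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Universes of discourse (assumed ranges for a greenhouse-like module).
temp = ctrl.Antecedent(np.arange(5, 41, 1), 'temperature')    # deg C
hum = ctrl.Antecedent(np.arange(20, 101, 1), 'humidity')      # % RH
irrig = ctrl.Consequent(np.arange(0, 101, 1), 'irrigation')   # % duty cycle

# Triangular membership functions (illustrative break points).
temp['low'] = fuzz.trimf(temp.universe, [5, 5, 20])
temp['ok'] = fuzz.trimf(temp.universe, [15, 24, 32])
temp['high'] = fuzz.trimf(temp.universe, [28, 40, 40])
hum['dry'] = fuzz.trimf(hum.universe, [20, 20, 55])
hum['wet'] = fuzz.trimf(hum.universe, [45, 100, 100])
irrig['short'] = fuzz.trimf(irrig.universe, [0, 0, 50])
irrig['long'] = fuzz.trimf(irrig.universe, [40, 100, 100])

rules = [
    ctrl.Rule(temp['high'] & hum['dry'], irrig['long']),   # hot and dry: water more
    ctrl.Rule(temp['low'] | hum['wet'], irrig['short']),   # cool or humid: water less
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['temperature'] = 33
sim.input['humidity'] = 35
sim.compute()
print(f"irrigation duty: {sim.output['irrigation']:.1f} %")
```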
Procedia PDF Downloads 397
752 Biogenic Amines Production during RAS Cheese Ripening
Authors: Amr Amer
Abstract:
Cheeses are among those high-protein foodstuffs in which enzymatic and microbial activities cause the formation of biogenic amines from amino acid decarboxylation. The amount of biogenic amines in cheese may act as a useful indicator of the hygienic quality of the product; in other words, their presence in cheese is related to its spoilage and safety. The formation of biogenic amines during the ripening of Ras cheese (an Egyptian hard cheese) was investigated for 4 months. Three batches of Ras cheese were manufactured using the traditional Egyptian method. From each batch, samples were collected at 1, 7, 15, 30, 60, 90, and 120 days after cheese manufacture. The concentrations of the biogenic amines tyramine, histamine, cadaverine, and tryptamine were analyzed by high-performance liquid chromatography (HPLC). There was a significant increase (P<0.05) in tyramine levels, from 4.34±0.07 mg/100g on the first day of storage to 88.77±0.14 mg/100g at 120 days of storage. Histamine and cadaverine levels followed the same increasing pattern as tyramine, reaching 64.94±0.10 and 28.28±0.08 mg/100g at 120 days of storage, respectively. In contrast, the tryptamine concentration fluctuated during the ripening period, decreasing from 3.24±0.06 to 2.66±0.11 mg/100g at 60 days of storage and then reaching 5.38±0.08 mg/100g at 120 days of storage. Biogenic amines can be formed in cheese during production and storage: many variables, such as pH, salt concentration, and bacterial activity, as well as moisture, storage temperature, and ripening time, play a relevant role in their formation. Comparing the obtained results with the standard recommended by the Food and Drug Administration (FDA, 2001), the high levels of biogenic amines in various Ras cheeses consumed in Egypt exceeded the permissible value (10 mg%), which seems to pose a threat to public health. In this study, the presence of high concentrations of biogenic amines (tyramine, histamine, cadaverine, and tryptamine) in Egyptian Ras cheese reflects the poor hygienic conditions under which it was produced and stored. Accordingly, the levels of biogenic amines in different cheeses should comply with the safe permissible limit recommended by the FDA to ensure human safety.
Keywords: Ras cheese, biogenic amines, tyramine, histamine, cadaverine
Procedia PDF Downloads 436
751 Kou Jump Diffusion Model: An Application to the SP 500; Nasdaq 100 and Russell 2000 Index Options
Authors: Wajih Abbassi, Zouhaier Ben Khelifa
Abstract:
The present research points towards the empirical validation of three option valuation models: the ad-hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis has been conducted on a sample of 26,974 options written on three indexes (the S&P 500, Nasdaq 100, and Russell 2000) that were traded during 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from cross-sections of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential distribution has three interesting properties: the leptokurtic feature, the memoryless property, and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to exhibit both overreaction and underreaction to good and bad news, respectively. Despite these advantages, there are not many empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to have used nonlinear curve-fitting via the trust-region-reflective algorithm on cross-sections of options to estimate the structural parameters of the Kou jump-diffusion model.
Keywords: jump-diffusion process, Kou model, leptokurtic feature, trust-region-reflective algorithm, US index options
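To make the dynamics concrete, here is a minimal Monte-Carlo sketch of the Kou model: a Brownian component plus compound-Poisson jumps with asymmetric double-exponential sizes. Parameters are illustrative, and the drift omits the martingale (jump-compensator) correction, so this is not a risk-neutral pricer. For calibration, scipy.optimize.least_squares(method='trf') exposes the trust-region-reflective algorithm the paper uses.

```python
import numpy as np

def kou_terminal(s0, mu, sigma, lam, p, eta1, eta2, T, n_steps, n_paths, seed=0):
    """Euler scheme for the Kou log-price: diffusion plus compound-Poisson
    jumps whose sizes are double-exponential (prob. p up ~ Exp(eta1),
    prob. 1-p down ~ Exp(eta2))."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    logS = np.full(n_paths, np.log(s0))
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        nj = rng.poisson(lam * dt, n_paths)
        jumps = np.zeros(n_paths)
        for k in np.flatnonzero(nj):
            up = rng.random(nj[k]) < p
            sizes = np.where(up, rng.exponential(1 / eta1, nj[k]),
                             -rng.exponential(1 / eta2, nj[k]))
            jumps[k] = sizes.sum()
        logS += (mu - 0.5 * sigma**2) * dt + sigma * dW + jumps
    return np.exp(logS)

ST = kou_terminal(100.0, 0.05, 0.2, lam=1.0, p=0.4, eta1=10.0, eta2=5.0,
                  T=1.0, n_steps=252, n_paths=10_000)
print(ST.mean(), np.percentile(ST, [1, 99]))
```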
Procedia PDF Downloads 429
750 Laser Writing on Vitroceramic Disks for Petabyte Data Storage
Authors: C. Busuioc, S. I. Jinga, E. Pavel
Abstract:
The continuous need for non-volatile memories with higher storage capacity, smaller dimensions and weight, and lower cost has led to the exploration of optical lithography on active media, as well as patterned magnetic composites. In this context, optical lithography is a technique that can provide a significant decrease of the information bit size to the nanometric scale. However, some restrictions arise from the need to break the optical diffraction limit. Major achievements have been obtained by employing a vitroceramic material as the active medium and a laser beam operated at low power for the direct writing procedure. Thus, optical discs with ultra-high density were fabricated by a conventional melt-quenching method starting from analytical-purity reagents. They were subsequently used for 3D recording based on their photosensitive features. Naturally, the next step consists in the elucidation of the composition and structure of the active centers, in correlation with the use of silver and rare-earth compounds for the synthesis of the optical supports. This has been accomplished by modern characterization methods, namely transmission electron microscopy coupled with selected area electron diffraction, scanning transmission electron microscopy, and electron energy loss spectroscopy. The influence of laser diode parameters, silver concentration, and fluorescent compound formation on the writing process and final material properties was investigated. The results indicate capacities two orders of magnitude higher than other reported information storage systems. Moreover, the fluorescent photosensitive vitroceramics may be integrated into other applications that rely on nanofabrication as the driving force in the electronics and photonics fields.
Keywords: data storage, fluorescent compounds, laser writing, vitroceramics
Procedia PDF Downloads 225
749 Simultaneous Removal of Phosphate and Ammonium from Eutrophic Water Using Dolochar Based Media Filter
Authors: Prangya Ranjan Rout, Rajesh Roshan Dash, Puspendu Bhunia
Abstract:
With the aim of enhancing nutrient (ammonium and phosphate) removal from eutrophic wastewater at reduced cost, a novel media-based multistage biofilter with a drop aeration facility was developed in this work. The biofilter was packed with a discarded sponge iron industry by-product, 'dolochar', primarily to remove phosphate via a physicochemical approach. In the multistage biofilter, drop aeration was achieved by the percolation of the gravity-fed wastewater through the filter media and the dropping of wastewater from stage to stage. Ammonium present in the wastewater was adsorbed by the filter media and the biomass grown on the filter media, and was subsequently converted to nitrate through biological nitrification under the aerobic conditions realized by drop aeration. The performance of the biofilter in treating real eutrophic wastewater was monitored for a period of about 2 months. The influent phosphate concentration was in the range of 16-19 mg/L, and the ammonium concentration was in the range of 65-78 mg/L. The average nutrient removal efficiencies observed during the study period were 95.2% for phosphate and 88.7% for ammonium, with mean final effluent concentrations of 0.91 and 8.74 mg/L, respectively. Furthermore, the subsequent release of nutrients from the saturated filter media after completion of the treatment process was studied, and thin-layer funnel analytical test results reveal the slow nutrient-release nature of spent dolochar, thereby recommending its potential agricultural application. Thus, the biofilter displays immense prospects for treating real eutrophic wastewater, significantly decreasing the levels of nutrients, keeping the effluent nutrient concentrations within the permissible limit and, more importantly, facilitating the conversion of waste materials into usable ones.
Keywords: ammonium removal, phosphate removal, multi-stage bio-filter, dolochar
Procedia PDF Downloads 194
748 Ferromagnetic Potts Models with Multi Site Interaction
Authors: Nir Schreiber, Reuven Cohen, Simi Haber
Abstract:
The Potts model has been widely explored in the literature for the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins simultaneously interact. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside in the corners of an elementary square. Each spin can take an integer value 1,2,...,q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively; the transition nature of the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zeroth-order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking into account higher-order site contributions, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite-size behavior.
Keywords: entropic sampling, lattice animals, phase transitions, Potts model
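For readers who want the flavor of the numerics, below is a compact Wang-Landau sketch for a plaquette-interaction Potts model. The energy convention (-1 for every monochromatic elementary square), the small lattice, and q = 3 are assumptions for illustration, not necessarily the paper's exact Hamiltonian, and the flatness and stopping rules are deliberately loose.

```python
import numpy as np

rng = np.random.default_rng(0)
L, q = 8, 3                        # small periodic lattice; E ranges over -L^2 .. 0
spins = rng.integers(q, size=(L, L))

def plaq(s, i, j):
    # -1 if the four corners of the plaquette at (i, j) share one color
    a = s[i, j]
    return -1 if (a == s[i, (j + 1) % L] == s[(i + 1) % L, j]
                  == s[(i + 1) % L, (j + 1) % L]) else 0

def local_e(s, i, j):
    # energy of the four plaquettes that contain site (i, j)
    return sum(plaq(s, (i + di) % L, (j + dj) % L)
               for di in (-1, 0) for dj in (-1, 0))

E = sum(plaq(spins, i, j) for i in range(L) for j in range(L))
lng = np.zeros(L * L + 1)          # running ln g(E), indexed by -E
hist = np.zeros(L * L + 1)
lnf = 1.0
while lnf > 1e-3:                  # loose stopping rule, enough for a sketch
    for _ in range(20_000):
        i, j = rng.integers(L, size=2)
        old_spin, e_before = spins[i, j], local_e(spins, i, j)
        spins[i, j] = rng.integers(q)
        dE = local_e(spins, i, j) - e_before
        if rng.random() < np.exp(lng[-E] - lng[-(E + dE)]):
            E += dE                      # accept the move
        else:
            spins[i, j] = old_spin       # reject: restore the spin
        lng[-E] += lnf
        hist[-E] += 1
    visited = hist[hist > 0]
    if visited.min() > 0.8 * visited.mean():   # histogram "flat enough"
        hist[:] = 0
        lnf /= 2.0                             # f -> sqrt(f)
print((lng - lng.min())[:5])       # relative ln g(E) near E = 0
```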
Procedia PDF Downloads 160
747 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external autoencoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
Procedia PDF Downloads 89
746 Effect of Slope Angle on Gougerd Landslide Stability in Northwest of Iran
Authors: Akbar Khodavirdizadeh
Abstract:
The Gougerd village landslide, with an area of about 150 hectares, is located southwest of Khoy city in northwest Iran. The landslide began more than 21 years ago and has caused damage to houses, such as fissures in walls and cracks in the ground and foundations. The main mechanism of the landslide is rotational, with an elevation difference between the top and the foot of about 230 m. The thickness of the slide mass, based on geoelectrical investigation, is about 16 m. The upper layer of the slope is silty sand, and the lower layer is clayey gravel. In this paper, the stability of the landslide is analyzed statically under different groundwater surface conditions and slope angle changes, using the limit equilibrium approach and the simplified Bishop method. The results of the 72 stability analyses showed that the slope stability of the Gougerd landslide increased with increasing depth of the groundwater surface below the slope crown, and especially when the slope angle was decreased, the safety factor increased beyond its previous value. The groundwater surface depth below the slope crown required for stability was 14 m, and 6.5 m when the slope angle was decreased by 3 degrees. In critical conditions, the corresponding depths were 3.5 m and, with the slope angle decreased by 3 degrees, 0.5 m. At groundwater surface depths of 3 m, 7 m, and 10 m below the slope crown, safety factors of 0.97, 1.19, and 1.33 were obtained, respectively; with the slope angle decreased by 3 degrees, the corresponding values were 1.27, 1.54, and 1.72. According to the results of this study, for each 1 m decrease in groundwater level, the safety factor increased by 5%, and for each 1 degree of reduction in slope angle, it increased by 15%. Thus, the effect of the slope angle on the stability of the Gougerd landslide was greater than the groundwater effect.
Keywords: Gougerd landslide, stability analysis, slope angle, groundwater, Khoy
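The simplified Bishop iteration behind such safety factors is short enough to sketch; the slice data below are an invented five-slice example, not the Gougerd cross-section.

```python
import numpy as np

def bishop_fs(W, alpha, b, c, phi, u, tol=1e-6):
    """Simplified Bishop factor of safety for a circular slip surface.
    W: slice weights (kN/m), alpha: base inclinations (rad), b: slice
    widths (m), c: cohesion (kPa), phi: friction angle (rad), u: pore
    pressures (kPa). Iterates because FS appears on both sides."""
    fs = 1.0
    for _ in range(100):
        m_a = np.cos(alpha) * (1 + np.tan(alpha) * np.tan(phi) / fs)
        resisting = np.sum((c * b + (W - u * b) * np.tan(phi)) / m_a)
        fs_new = resisting / np.sum(W * np.sin(alpha))
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

# Illustrative five-slice example (assumed geometry and soil parameters):
W = np.array([120.0, 260.0, 340.0, 300.0, 150.0])
alpha = np.radians([-5.0, 8.0, 18.0, 30.0, 42.0])
b = np.full(5, 4.0)
print(bishop_fs(W, alpha, b, c=12.0, phi=np.radians(24.0), u=np.zeros(5)))
```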
Procedia PDF Downloads 169
745 Performance Evaluation of Cement Mortar with Crushed Stone Dust as Fine Aggregates
Authors: Pradeep Kumar
Abstract:
The present work is based on the application of cement mortar with natural sand and discontinuous fibers, through which the bending behavior of skinny beams was evaluated. This research studies the effects of combining reinforcing steel meshes (continuous steel reinforcement) with discontinuous fibers as reinforcement in skinny-walled, Portland-cement-based mortar with crushed stone dust as the fine aggregate. The term 'skinny' means the thickness of the beams is less than 25 mm. The main idea behind this combination is to satisfy the ultimate strength limit state through the steel mesh reinforcement (as the main reinforcement) and to control cracking under service loads through the fiber (Recron 3s) reinforcement (as secondary reinforcement). The main objective of this study is to investigate the bending behavior of mortar-based thin beams reinforced with only one layer of steel mesh (with various transverse wire spacings) and with Recron 3s (Reliance) fibers. A wide experimental program of bending tests was undertaken. The following variables are investigated: (a) the reference mesh size, 25.4 x 25.4 mm and 50.8 x 50.8 mm; (b) the transverse wire spacing, 25.4 mm, 50.8 mm, and no transverse wires; (c) the type of fibers, Reliance (Recron 3s, 6 mm length); and (d) the fiber volume fraction, 0.1% and 0.25%. Some of the main conclusions are: (a) the use of Recron 3s fibers leads to a slightly better overall performance than that with no fibers; (b) an increase in equivalent stress is observed when 0.1% and 0.25% Recron fibers are used; (c) when the 25.4 x 50.8 mm steel mesh is used, no noticeable change in behavior is observed in comparison to specimens without fibers; and (d) for no fibers and for 0.1% Recron fibers, the transverse wire spacing has some small effect on the equivalent stress; for Recron fibers, the transverse wires have no influence, but the equivalent stresses are increased.
Keywords: cement mortar, crushed stone dust, fibre, steel mesh
Procedia PDF Downloads 312
744 Investigation of the Physicochemistry in Leaching of Blackmass for the Recovery of Metals from Spent Lithium-Ion Battery
Authors: Alexandre Chagnes
Abstract:
Lithium-ion batteries are the technology of choice in the development of electric vehicles. This technology is now mature, although there are still many challenges in increasing energy density while ensuring impeccable safety of use. For this goal, it is necessary to develop new cathodic materials that can be cycled at higher voltages, and electrolytes compatible with these materials. But the challenge does not only concern the production of efficient batteries for the electrochemical storage of energy, since lithium-ion battery technology relies on the use of resources of critical and/or strategic value. It is, therefore, crucial to include lithium-ion battery development in a circular economy approach very early. In particular, optimized recycling and reuse of battery components must both minimize their impact on the environment and limit geopolitical issues related to tensions on the mineral resources necessary for lithium-ion battery production. Although recycling will never replace mining, it reduces resource dependence by ensuring the presence of exploitable resources in the territory, which is particularly important for countries like France, where exploited or exploitable resources are limited. This conference contribution addresses the development of a new hydrometallurgical process combining leaching of cathodic material from spent lithium-ion batteries in acidic chloride media with a solvent extraction process. Most recycling processes reported in the literature rely on the sulphate route, and few studies investigate the potential of the chloride route, despite its many advantages and the possibility of developing new chemistry, which could make metal separation easier. The leaching mechanisms and the solvent extraction equilibria will be presented in this conference. Based on the comprehension of the physicochemistry of leaching and solvent extraction, the present study will introduce a new hydrometallurgical process for the production of cobalt, nickel, manganese, and lithium from spent cathodic materials.
Keywords: lithium-ion battery, recycling, hydrometallurgy, leaching, solvent extraction
Procedia PDF Downloads 80
743 Duplex Real-Time Loop-Mediated Isothermal Amplification Assay for Simultaneous Detection of Beef and Pork
Authors: Mi-Ju Kim, Hae-Yeong Kim
Abstract:
Product mislabeling and adulteration have raised increasing concerns about processed meat products. Relatively inexpensive pork has been used to adulterate more expensive meat such as beef for economic benefit, and such pork-related food fraud incidents are of concern for economic, religious, and health reasons. In this study, a rapid on-site detection method using loop-mediated isothermal amplification (LAMP) was developed for the simultaneous identification of beef and pork. Specific LAMP primers for beef and pork were each designed targeting the mitochondrial D-loop region. The LAMP assay reaction was performed at 65 ℃ for 40 min. The specificity of each primer set for beef and pork was evaluated using DNAs extracted from 13 animal species, including beef and pork. The sensitivity of the duplex LAMP assay was examined by serial dilution of beef and pork DNAs and by reference binary mixtures. The assay was applied to processed meat products containing beef and pork for monitoring. Each set of primers amplified only the targeted species, with no cross-reactivity with the other animal species. The limit of detection of the duplex real-time LAMP was 1 pg of DNA for each of beef and pork, and 1% pork in a beef meat mixture. Commercial meat products that declared the presence of beef and/or pork on the label showed positive results for those species. This method was successfully applied to detect beef and pork simultaneously in processed meat products. The optimized duplex LAMP assay can identify beef and pork simultaneously within less than 40 min, and the portable real-time fluorescence device used in this study is applicable for on-site detection of beef and pork in processed meat products. Thus, the developed assay is considered an efficient tool for monitoring meat products.
Keywords: beef, duplex real-time LAMP, meat identification, pork
Procedia PDF Downloads 224
742 Dietary Risk Assessment of Green Leafy Vegetables (GLV) Due to Heavy Metals from Selected Mining Areas
Authors: Simon Mensah Ofosu
Abstract:
Illicit surface mining activities pollute agricultural lands and water bodies and result in the accumulation of heavy metals in vegetables cultivated in such areas. Heavy metal (HM) accumulation in vegetables is a serious food safety issue due to the adverse effects of metal toxicities, hence the need to investigate the levels of these metals in vegetables cultivated in the eastern region. Cocoyam leaves, cabbage, and cucumber were sampled from selected farms in mining areas (Atiwa District) and non-mining areas (Yilo Krobo and East Akim Districts) of the region for the study. Levels of cadmium, lead, mercury, and arsenic in the vegetables were investigated with an atomic absorption spectrometer, and the results were statistically analyzed with Microsoft Office Excel (2013) spreadsheets and ANOVA. Cadmium (Cd) and arsenic (As) were the highest and least concentrated HMs in the vegetables sampled, respectively. The mean concentrations of Cd and Pb in cabbage (0.564 mg/kg, 0.470 mg/kg), cucumber (0.389 mg/kg, 0.190 mg/kg), and cocoyam leaves (0.410 mg/kg, 0.256 mg/kg), respectively, from the mining areas exceeded the permissible limits set by the Joint FAO/WHO. The mean concentrations of the metals in vegetables from the mining and non-mining areas varied significantly (P<0.05). The Target Hazard Quotient (THQ) was used to assess the health risk posed to the human population via vegetable consumption. The THQ values of cadmium, mercury, and lead for adults and children through vegetable consumption in the mining areas were greater than 1 (THQ > 1), indicating the potential health risk that children and adults may face. The THQ values for adults and children in the non-mining areas were less than the safe limit of 1 (THQ < 1); hence, no significant health risk is posed to the population in such areas.
Keywords: food safety, risk assessment, illicit mining, public health, contaminated vegetables
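The THQ arithmetic behind these statements follows the usual USEPA chronic-intake form, THQ = EDI / RfD. In the sketch below, the intake rate, body weight, and reference dose are common literature defaults (assumed, not taken from this abstract), applied to the reported Cd level in cabbage.

```python
def thq(conc_mg_kg, intake_kg_day, body_wt_kg, rfd_mg_kg_day,
        ef_days_yr=365, ed_years=30):
    """Target hazard quotient: estimated daily intake over the oral
    reference dose. AT is taken as ED * 365 days for non-carcinogens."""
    at_days = ed_years * 365
    edi = (conc_mg_kg * intake_kg_day * ef_days_yr * ed_years) / (body_wt_kg * at_days)
    return edi / rfd_mg_kg_day

# Cd in cabbage from the mining area; 0.345 kg/day intake, 60 kg adult,
# and the USEPA oral RfD for Cd of 1e-3 mg/kg/day are assumed defaults.
print(thq(conc_mg_kg=0.564, intake_kg_day=0.345, body_wt_kg=60.0,
          rfd_mg_kg_day=1e-3))   # > 1, consistent with the reported risk
```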
Procedia PDF Downloads 91