Search results for: percolation threshold
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 787

277 Anomaly Detection in Financial Markets Using Tucker Decomposition

Authors: Salma Krafessi

Abstract:

Financial markets form a multifaceted, intricate environment in which enormous volumes of data are produced every day. Accurate anomaly detection in this data is essential for finding investment opportunities, possible fraudulent activity, and market oddities. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. To improve the identification of abnormalities in financial time series data, this study presents Tucker Decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across several decades. The information is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding-window structure. The tensor is then broken down using Tucker Decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker Decomposition serves as a possible sign of abnormalities: by setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough examination that contrasts the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker Decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
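To make the pipeline concrete, here is a minimal sketch of the windowed-tensor, reconstruction-error idea, assuming the tensorly library; the window size, Tucker ranks, and 3-sigma cutoff are illustrative choices rather than the settings used in the study.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def tucker_anomaly_scores(prices, window=30, rank=(5, 5, 2)):
    # Build a 3-way tensor of shape (windows, time-in-window, features)
    returns = np.diff(np.log(prices), axis=0)
    n = len(returns) - window + 1
    tensor = tl.tensor(np.stack([returns[i:i + window] for i in range(n)]))
    core, factors = tucker(tensor, rank=rank)        # low-rank multi-way fit
    recon = tl.tucker_to_tensor((core, factors))
    # Per-window reconstruction error serves as the anomaly score
    return np.linalg.norm((tensor - recon).reshape(n, -1), axis=1)

scores = tucker_anomaly_scores(np.random.lognormal(size=(500, 4)))
flagged = scores > scores.mean() + 3 * scores.std()  # statistical threshold
```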

Keywords: tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models

Procedia PDF Downloads 47
276 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method

Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat

Abstract:

Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, texture, etc. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical method of bridging the semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. First, images are segmented using the maximum variance intra-cluster criterion and the Firefly algorithm, a swarm-based approach with high convergence speed and low computational cost that searches for optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and low complexity. Experiments are performed using the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
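The segmentation step can be illustrated with a firefly search over multilevel thresholds. In this sketch the objective is Otsu's between-class variance on a grey-level histogram, and all parameters are our assumptions rather than the paper's exact formulation.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    # Otsu-style objective for multilevel thresholding of a 256-bin histogram
    bins = np.arange(len(hist))
    edges = [0, *sorted(int(t) for t in thresholds), len(hist)]
    total_mean = (hist * bins).sum() / hist.sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum() / hist.sum()
        if w > 0:
            mu = (hist[lo:hi] * bins[lo:hi]).sum() / hist[lo:hi].sum()
            var += w * (mu - total_mean) ** 2
    return var

def firefly_thresholds(hist, n_thresh=2, n_fireflies=20, iters=100,
                       beta0=1.0, gamma=0.01, alpha=2.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(1, 254, size=(n_fireflies, n_thresh))
    light = np.array([between_class_variance(hist, p) for p in pos])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:          # move firefly i toward brighter j
                    r2 = ((pos[i] - pos[j]) ** 2).sum()
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) \
                              + alpha * rng.uniform(-0.5, 0.5, n_thresh)
                    pos[i] = np.clip(pos[i], 1, 254)
                    light[i] = between_class_variance(hist, pos[i])
    return np.sort(pos[np.argmax(light)])

img = np.random.default_rng(0).integers(0, 256, (64, 64))
hist = np.histogram(img.ravel(), bins=256, range=(0, 256))[0]
print(firefly_thresholds(hist))
```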

Keywords: feature extraction, feature selection, image annotation, classification

Procedia PDF Downloads 574
275 Role of Surfactant Protein D (SP-D) as a Biomarker of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) Infection

Authors: Lucia Salvioni, Pietro Giorgio Lovaglio, Valerio Leoni, Miriam Colombo, Luisa Fiandra

Abstract:

The involvement of plasmatic surfactant protein-D (SP-D) in pulmonary diseases has long been investigated, and over the last two years more interest has been directed toward determining its role as a marker of COVID-19. In this direction, several studies aiming to correlate pulmonary surfactant proteins with the clinical manifestations of the virus indicated SP-D as a prognostic biomarker of COVID-19 pneumonia severity. The present work performed a retrospective study on a relatively large cohort of patients of Hospital Pio XI of Desio (Lombardia, Italy), with the aim of assessing differences in hematic SP-D concentrations between COVID-19 patients and healthy donors, and the role of SP-D as a prognostic marker of severity and/or of mortality risk. The results showed a significant difference in mean log SP-D levels between COVID-19 patients and healthy donors, as well as between deceased and surviving patients. SP-D values were significantly higher for both hospitalized COVID-19 patients and deceased patients, with threshold values of 150 and 250 ng/mL, respectively. SP-D levels at admission, and increasing differences between follow-up and admission values, proved to be the strongest significant risk factors for mortality. Therefore, this study demonstrated the role of SP-D as a predictive marker of SARS-CoV-2 infection and its outcome. A significant correlation of SP-D with patient mortality indicated that it is also a prognostic factor in terms of mortality, and its early detection should be considered in designing adequate preventive treatments for COVID-19 patients.

Keywords: SARS-CoV-2 infection, COVID-19, surfactant protein-D (SP-D), mortality, biomarker

Procedia PDF Downloads 63
274 Modelling the Yield Stress of Magnetorheological Fluids

Authors: Hesam Khajehsaeid, Naeimeh Alagheband

Abstract:

Magnetorheological fluids (MRF) are a category of smart materials. They exhibit a reversible change from a Newtonian-like fluid to a semi-solid state upon application of an external magnetic field. In contrast to ordinary fluids, MRFs can tolerate shear stresses up to a threshold value called the yield stress, which strongly depends on the strength of the magnetic field, the magnetic particle volume fraction, and temperature. Even beyond the yield point, a magnetic field can increase MR fluid viscosity by several orders of magnitude. As yield stress is an important parameter in the design of MR devices, this work investigates the effects of magnetic field intensity and magnetic particle concentration on the yield stress of MRFs. Four MRF samples with different particle concentrations are developed and tested through flow-ramp analysis to obtain the flow curves over a range of magnetic field intensities and shear rates. The viscosity of the fluids is determined from the flow curves. The yield stresses are determined by means of the steady stress sweep method and then modeled using a modified form of the dipole model as well as empirical models. The exponential distribution function is used to describe the orientation of particle chains in the dipole model under the action of the external magnetic field. Moreover, the modified dipole model results in a reasonable distribution of chains compared to previous similar models.
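For the flow-curve step, a Bingham-plastic fit is a common way to read the dynamic yield stress off measured data; the sketch below uses made-up numbers and stands in for, rather than reproduces, the steady stress sweep analysis and the modified dipole model.

```python
import numpy as np

# Hypothetical flow-curve data at one magnetic field intensity
gamma_dot = np.array([10., 20., 50., 100., 200.])    # shear rate, 1/s
tau = np.array([5.2e3, 5.5e3, 6.1e3, 7.0e3, 8.9e3])  # shear stress, Pa

# Bingham plastic: tau = tau_y + eta * gamma_dot
# slope = plastic viscosity, intercept = yield stress
eta, tau_y = np.polyfit(gamma_dot, tau, 1)
print(f"yield stress ~ {tau_y:.0f} Pa, plastic viscosity ~ {eta:.1f} Pa.s")
```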

Keywords: magnetorheological fluids, yield stress, particle concentration, dipole model

Procedia PDF Downloads 168
273 Effect of Fat Percentage and Prebiotic Composition on Proteolysis, ACE-Inhibitory and Antioxidant Activity of Probiotic Yogurt

Authors: Mohammad B. HabibiNajafi, Saeideh Sadat Fatemizadeh, Maryam Tavakoli

Abstract:

In recent years, the consumption of functional foods, including foods containing probiotic bacteria, has attracted attention. Milk proteins have been identified as a source of angiotensin-I-converting enzyme (ACE) inhibitory peptides and are currently the best-known class of bioactive peptides. In this study, the effects of adding prebiotic ingredients (inulin and wheat fiber) and of fat percentage (0%, 2% and 3.5%) in yogurt containing probiotic Lactobacillus casei on physicochemical properties, degree of proteolysis, antioxidant and ACE-inhibitory activity within 21 days of storage at 5 ± 1 °C were evaluated. The results of statistical analysis showed that the application of prebiotic compounds led to a significant increase in the water holding capacity, proteolysis and ACE-inhibitory activity of the samples. The degree of proteolysis in yogurt increases as storage time elapses (P < 0.05), but when proteolysis exceeds a certain threshold, this trend begins to decline. Also, during storage, the water holding capacity reduced initially but increased thereafter. Moreover, based on our findings, the survival of Lactobacillus casei in samples treated with inulin and wheat fiber increased significantly in comparison to the control sample (P < 0.05), whereas the effect of fat percentage on the survival of probiotic bacteria was not significant (P = 0.095). Furthermore, the effect of prebiotic ingredients and the presence of probiotic cultures on the antioxidant activity of samples was significant (P < 0.05).

Keywords: probiotic yogurt, proteolysis, ACE-inhibitory, antioxidant activity

Procedia PDF Downloads 237
272 Time-Parameter-Based Detection of Catastrophic Faults in Analog Circuits

Authors: Arabi Abderrazak, Bourouba Nacerdine, Ayad Mouloud, Belaout Abdeslam

Abstract:

In this paper, a new test technique for analog circuits using time-mode simulation is proposed for the detection of single catastrophic faults in analog circuits. This test process is performed to overcome the problem of catastrophic faults escaping detection in the DC mode test applied to the inverter amplifier in previous research works. The circuit under test is a second-order low-pass filter constructed around this type of amplifier but performing a function that differs from that of the previous test. The test approach performed in this work is based on two key elements. The first is a single square pulse signal selected as the input test vector to stimulate the fault effect at the circuit output response. The second is the conversion of the filter response to a sequence of square pulses obtained from an analog comparator; this signal conversion is achieved through a fixed reference threshold voltage of the comparison circuit. The measured durations of the first three response pulses serve on the one hand as a fault detection parameter and on the other as a fault signature that helps to fully establish an analog circuit fault diagnosis. The results obtained so far are very promising, since the approach has lifted the fault coverage ratio in both modes to over 90% and has revealed the harmful side of faults that had been masked in the DC mode test.
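The conversion-and-measurement idea can be sketched as follows; the threshold, sampling step, and toy response below are placeholders rather than the paper's test setup.

```python
import numpy as np

def first_pulse_durations(response, v_th, dt, n_pulses=3):
    # Convert the response to a square-pulse sequence with a fixed comparator
    # threshold, then measure the widths of the first n pulses.
    binary = (response > v_th).astype(int)
    edges = np.flatnonzero(np.diff(binary))      # indices of level changes
    rising = edges[binary[edges + 1] == 1]
    falling = edges[binary[edges + 1] == 0]
    falling = falling[falling > rising[0]]       # drop any leading falling edge
    n = min(n_pulses, len(rising), len(falling))
    return (falling[:n] - rising[:n]) * dt       # pulse widths in seconds

t = np.arange(0, 5e-3, 1e-6)
resp = np.sin(2 * np.pi * 1e3 * t) * np.exp(-t / 2e-3)  # toy filter response
print(first_pulse_durations(resp, v_th=0.1, dt=1e-6))
```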

Keywords: analog circuits, analog faults diagnosis, catastrophic faults, fault detection

Procedia PDF Downloads 430
271 Vertically Coupled III-V/Silicon Single Mode Laser with a Hybrid Grating Structure

Authors: Zekun Lin, Xun Li

Abstract:

Silicon photonics has gained much interest and extensive research as a promising approach to fabricating compact, high-speed and low-cost photonic devices compatible with the complementary metal-oxide-semiconductor (CMOS) process. Despite the remarkable progress made in the development of silicon photonics, high-performance, cost-effective, and reliable silicon laser sources are still missing. In this work, we present a 1550 nm III-V/silicon laser design with stable single-mode lasing and robust, high-efficiency vertical coupling. The InP cavity consists of two uniform Bragg grating sections at the sides for mode selection and feedback, as well as a central second-order grating for surface emission. A grating coupler is etched on the SOI waveguide, through which light couples vertically between the parallel III-V and SOI waveguides rather than by evanescent-wave coupling. The laser characteristics are simulated and optimized by the traveling-wave model (TWM) with a Green's function analysis, and by a 2D finite difference time domain (FDTD) method for the coupling process. The simulation results show that single-mode lasing with an SMSR better than 48 dB is achievable, and the threshold current is less than 15 mA with a slope efficiency of around 0.13 W/A. The coupling efficiency is larger than 42% and has a high tolerance, with less than 10% reduction for 10 µm horizontal or 15 µm vertical misalignment. The design can be realized by standard flip-chip bonding techniques without co-fabrication of III-V and silicon or precise alignment.

Keywords: III-V/silicon integration, silicon photonics, single mode laser, vertical coupling

Procedia PDF Downloads 139
270 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-condition data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready to take data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltage for each photomultiplier tube (PMT), the threshold levels to accept or reject incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0 and have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system integrated into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 296
269 In- and Out-Of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries in relation to the US dollar. Using a series of unrestricted Self-Exciting Threshold Autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated by calculating mean absolute error measures on the basis of 253-month rolling-window forecasts, and the comparison is extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive non-linear autoregressive model (AAR) and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR non-linearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have quite poor relative forecasting performance, especially when compared to alternative non-linear specifications. Finally, by analyzing the implied half-lives of the autoregressive coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with inner unit-root behaviour in five of the sixteen countries analyzed.
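A minimal two-regime SETAR(1) fit by grid search over the threshold conveys the core of these models; this is an illustrative reduction, and the paper's unrestricted SETAR specifications are richer.

```python
import numpy as np

def fit_setar1(y, trim=0.15):
    # Two-regime SETAR(1):
    # y_t = a1 + b1*y_{t-1} if y_{t-1} <= c, else a2 + b2*y_{t-1}
    x, z = y[1:], y[:-1]
    best = None
    candidates = np.sort(z)[int(trim * len(z)): int((1 - trim) * len(z))]
    for c in candidates:
        sse, params = 0.0, []
        for mask in (z <= c, z > c):
            X = np.column_stack([np.ones(mask.sum()), z[mask]])
            beta, *_ = np.linalg.lstsq(X, x[mask], rcond=None)
            sse += ((x[mask] - X @ beta) ** 2).sum()
            params.append(beta)
        if best is None or sse < best[0]:
            best = (sse, c, params)
    return best  # (SSE, threshold c, [(a1, b1), (a2, b2)])
```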

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 269
268 A Picture Is Worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever-increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction-limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has recently been embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high-dynamic-range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data-fitting term with a sparse synthesis prior. We also show an efficient, hardware-friendly, real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
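For unit-threshold jots the ML data-fitting term has a simple per-pixel closed form, which the proposed algorithm regularizes with a sparse synthesis prior; the sketch below shows only the closed-form ML part.

```python
import numpy as np

def ml_intensity(binary_frames):
    # binary_frames: (K, H, W) one-bit exposures of the same scene.
    # For threshold-1 jots, P(pixel fires) = 1 - exp(-lambda), so the
    # per-pixel ML estimate is lambda_hat = -log(1 - mean firing rate).
    p = binary_frames.mean(axis=0)
    p = np.clip(p, 0.0, 1 - 1e-6)   # avoid log(0) when a pixel always fires
    return -np.log1p(-p)

frames = np.random.default_rng(0).random((64, 32, 32)) < 0.4  # toy data
lam = ml_intensity(frames)
```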

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 187
267 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the normal, exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses Fisher scoring as the iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
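Fisher scoring iterates beta <- beta + I(beta)^{-1} * score, where I is the Fisher information. The sketch below applies it to a Poisson log-link regression as a stand-in, since the paper's composite-distribution likelihood (implemented in Mathematica) is not reproduced here.

```python
import numpy as np

def fisher_scoring_poisson(X, y, iters=25, tol=1e-8):
    # For the Poisson log-link model, Fisher scoring reduces to
    # beta <- beta + (X' W X)^{-1} X' (y - mu), with W = diag(mu).
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        step = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
        beta += step
        if np.abs(step).max() < tol:
            break
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(np.exp(X @ np.array([0.5, 1.0])))
print(fisher_scoring_poisson(X, y))   # approximately [0.5, 1.0]
```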

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 4
266 Bayesian Inference of Physicochemical Quality Elements of Tropical Lagoon Nokoué (Benin)

Authors: Hounyèmè Romuald, Maxime Logez, Mama Daouda, Argillier Christine

Abstract:

In view of the very strong degradation of aquatic ecosystems, it is urgent to set up monitoring systems that are best able to report on the effects of the stresses they undergo. This is particularly true in developing countries, where specific and relevant quality standards and funding for monitoring programs are lacking. The objective of this study was to make a relevant and objective choice of physicochemical parameters informative of the main stressors occurring on African lakes and to identify their alteration thresholds. Based on statistical analyses of the relationship between several driving forces and the physicochemical parameters of the Nokoué lagoon, relevant physicochemical parameters were selected for its monitoring. An innovative method based on Bayesian statistical modeling was used. Eleven physicochemical parameters were selected for their response to at least one stressor, and their threshold quality standards were also established: total phosphorus (< 4.5 mg/L), orthophosphates (< 0.2 mg/L), nitrates (< 0.5 mg/L), TKN (< 1.85 mg/L), dry organic matter (< 5 mg/L), dissolved oxygen (> 4 mg/L), BOD (< 11.6 mg/L), salinity (7.6 ‰), water temperature (< 28.7 °C), pH (> 6.2), and transparency (> 0.9 m). According to the System for the Evaluation of Coastal Water Quality, these thresholds correspond to "good to medium" suitability classes, except for total phosphorus. One of the original features of this study is the use of the bounds of the credible interval of the fixed-effect coefficients as local alteration standards for the characterization of the physicochemical status of this anthropized African ecosystem.

Keywords: driving forces, alteration thresholds, acadjas, monitoring, modeling, human activities

Procedia PDF Downloads 79
265 Establishment of a Landslide Warning System Using Surface or Sub-Surface Sensor Data

Authors: Neetu Tyagi, Sumit Sharma

Abstract:

This study illustrates the results of an integrated study of the Tangni landslide, located on NH-58 at Chamoli, Uttarakhand. Geological, geomorphological and geotechnical investigations were carried out to understand the mechanism of the landslide and to plan further investigation and monitoring. The movements were favored by continuous infiltration of rainfall water from the zones where the phyllites/slates and dolomites outcrop. The site investigations, including the monitoring of landslide movements and of water level fluctuations due to rainfall, give a better understanding of the landslide dynamics that have been causing soil instability at the Tangni landslide site. For the Early Warning System (EWS), different types of sensors were installed, all connected directly to a data logger, with the raw data transferred to the Defence Terrain Research Laboratory (DTRL) server room via the File Transfer Protocol (FTP). The slip surfaces were found at depths ranging from 8 to 10 m by geophysical survey, and sensors were therefore installed to a depth of 15 m at various locations on the landslide. Rainfall is the main triggering factor of the landslide, so a model of unsaturated soil slope stability was developed. The analysis of one year of sensor data indicated a sliding surface at depths between 6 and 12 m, with total displacement of up to 6 cm per year recorded in the body of the landslide. The aim of this study is to set the thresholds and generate early warnings. Local people are already alert to the landslide and would benefit from any type of warning system.
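The threshold-and-alert logic at the core of such an EWS can be as simple as the following sketch; the velocity and rainfall thresholds are placeholders for the site-specific values calibrated from the monitoring data.

```python
import numpy as np

def check_warning(displacement_mm, rainfall_mm, dt_days=1.0,
                  vel_thresh=0.5, rain_thresh=60.0):
    # Warn when the latest displacement rate (mm/day) or the latest daily
    # rainfall (mm) exceeds its threshold; both thresholds are illustrative.
    velocity = np.diff(displacement_mm) / dt_days
    if velocity[-1] > vel_thresh or rainfall_mm[-1] > rain_thresh:
        return "WARNING"
    return "OK"

disp = np.array([10.0, 10.2, 10.3, 11.1])   # cumulative displacement, mm
rain = np.array([5.0, 12.0, 0.0, 70.0])     # daily rainfall, mm
print(check_warning(disp, rain))            # -> WARNING
```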

Keywords: early warning system, file transfer protocol, geo-morphological, geotechnical, landslide

Procedia PDF Downloads 144
264 Developing a Spatial Decision Support System for Rationality Assessment of Land Use Planning Locations in Thai Binh Province, Vietnam

Authors: Xuan Linh Nguyen, Tien Yin Chou, Yao Min Fang, Feng Cheng Lin, Thanh Van Hoang, Yin Min Huang

Abstract:

In Vietnam, land use planning is the most important and powerful tool of the government for sustainable land use and land management. Nevertheless, many land use planning locations face protests from surrounding households due to environmental impacts. In addition, locations are planned entirely on the basis of the subjective decisions of planners, unsupported by tools or scientific methods. Hence, this research aims to assist decision-makers in evaluating the rationality of planning locations by developing a Spatial Decision Support System (SDSS) using Geographic Information System (GIS)-based technology, the Analytic Hierarchy Process (AHP) multi-criteria technique, and fuzzy set theory. An ArcGIS Desktop add-in named SDSS-LUPA was developed to support users in analyzing data and presenting results in a friendly format. The Fuzzy-AHP method was utilized as the analytic model for this SDSS. Eighteen planned locations in Hung Ha district (Thai Binh province, Vietnam) served as a case study. The experimental results, against an assessment threshold of 0.65, indicated that the 18 planned locations were irrational because they were close to residential areas or to water sources. Some potential sites were also proposed to the authorities for consideration of land use planning changes.
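The AHP part of such a system reduces to taking the principal eigenvector of each pairwise-comparison matrix and checking its consistency; a minimal sketch follows, with Saaty's random-index table hard-coded and a hypothetical criteria matrix.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}   # Saaty's random consistency index

def ahp_weights(A):
    # A: reciprocal pairwise-comparison matrix on the Saaty 1-9 scale.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # priority vector (weights)
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    cr = ci / RI[n]                           # consistency ratio; < 0.1 is acceptable
    return w, cr

A = np.array([[1, 3, 5],
              [1/3, 1, 2],
              [1/5, 1/2, 1]])                 # hypothetical criteria matrix
w, cr = ahp_weights(A)
print(w, cr)
```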

Keywords: analytic hierarchy process, fuzzy set theory, land use planning, spatial decision support system

Procedia PDF Downloads 363
263 Assessment of Water Availability and Quality in the Climate Change Context in Urban Areas

Authors: Rose-Michelle Smith, Musandji Fuamba, Salomon Salumu

Abstract:

Water is vital for life. Access to drinking water and sanitation for humans is one of the Sustainable Development Goals (specifically the sixth) approved by United Nations Member States in September 2015. Various problems relating to water have been identified: insufficient fresh water, inequitable distribution of water resources, poor water management in certain places on the planet, water-borne diseases due to poor water quality, and the negative impacts of climate change on water. One of the major challenges in the world is finding ways to ensure that people and the environment have enough water resources to sustain and support their existence. Thus, this research project aims to develop a tool to assess the availability, quality and needs of water in current and future situations with regard to climate change. This tool was tested using threshold values for three regions in three countries: the Metropolitan Community of Montreal (Canada), the Normandie Region (France) and the North Department (Haiti). The WEAP software was used to evaluate the available quantity of water resources. For water quality, two models were applied: the Canadian Council of Ministers of the Environment (CCME) index and the Malaysian Water Quality Index (WQI). Preliminary results showed that the water needs could be estimated at 155, 308 and 644 m³ per capita in 2023 for Normandie, Cap-Haitien and the CMM, respectively. The Water Quality Index varied from one country to another. Other simulations regarding water availability and quality are still in progress. This tool will be very useful in decision-making on projects relating to water use in the future; it will make it possible to estimate whether the available resources will be able to satisfy the needs.

Keywords: climate change, water needs, balance sheet, water quality

Procedia PDF Downloads 57
262 An Inspection of the Two-Layer Model of Agency: An fMRI Study

Authors: Keyvan Kashkouli Nejad, Motoaki Sugiura, Atsushi Sato, Takayuki Nozawa, Hyeonjeong Jeong, Sugiko Hanawa, Yuka Kotozaki, Ryuta Kawashima

Abstract:

The perception of agency/control can be altered by discrepancies in the environment or by a mismatch between predictions (of possible results) and actual results. Synofzik et al. proposed a two-layer model of agency: in the first layer, the Feeling of Agency (FoA) is not directly available to awareness; a slight mismatch in the environment/outcome might cause alterations in FoA while the agent still feels in control. If the discrepancy passes a threshold, it becomes available to consciousness and alters the Judgment of Agency (JoA), which is directly available in the person's awareness. Most experiments so far only investigate the subjects' rather conscious JoA, while FoA has been neglected. In this experiment we target FoA by using subliminal discrepancies that cannot be consciously detected by the subjects. We explore whether we can detect this two-layer model in the subjects' behavior and then try to map it onto their brain activity. To do this, in an fMRI study, we incorporated both consciously detectable mismatches between action and result and subliminal discrepancies in the environment. Also, unlike previous experiments, where subjective questions to the participants mainly trigger the rather conscious JoA, we tried to measure the rather implicit FoA by asking participants to rate their performance. We compared behavioral results and brain activation between trials with conscious discrepancies, trials with subliminal discrepancies, and trials with no discrepancies. In line with our expectations, conditions with consciously detectable incongruencies triggered lower JoA ratings than conditions without. Also, conditions with any type of discrepancy had lower FoA ratings compared to conditions without. Additionally, we found that the TPJ, and the angular gyrus in particular, play a role in the coding of JoA and FoA.

Keywords: agency, fMRI, TPJ, two layer model

Procedia PDF Downloads 461
261 An Academic Theory on a Sustainable Evaluation of Achatina fulica Within eThekwini, KwaZulu-Natal

Authors: Sibusiso Trevor Tshabalala, Samuel Lubbe, Vince Vuledzani Ndou

Abstract:

Dependency on chemicals has had many disadvantages in pest management control strategies; genetic rodenticide resistance and secondary exposure risks are what is currently being experienced. Emphasis on integrated pest management suggests that, to control future pests, early intervention and economic threshold development are key starting points in crop production. The significance of this research project is to help establish a relationship between Giant African Land Snail (Achatina fulica) solution extract, its shell chemical properties, and farmers' perceptions of biological control in eThekwini Municipality Agri-hubs. A mixed-design approach to collecting data will be explored, using a trial layout in the field and interviews. The experimental area will use a split-plot design, replicated and arranged in a randomised complete block design. The split plots will have 0, 10, 20 and 30 liters of water to one liter of snail solution extract. Plots were 50 m² each, with a spacing of 12 m between plots and a plant spacing of 0.5 m (inter-row) and 0.5 m (intra-row). Trials will be irrigated using sprinkler irrigation, with the treatment for objective two being added to the mix every 4-5 days. The expected outcome is improved soil fertility and proliferation of the micro-organism population.

Keywords: giant african land snail, integrated pest management, photosynthesis, genetic rodenticide resistance, control future pests, shell chemical properties

Procedia PDF Downloads 90
260 Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation

Authors: Bahram Khan, Anderson Rocha Ramos, Rui R. Paulo, Fernando J. Velez

Abstract:

With the growing demand for a new blend of applications, users' dependency on the internet is increasing day by day. Mobile internet users are paying more attention to their own experience, especially in terms of communication reliability, high data rates and service stability on the move. This increase in demand is causing saturation of the existing radio frequency bands. To address these challenges, researchers are investigating the best approaches; Carrier Aggregation (CA) is one of the newest innovations that seems able to fulfill the demands of the future spectrum, and it is also one of the most important features of Long Term Evolution - Advanced (LTE-Advanced). To meet the upcoming International Mobile Telecommunication Advanced (IMT-Advanced) requirements (a 1 Gb/s peak data rate), the CA scheme was presented by 3GPP to sustain high data rates using widespread frequency bandwidths of up to 100 MHz. Technical issues such as the aggregation structure, its implementation, deployment scenarios, control signaling techniques, and challenges for the CA technique in LTE-Advanced, with consideration of backward compatibility, are highlighted in this paper. Performance evaluation in macro-cellular scenarios through a simulation approach is also presented, which shows the benefits of applying CA and low-complexity multi-band schedulers to service quality and system capacity enhancement, and concludes that the enhanced multi-band scheduler is less complex than the general multi-band scheduler and performs better for cell radii longer than 1800 m (and a PLR threshold of 2%).

Keywords: component carrier, carrier aggregation, LTE-advanced, scheduling

Procedia PDF Downloads 180
259 Localization of Frontal and Temporal Speech Areas in Brain Tumor Patients by Their Structural Connections with Probabilistic Tractography

Authors: B. Shukir, H. Woo, P. Barzo, D. Kis

Abstract:

Preoperative brain mapping in tumors involving the speech areas has an important role in reducing surgical risks. Functional magnetic resonance imaging (fMRI) is the gold-standard method to localize cortical speech areas preoperatively, but its availability in clinical routine is limited. Diffusion-MRI-based probabilistic tractography is available in head MRI and is used to segment cortical subregions by their structural connectivity. In our study, we used probabilistic tractography to localize the frontal and temporal cortical speech areas. Fifteen patients with left frontal tumors were enrolled in our study. Speech fMRI and diffusion MRI were acquired preoperatively. The standard automated anatomical labelling atlas 3 (AAL3) was used to define 76 left frontal and 118 left temporal potential speech areas. Four types of tractography were run according to the structural connection of these regions to the left arcuate fascicle (FA) to localize the cortical areas that have speech functions: 1, frontal through FA; 2, frontal with FA; 3, temporal to FA; 4, temporal with FA connections were determined. Thresholds of 1%, 5%, 10% and 15% were applied. At each level, the number of frontal and temporal regions identified by fMRI and by tractography was determined, and the sensitivity and specificity were calculated. The 1% threshold showed the best results: sensitivity was 61.6 ± 31.4% and 67.15 ± 23.12%, and specificity was 87.2 ± 10.4% and 75.6 ± 11.37%, for frontal and temporal regions, respectively. From our study, we conclude that probabilistic tractography is a reliable preoperative technique to localize cortical speech areas. However, its results are not yet dependable enough for the neurosurgeon to rely on during the operation.
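The reported agreement measures are region-level sensitivity and specificity against the fMRI labels; a minimal sketch over boolean vectors indexed by AAL3 region:

```python
import numpy as np

def sens_spec(tract_mask, fmri_mask):
    # tract_mask, fmri_mask: boolean vectors over the candidate AAL3 regions,
    # True where the method labels the region as a speech area.
    tp = np.sum(tract_mask & fmri_mask)
    tn = np.sum(~tract_mask & ~fmri_mask)
    fp = np.sum(tract_mask & ~fmri_mask)
    fn = np.sum(~tract_mask & fmri_mask)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(0)
fmri = rng.random(76) < 0.2          # toy fMRI labels for 76 frontal regions
tract = fmri ^ (rng.random(76) < 0.1)  # tractography labels with some disagreement
print(sens_spec(tract, fmri))
```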

Keywords: brain mapping, brain tumor, fMRI, probabilistic tractography

Procedia PDF Downloads 146
258 Heat Waves and Hospital Admissions for Mental Disorders in Hanoi Vietnam

Authors: Phan Minh Trang, Joacim Rocklöv, Kim Bao Giang, Gunnar Kullgren, Maria Nilsson

Abstract:

Recent studies from high-income countries report an association between heat waves and hospital admissions for mental disorders. Whether such relations exist in sub-tropical and tropical low- and middle-income countries has not previously been studied. In this study from Vietnam, the assumption was that hospital admissions for mental disorders may be triggered, or exacerbated, by heat exposure and heat waves. A database from Hanoi Mental Hospital, with mental disorders diagnosed by the International Classification of Diseases 10 and spanning five years, was used to estimate heatwave-related impacts on admissions for mental disorders. The relationship was analysed with a negative binomial regression model accounting for year, month, and day of the week. The focus of the study was heat-wave events of three or seven consecutive days above a daily maximum temperature threshold of 35 °C. The preliminary results indicated that heat waves increased the risk of hospital admission for mental disorders (F00-79), with relative risks (RRs) of 1.16 (1.01-1.33) and 1.42 (1.02-1.99) for heat waves of three and seven days, respectively, when compared with non-heat-wave periods. Heatwave-related admissions for mental disorders increased statistically significantly among men, among residents of rural communities, and in the elderly. Moreover, the risks for organic mental disorders, including symptomatic illnesses (F0-9), and for mental retardation (F70-79) were elevated during heat waves. The findings are novel for a sub-tropical middle-income city facing rapid urbanisation and epidemiological and demographic transitions.
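A hedged sketch of the modelling step with statsmodels, on synthetic data; the real model also adjusted for year, month, and day of week, and the heat-wave flag below (runs of hot days) is our reading of the study design.

```python
import numpy as np
import statsmodels.api as sm

def heatwave_flag(tmax, thresh=35.0, run=3):
    # A day belongs to a heat wave if it falls in a run of >= `run`
    # consecutive days whose daily maximum temperature reaches `thresh`.
    hot = tmax >= thresh
    flag = np.zeros(len(tmax), dtype=int)
    count = 0
    for i, h in enumerate(hot):
        count = count + 1 if h else 0
        if count >= run:
            flag[i - run + 1:i + 1] = 1
    return flag

rng = np.random.default_rng(1)
tmax = 28 + 10 * rng.random(1825)                # five years of synthetic daily Tmax
hw = heatwave_flag(tmax)
admissions = rng.poisson(5 * np.exp(0.15 * hw))  # counts inflated on heat-wave days
fit = sm.GLM(admissions, sm.add_constant(hw),
             family=sm.families.NegativeBinomial()).fit()
print(np.exp(fit.params[1]))                     # RR for heat-wave vs other days
```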

Keywords: mental disorders, admissions for F0-9 or F70-79, maximum temperature, heat waves

Procedia PDF Downloads 233
257 Prevalence and Factors Associated with Work Accidents in the Construction Sector in Benin: Cases of CFIR – Consulting

Authors: Antoine Vikkey Hinson, Menonli Adjobimey, Gemayel Ahmed Biokou, Rose Mikponhoue

Abstract:

Introduction: The construction industry is a critical concern for health and safety services worldwide. The World Health Organization revealed that work-related disease and trauma were responsible for the deaths of 1.9 million people in 2016. The aim of this study was to determine the prevalence of work accidents and the factors associated with their occurrence in a construction company in Benin. Method: It was a descriptive, cross-sectional and analytical study. Data analysis was performed with R software 4.1.1. In the multivariate analysis, we performed a binary logistic regression. Adjusted OR (ORa) association measures and their 95% confidence intervals [CI95%] are presented for the explanatory variables used in the final model. The significance threshold for all tests was 5% (p < 0.05). Result: 472 workers were included, of whom 452 (95.7%) were men, corresponding to a sex ratio of 22.6. The average age of the workers was 33 ± 8.8 years. Workers were mostly laborers (84.7%), and half declared having inadequate personal protective equipment (50.6%, n = 239). The prevalence of work accidents was 50.8%. Collision with rolling stock (25.8%), cuts (16.2%), and stumbling (16.2%) were the main types of work accidents on the construction site. Four factors were associated with the occurrence of work accidents: fatigue or exhaustion (ORa: 1.53 [1.03; 2.28]); the use of dangerous tools (ORa: 1.81 [1.22; 2.71]); the various laborers' jobs (ORa: 4.78 [2.62; 9.21]); and seniority in the company ≥ 4 years (ORa: 2.00 [1.35; 2.96]). Conclusion: This study allowed us to identify the associated factors. It is imperative to implement a rigorous occupational health and safety policy, notably continuing safety training for workers, the supply of appropriate work tools, and personal protective equipment.
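How such adjusted ORs and confidence intervals are obtained can be sketched as follows; the data are synthetic stand-ins for the survey, and the variable names are our labels.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the survey data (the real study had 472 workers).
rng = np.random.default_rng(0)
n = 472
X = np.column_stack([
    rng.integers(0, 2, n),   # fatigue / exhaustion
    rng.integers(0, 2, n),   # use of dangerous tools
    rng.integers(0, 2, n),   # laborer job
    rng.integers(0, 2, n),   # seniority >= 4 years
])
logit_p = -0.6 + X @ np.array([0.4, 0.6, 1.5, 0.7])
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])    # adjusted ORs
ci = np.exp(fit.conf_int()[1:])         # 95% CIs on the OR scale
print(odds_ratios, ci)
```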

Keywords: prevalence, work accident, associated factors, construction, benin

Procedia PDF Downloads 44
256 Heavy Metal Contamination of Mining-Impacted Mangrove Sediments and Its Correlation with Vegetation and Sediment Attributes

Authors: Jumel Christian P. Nicha, Severino G. Salmo III

Abstract:

This study investigated the concentration of heavy metals (HM) in the mangrove sediments of Lake Uacon, Zambales, Philippines, and assessed the relationships among the studied HM (Cr, Ni, Pb, Cu, Cd, Fe), the mangrove vegetation, and sediment characteristics. Fourteen sampling plots were designated across the lake (10 vegetated and 4 un-vegetated) based on distance from the mining effluents. In each plot, three sediment cores were collected at 20 cm depth. The dominant mangrove species recorded were (in order of dominance): Sonneratia alba, Rhizophora stylosa, Avicennia marina, Excoecaria agallocha and Bruguiera gymnorrhiza. Sediment samples were digested with aqua regia, and the HM concentrations were quantified using Atomic Absorption Spectroscopy (AAS). Results showed that HM concentrations were higher in the vegetated plots than in the un-vegetated sites. Vegetated sites had high Ni (mean: 881.71 mg/kg) and Cr (mean: 776.36 mg/kg) that exceeded the threshold values (cf. United States Environmental Protection Agency, USEPA). Fe, Pb, Cu and Cd had mean concentrations of 2597.92 mg/kg, 40.94 mg/kg, 36.81 mg/kg and 2.22 mg/kg, respectively. Vegetation variables were not significantly correlated with HM concentrations. However, the HM concentrations were significantly correlated with sediment variables, particularly pH, redox, particle size, nitrogen, phosphorus, moisture and organic matter contents. The Pollution Load Index (PLI) indicated moderate to high pollution in the lake. Risk assessment and management should be designed to mitigate the ecological risk posed by HM, together with a regular monitoring scheme for the lake and mangrove rehabilitation and management programs.
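The PLI is the geometric mean of per-metal contamination factors CF = C / C_background; the site means below are those reported above, while the background values are placeholders rather than the reference values used in the study.

```python
import numpy as np

def pollution_load_index(conc, background):
    # Tomlinson PLI: geometric mean of contamination factors; > 1 suggests pollution
    cf = np.asarray(conc) / np.asarray(background)
    return cf.prod() ** (1.0 / len(cf))

# Order: Cr, Ni, Pb, Cu, Cd, Fe (mg/kg); backgrounds are hypothetical
site = [776.36, 881.71, 40.94, 36.81, 2.22, 2597.92]
background = [90.0, 68.0, 20.0, 45.0, 0.3, 47200.0]
print(pollution_load_index(site, background))
```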

Keywords: heavy metals, mangrove vegetation, mining, Philippines, sediment

Procedia PDF Downloads 152
255 CT Medical Image Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet

Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi

Abstract:

One of the most challenging factors in medical imaging is noise. Image denoising refers to the improvement of a digital medical image that has been corrupted by noise such as Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise and AWGN. Computed tomography (CT) images suffer from low quality due to noise, and their quality depends directly on the dose absorbed by the patient, in the sense that increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), enhances CT image quality. Accordingly, noise reduction techniques that enhance image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2D) transformations, Curvelet and Contourlet, and Discrete Wavelet Transform (DWT) thresholding methods (BayesShrink and AdaptShrink), compared with each other. We also propose a new threshold in the wavelet domain for both noise reduction and edge retention; the proposed method retains the significant modified coefficients, resulting in good visual quality. Evaluations were carried out using two criteria: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
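For reference, the classical BayesShrink rule that the paper compares against can be sketched with PyWavelets; the authors' modified threshold is not reproduced here.

```python
import numpy as np
import pywt

def bayes_shrink(img, wavelet="db8", level=2):
    # Classical BayesShrink soft-thresholding (the baseline, not the
    # paper's modified threshold).
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise sigma from the finest diagonal subband (robust median estimator)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for details in coeffs[1:]:
        shrunk = []
        for d in details:
            sigma_x = np.sqrt(max(d.var() - sigma_n ** 2, 1e-12))
            t = sigma_n ** 2 / sigma_x          # BayesShrink threshold
            shrunk.append(pywt.threshold(d, t, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)

img = np.random.default_rng(0).normal(size=(256, 256))  # stand-in for a noisy CT slice
denoised = bayes_shrink(img)
```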

Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)

Procedia PDF Downloads 429
254 Optimal Risk and Financial Stability

Authors: Rahmoune Abdelhaq

Abstract:

Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this work, we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems composed of a number of banks connected by interbank linkages. We then vary the key parameters that define the structure of the financial system (its level of capitalization, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system) and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalized banks are, the more resilient the banking system is against contagious defaults, and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic: initially, a small increase in connectivity increases the contagion effect, but after a certain threshold value, connectivity improves the ability of a banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis, we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out, and we discuss why bank risk management is needed to achieve the optimal level of risk.
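The knock-on default mechanism can be sketched as a simple cascade on an interbank exposure matrix; this is a minimal illustration, not the paper's calibrated model.

```python
import numpy as np

def knockon_defaults(L, capital, shocked, loss_given_default=1.0):
    # L[i, j] = exposure of bank i to bank j (i's interbank asset).
    # A bank defaults once credit losses on defaulted counterparties
    # exceed its capital; iterate until the default set is stable.
    n = len(capital)
    defaulted = np.zeros(n, dtype=bool)
    defaulted[shocked] = True
    while True:
        losses = (L * defaulted[None, :]).sum(axis=1) * loss_given_default
        new = (losses >= capital) & ~defaulted
        if not new.any():
            return defaulted
        defaulted |= new

rng = np.random.default_rng(0)
L = rng.uniform(0, 2, (25, 25)) * (rng.random((25, 25)) < 0.2)  # sparse exposures
np.fill_diagonal(L, 0.0)
print(knockon_defaults(L, capital=np.full(25, 1.0), shocked=0).sum(), "defaults")
```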

Keywords: financial stability, contagion, liquidity risk, optimal risk

Procedia PDF Downloads 388
253 Physical Tests on Localized Fluidization in Offshore Suction Bucket Foundations

Authors: Li-Hua Luu, Alexis Doghmane, Abbas Farhat, Mohammad Sanayei, Pierre Philippe, Pablo Cuellar

Abstract:

Suction buckets are promising innovative foundations for offshore wind turbines. They generally feature the shape of an inverted bucket and rely on a suction system as the driving agent for their installation into the seabed. Water is pumped out of the buckets, which are initially placed to rest on the seabed, creating a net pressure difference across the lid that generates a seepage flow, lowers the soil resistance below the foundation skirt, and drives them effectively into the seabed. The stability of the suction mechanism as well as the possibility of a piping failure (i.e., localized fluidization within the internal soil plug) during installation are some of the key questions that remain open. The present work deals with an experimental study of localized fluidization by suction within a fixed bucket partially embedded in a submerged artificial soil made of spherical beads. The transient process, from the onset of granular motion until a stationary regime of fluidization at the embedded bucket wall is reached, is recorded using the combined optical techniques of planar laser-induced fluorescence and refractive index matching. To conduct a systematic study of the piping threshold for the seepage flow, we vary the bead size, the suction pressure, and the initial depth of the bucket. This experimental modelling, by dealing with erosion-related phenomena from a micromechanical perspective, should provide qualitative scenarios for the local processes at work that are so far missing in offshore practice.

Keywords: fluidization, micromechanical approach, offshore foundations, suction bucket

Procedia PDF Downloads 171
252 Evaluating Traffic Congestion Using the Bayesian Dirichlet Process Mixture of Generalized Linear Models

Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig

Abstract:

This study applied traffic speed and occupancy data to develop clustering models that identify different traffic conditions. These models are based on the Dirichlet Process Mixture of Generalized Linear regressions (DML) and change-point regression (CR). The model frameworks were implemented using 2015 historical traffic data aggregated at 15-minute intervals from the Interstate 295 freeway in Jacksonville, Florida. Using the deviance information criterion (DIC) to identify the appropriate number of mixture components, three traffic states were identified: free-flow, transitional, and congested. Results of the DML revealed that traffic occupancy is statistically significant in influencing the reduction of traffic speed in each of the identified states. Its influence on the free-flow and congested states was estimated to be higher than on the transitional flow condition in both the evening and morning peak periods. Estimation of the critical speed thresholds using CR revealed that 47 mph and 48 mph are the speed thresholds for the congested and transitional traffic conditions during the morning and evening peak hours, respectively. Free-flow speed thresholds for the morning and evening peak hours were estimated at 64 mph and 66 mph, respectively. The proposed approaches will facilitate accurate detection and prediction of traffic congestion for developing effective countermeasures.
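scikit-learn's truncated Dirichlet-process mixture gives a quick feel for the clustering step; the speed/occupancy data below are synthetic, and the paper's DML additionally embeds a regression of speed on occupancy within each state.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic speed/occupancy pairs mimicking 15-minute aggregates
rng = np.random.default_rng(0)
free = rng.normal([66, 8], [3, 2], size=(500, 2))
trans = rng.normal([48, 18], [4, 4], size=(300, 2))
cong = rng.normal([25, 35], [6, 6], size=(200, 2))
X = np.vstack([free, trans, cong])

dpgmm = BayesianGaussianMixture(
    n_components=8,                                   # upper bound; the DP prior prunes extras
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
states = dpgmm.predict(X)   # cluster labels: free-flow / transitional / congested
```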

Keywords: traffic congestion, multistate speed distribution, traffic occupancy, Dirichlet process mixtures of generalized linear model, Bayesian change-point detection

Procedia PDF Downloads 281
251 Identity and Disability in Contemporary East Asian Dance

Authors: Sanghyun Park

Abstract:

Influenced by the ideas of collectivism, East Asian contemporary dance is marked by an emphasis on unity and synchronization. A growing element of this discipline, one that disrupts its pursuit of perfection through coordination among multiple parties producing work of the highest artistic potential with the support of individuals or groups, is the presence of disabled dancers. Kawanaka Yo, a Japanese dancer with a mental disability, argues through her 'Dance of Peace' that a dancer should focus on her impulses and natural thoughts through improvisational dancing and the eschewal of documentation. Professor and poet Jung-Gyu Jeong, co-founder of the Korea Disability International Art Company, demonstrates with his company's modernized performances of popular works and musicals that disabled artists do not need perfection so long as they can assert their finesse to mimic, or create an equivalence with, able-bodied dancers. Yo has studied various forms of modern dance and ballet in Japan and has used her training to ease her mental disability, but also to accept her handicap as an extension of her identity, representing a trend in disabled dance that favors individuality and acceptance. In contrast, Jeong is an influential figure in South Korea for disabled dancers and artists, believing that disabled artists must overcome a certain threshold in order to reach a status equivalent to that of a 'normal artist.' East Asian art created by the disabled should not be judged according to different criteria or rubrics than able-bodied artists' work because, as Yo explains, a person's identity and her handicaps characterize the meaning and the value of the piece.

Keywords: disability studies, modern dance, East Asia, politics of identity

Procedia PDF Downloads 194
250 Design and Simulation of a Radiation Spectrometer Using Scintillation Detectors

Authors: Waleed K. Saib, Abdulsalam M. Alhawsawi, Essam Banoqitah

Abstract:

The idea of this research is to design a radiation spectrometer using an LSO scintillation detector coupled to a C-series SiPM (silicon photomultiplier). The device detects gamma and X-ray radiation and is also designed to estimate the activity of source contamination. The SiPM detects light in the visible range above the threshold and reads it as counts. Three gamma sources were used for the experiments, Cs-137, Am-241 and Co-60, with various activities. These sources were used in four experiments: operating the SiPM as a spectrometer, energy resolution, pile-up rejection, and efficiency. The SiPM is connected to an MCA to perform as a spectrometer. Cerium-doped lutetium silicate (Lu₂SiO₅), with a light yield of 26000 photons/MeV, is coupled with the SiPM. As a result, all the main features of Cs-137, Am-241 and Co-60 are identified in the MCA. The experiment shows how photon energy and the probability of interaction are inversely related: total attenuation decreases as photon energy increases. An analytical calculation was made to obtain the FWHM resolution for each gamma source. The FWHM resolution is 28.75% for Am-241 (59 keV), 7.85% for Cs-137 (662 keV), 4.46% for Co-60 (1173 keV) and 3.70% for Co-60 (1332 keV). Moreover, the experiment shows that the dead time and the number of counts decreased when pile-up rejection was disabled, and the FWHM decreased when pile-up rejection was enabled. The efficiencies were calculated at four distances from the detector face: 2, 4, 8 and 16 cm. The detection efficiency was observed to decline exponentially with increasing distance from the detector face. In conclusion, the SiPM board operated with an LSO scintillator crystal as a spectrometer, and its energy resolution for the three gamma sources compares well with other PMTs.
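The resolution figures follow from a Gaussian fit to each photopeak, with FWHM = 2.355 · sigma; a sketch with SciPy on a synthetic Cs-137-like peak (the data are made up, not the measured spectrum).

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def energy_resolution(channels, counts, p0):
    # Fit a Gaussian to the photopeak; FWHM = 2*sqrt(2*ln2)*sigma,
    # resolution (%) = 100 * FWHM / peak centroid.
    (a, mu, sigma), _ = curve_fit(gauss, channels, counts, p0=p0)
    fwhm = 2.3548 * abs(sigma)
    return 100.0 * fwhm / mu

# Synthetic Cs-137-like photopeak around channel 662
x = np.arange(560, 760)
y = gauss(x, 1000, 662, 22) + np.random.default_rng(0).poisson(20, x.size)
print(energy_resolution(x, y, p0=(1000, 660, 20)))  # ~7-8%, cf. 7.85% in the text
```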

Keywords: PMT, radiation, radiation detection, scintillation detectors, silicon photomultiplier, spectrometer

Procedia PDF Downloads 144
249 Design and Implementation of Low-Code Model-Building Methods

Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu

Abstract:

This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies the model deployment process. The core strength of this method lies in its ease of use and efficiency: users do not need a deep programming background and can complete the design and implementation of complex models with simple drag-and-drop operations. This greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method makes more efficient use of computing resources and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability, so it can adapt to the needs of different application scenarios.

Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment

Procedia PDF Downloads 7
248 An Investigation into Why Liquefaction Charts Work: A Necessary Step toward Integrating the States of Art and Practice

Authors: Tarek Abdoun, Ricardo Dobry

Abstract:

This paper is a systematic effort to clarify why field liquefaction charts based on Seed and Idriss' Simplified Procedure work so well. This is a necessary step toward integrating the states of the art (SOA) and practice (SOP) for evaluating liquefaction and its effects. The SOA relies mostly on laboratory measurements and correlations with the void ratio and relative density of the sand. The SOP is based on field measurements of penetration resistance and shear wave velocity coupled with empirical or semi-empirical correlations. This gap slows down further progress in both SOP and SOA. The paper accomplishes its objective through: a literature review of relevant aspects of the SOA, including factors influencing threshold shear strain and pore pressure buildup during cyclic strain-controlled tests; a discussion of factors influencing field penetration resistance and shear wave velocity; and a discussion of the meaning of the curves in the liquefaction charts separating liquefaction from no liquefaction, helped by recent full-scale and centrifuge results. It is concluded that the charts are curves of constant cyclic strain at the lower end (Vs1 < 160 m/s), with this strain being about 0.03 to 0.05% for earthquake magnitude Mw ≈ 7. It is also concluded, in a more speculative way, that the curves at the upper end probably correspond to a variable increasing cyclic strain and Ko, with this upper end controlled by overconsolidated and preshaken sands, and with the cyclic strains needed to cause liquefaction being as high as 0.1 to 0.3%. These conclusions are validated by application to case histories corresponding to Mw ≈ 7, mostly in the San Francisco Bay Area of California during the 1989 Loma Prieta earthquake.

Keywords: permeability, lateral spreading, liquefaction, centrifuge modeling, shear wave velocity charts

Procedia PDF Downloads 282