Search results for: predictive density functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6607

6157 Optical Properties of TlInSe₂ Single Crystals

Authors: Gulshan Mammadova

Abstract:

This paper presents the results of studying the surface microrelief of TlInSe₂ crystals in 2D and 3D models and analyzing the spectroscopy of the ternary TlInSe₂ compound. The analysis showed that a change in the composition of the TlInSe₂ crystal produces sharp changes in the microrelief of its surface. An X-ray diffraction analysis of the TlInSe₂ crystal was carried out experimentally. Based on ellipsometric data, the optical functions were determined: the real and imaginary parts of the dielectric permittivity, the coefficients of optical absorption and reflection, the dependence of energy losses and electric field power on the effective density, and the spectral dependences of the real (σᵣ) and imaginary (σᵢ) parts of the optical electrical conductivity. The fluorescence spectra of the ternary compound TlInSe₂ were isolated and analyzed under excitation by light with a wavelength of 532 nm. X-ray studies of TlInSe₂ showed that this phase crystallizes in the tetragonal system. Ellipsometric measurements showed that the real (ε₁) and imaginary (ε₂) parts of the dielectric constant are components of the dielectric tensor of the uniaxial compound under consideration and do not depend on the angle. Analysis of the dependence of the real and imaginary parts of the refractive index of the TlInSe₂ crystal on photon energy showed that they change in much the same way as the real and imaginary parts of the dielectric constant. The spectral dependences of the real (σᵣ) and imaginary (σᵢ) parts of the optical electrical conductivity show that the real part increases exponentially in the energy range 0.894-3.505 eV. In the energy range 0.654-2.91 eV, the imaginary part increases linearly, reaches a maximum, and then decreases above 2.91 eV.
At 3.6 eV, an inversion of the imaginary part of the optical electrical conductivity of the TlInSe₂ compound is observed. The graphs of effective power density versus electric field energy losses show that the effective power density increases significantly in the energy range 0.805-3.52 eV. The fluorescence spectrum of the ternary compound TlInSe₂ under excitation with 532 nm light was studied, establishing that this phase has luminescent properties.
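The optical functions named above follow from textbook relations between the complex refractive index (n, k) and the dielectric function, ε₁ = n² − k² and ε₂ = 2nk, with the real optical conductivity σᵣ = ε₀ωε₂. A minimal Python sketch with illustrative values, not measured TlInSe₂ data:

```python
import math

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
HBAR_EV = 6.582119569e-16  # reduced Planck constant, eV*s

def dielectric_from_nk(n, k):
    """Real and imaginary dielectric function from refractive index n
    and extinction coefficient k: eps1 = n^2 - k^2, eps2 = 2*n*k."""
    return n * n - k * k, 2.0 * n * k

def optical_conductivity_real(eps2, photon_energy_ev):
    """Real part of the optical conductivity sigma_r = eps0 * omega * eps2 (S/m)."""
    omega = photon_energy_ev / HBAR_EV  # angular frequency, rad/s
    return EPS0 * omega * eps2

# Illustrative (hypothetical) values, not TlInSe2 measurements:
eps1, eps2 = dielectric_from_nk(n=2.8, k=0.4)
sigma_r = optical_conductivity_real(eps2, photon_energy_ev=2.0)
```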

Keywords: optical properties, dielectric permittivity, real and imaginary dielectric permittivity, optical electrical conductivity

Procedia PDF Downloads 51
6156 Fuzzy Logic Based Fault Tolerant Model Predictive MLI Topology

Authors: Abhimanyu Kumar, Chirag Gupta

Abstract:

This work presents a comprehensive study on the employment of Model Predictive Control (MPC) for a three-phase voltage-source inverter to regulate the output voltage efficiently. The inverter is modeled via the Clarke Transformation, considering a scenario where the load is unknown. An LC filter model is developed, demonstrating its efficacy in Total Harmonic Distortion (THD) reduction. The system, when implemented with fault-tolerant multilevel inverter topologies, ensures reliable operation even under fault conditions, a requirement that is paramount with the increasing dependence on renewable energy sources. The research also integrates a Fuzzy Logic based fault tolerance system which identifies and manages faults, ensuring consistent inverter performance. The efficacy of the proposed methodology is substantiated through rigorous simulations and comparative results, shedding light on the voltage prediction efficiency and the robustness of the model even under fault conditions.
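The Clarke transformation used to model the inverter maps the three phase quantities onto a stationary two-axis frame, which is typically what an MPC voltage controller predicts against. A minimal amplitude-invariant sketch (the function name and values are illustrative, not from the paper):

```python
import math

def clarke(a, b, c):
    """Amplitude-invariant Clarke transform: three-phase (a, b, c) -> (alpha, beta)."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (1.0 / math.sqrt(3.0)) * (b - c)
    return alpha, beta

# A balanced three-phase set maps to a rotating vector of the phase amplitude:
theta = 0.3
a = math.cos(theta)
b = math.cos(theta - 2 * math.pi / 3)
c = math.cos(theta + 2 * math.pi / 3)
alpha, beta = clarke(a, b, c)  # alpha -> cos(theta), beta -> sin(theta)
```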

Keywords: total harmonic distortion, fuzzy logic, renewable energy sources, MLI

Procedia PDF Downloads 91
6155 Enhancement of Mechanical and Biological Properties in Wollastonite Bioceramics by MgSiO3 Addition

Authors: Jae Hong Kim, Sang Cheol Um, Jong Kook Lee

Abstract:

Strong and biocompatible wollastonite (CaSiO3) was fabricated by pressureless sintering in the temperature range 1250-1300 ℃, with the phase transition from α- to β-wollastonite induced by the addition of MgSiO3. Starting pure α-wollastonite powder was prepared by solid-state reaction, and MgSiO3 powder was added to induce the α-to-β phase transition above 1250 ℃. Samples sintered at 1250 ℃ with 5 and 10 wt% MgSiO3 consisted of α+β and β phase, respectively, and showed a higher densification rate than pure α- or β-wollastonite, almost reaching the theoretical density. Hardness and Young's modulus of the sintered wollastonite depended on the apparent density and the amount of β-wollastonite. The Young's modulus (78 GPa) of β-wollastonite with 10 wt% MgSiO3 addition was almost double that of sintered α-wollastonite. In-vitro tests showed that biphasic (α+β) wollastonite with 5 wt% MgSiO3 addition exhibited good bioactivity in simulated body fluid solution.

Keywords: β-wollastonite, high density, MgSiO3, phase transition

Procedia PDF Downloads 569
6154 Modified Estimating Equations for Deriving the Causal Effect on Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

Survival data consist of systematic observations from a defined time origin up to failure or censoring. Survival analysis is a major area of interest in biostatistics and biomedical research, and causality analysis lies at the heart of most scientific and medical research inquiries. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality differs from the simple association between the response variable and predictors: a causal estimate compares a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation function. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. The finite-sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time was estimated after adjusting for biases raised by the high correlation between left-truncation and the possibly time-varying covariates. The bias in covariates was corrected by estimating the density function of the left-truncation variable. Besides, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate.
Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation functions. In summary, the ratio of the cumulative hazard functions between the treated and untreated experimental groups captures the average causal effect for the entire population.
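The closing idea, that the ratio of cumulative hazard functions between treated and untreated groups summarizes the average causal effect, can be illustrated with a plain Nelson-Aalen estimator on toy data. This shows only the hazard-ratio summary, not the authors' modified estimating equations or EM step:

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard estimate at each distinct observed time.
    times: observed times; events: 1 = failure, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    H, out = 0.0, {}
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for (tt, e) in data if tt == t)  # failures at time t
        H += d / n_at_risk
        out[t] = H
        j = i
        while j < len(data) and data[j][0] == t:   # drop subjects leaving the risk set
            j += 1
        n_at_risk -= (j - i)
        i = j
    return out

# toy survival data (time, event indicator), not the Stanford data
treated = nelson_aalen([2, 3, 5, 7, 8], [1, 1, 0, 1, 1])
control = nelson_aalen([1, 2, 2, 4, 6], [1, 1, 1, 1, 0])
# crude causal-effect summary: cumulative-hazard ratio at late follow-up times
ratio = treated[7] / control[4]
```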

Keywords: modified estimating equations, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 154
6153 A High Content Screening Platform for the Accurate Prediction of Nephrotoxicity

Authors: Sijing Xiong, Ran Su, Lit-Hsin Loo, Daniele Zink

Abstract:

The kidney is a major target for the toxic effects of drugs, industrial and environmental chemicals, and other compounds. Typically, nephrotoxicity is detected late during drug development, and regulatory animal models have not solved this problem. Validated or accepted in silico or in vitro methods for the prediction of nephrotoxicity are not available. We have established the first and currently only pre-validated in vitro models for the accurate prediction of nephrotoxicity in humans, and the first predictive platforms based on renal cells derived from human pluripotent stem cells. In order to further improve the efficiency of our predictive models, we recently developed a high content screening (HCS) platform. This platform employs automated imaging in combination with automated quantitative phenotypic profiling and machine learning methods. 129 image-based phenotypic features were analyzed with respect to their predictive performance, using 44 compounds with different chemical structures that included drugs, environmental and industrial chemicals, and herbal and fungal compounds. The nephrotoxicity of these compounds in humans is well characterized. A combination of chromatin and cytoskeletal features resulted in high predictivity with respect to nephrotoxicity in humans. Test balanced accuracies of 82% or 89% were obtained with human primary or immortalized renal proximal tubular cells, respectively. Furthermore, our results revealed that a DNA damage response is commonly induced by different PTC-toxicants with diverse chemical structures and injury mechanisms. Together, the results show that the automated HCS platform allows efficient and accurate nephrotoxicity prediction for compounds with diverse chemical structures.
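The reported test balanced accuracy is the mean of sensitivity and specificity, which keeps the score honest when toxic and non-toxic compounds are unequally represented. A minimal computation on toy labels, not the study's data:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

# toy example: 1 = nephrotoxic, 0 = non-toxic
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
ba = balanced_accuracy(y_true, y_pred)  # 0.5 * (3/4 + 4/6)
```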

Keywords: high content screening, in vitro models, nephrotoxicity, toxicity prediction

Procedia PDF Downloads 300
6152 Uncertainty Estimation in Neural Networks through Transfer Learning

Authors: Ashish James, Anusha James

Abstract:

The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Motivated by these observations, this paper presents a framework that quantitatively estimates uncertainties by leveraging advances in transfer learning, through a slight modification of existing training pipelines. The algorithm is intended for deployment in real-world problems that already enjoy good predictive performance, by reusing pretrained models. The idea is to capture the behavior of the NN trained for the base task by augmenting it with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
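The core recipe, freezing a pretrained base predictor and training a supplementary model on its errors, can be sketched with ordinary least squares standing in for both networks. This is an illustrative stand-in only; the paper uses NNs and transfer learning:

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit of y = w*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

# synthetic data whose noise magnitude grows with x
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + (-1) ** i * (0.2 + 0.1 * x) for i, x in enumerate(xs)]

# step 1: the "base network" -- fit once, then freeze
w, b = fit_linear(xs, ys)
base = lambda x: w * x + b

# step 2: the "supplementary network" -- trained on the frozen model's
# absolute residuals, so it learns where the base model is unreliable
residuals = [abs(y - base(x)) for x, y in zip(xs, ys)]
wu, bu = fit_linear(xs, residuals)
uncertainty = lambda x: max(wu * x + bu, 0.0)
```

The uncertainty estimate grows with x, mirroring the growing noise, while the base predictor itself is left untouched.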

Keywords: uncertainty estimation, neural networks, transfer learning, regression

Procedia PDF Downloads 108
6151 Suitability Number of Coarse-Grained Soils and Relationships among Fineness Modulus, Density and Strength Parameters

Authors: Khandaker Fariha Ahmed, Md. Noman Munshi, Tarin Sultana, Md. Zoynul Abedin

Abstract:

Suitability number (SN) is perhaps one of the most important parameters of coarse-grained soil for assessing its appropriateness as a backfill in retaining structures, sand compaction piles, vibro compaction, and similar foundation and ground improvement works. Although determined empirically, it is worthwhile to study SN to understand its relation to other aggregate properties such as fineness modulus (FM) and the strength and density properties of sandy soil. The present paper reports the findings of a study examining these properties of sandy soil. Random numbers were generated to obtain the percent fineness on various sieve sizes, and the fineness modulus and suitability number were predicted. Sand samples were collected from the field, and test samples were prepared to determine the maximum density, minimum density, and shear strength parameter φ against particular fineness moduli and corresponding suitability numbers. Five samples with SN rated excellent (0-10) and three samples with SN rated fair (20-30) were taken, and the relevant tests were performed. The laboratory data were statistically analyzed. Results show that FM decreases as SN increases. Within the SN range rated excellent (0-10), φ shows a decreasing trend for higher SN values. SN is found to depend on various combinations of grain-size properties such as D10, D30, and D20, D50. Strong linear relationships were obtained between SN and FM (R² = 0.93) and between SN and φ (R² = 0.94). Correlation equations are proposed to define the relationships among SN, φ, and FM.
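A common empirical formula for SN, attributed to Brown (1977) and widely used in vibro-compaction practice, combines three characteristic grain sizes (in mm). A sketch with the rating bands quoted in the abstract; the example grain sizes are hypothetical:

```python
import math

def suitability_number(d50, d20, d10):
    """Brown's suitability number for vibro-compaction backfill:
    SN = 1.7 * sqrt(3/D50^2 + 1/D20^2 + 1/D10^2), grain sizes in mm."""
    return 1.7 * math.sqrt(3.0 / d50**2 + 1.0 / d20**2 + 1.0 / d10**2)

def rating(sn):
    """Conventional SN rating bands (0-10 excellent, 20-30 fair, etc.)."""
    if sn <= 10:
        return "excellent"
    if sn <= 20:
        return "good"
    if sn <= 30:
        return "fair"
    if sn <= 50:
        return "poor"
    return "unsuitable"

# hypothetical medium sand: D50 = 2.0 mm, D20 = 0.9 mm, D10 = 0.5 mm
sn = suitability_number(d50=2.0, d20=0.9, d10=0.5)
```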

Keywords: density, fineness modulus, shear strength parameter, suitability number

Procedia PDF Downloads 91
6150 Breast Cancer Mortality and Comorbidities in Portugal: A Predictive Model Built with Real World Data

Authors: Cecília M. Antão, Paulo Jorge Nogueira

Abstract:

Breast cancer (BC) is the leading cause of cancer mortality among Portuguese women. This retrospective observational study aimed to identify comorbidities associated with female BC patients admitted to Portuguese public hospitals (2010-2018), to investigate the effect of comorbidities on the BC mortality rate, and to build a predictive model using logistic regression. Results showed that BC mortality in Portugal decreased over this period and reached 4.37% in 2018. Adjusted odds ratios indicated that secondary malignant neoplasms of the liver and of bone and bone marrow, congestive heart failure, and diabetes were associated with an increased chance of dying from breast cancer. Although the Lisbon district (the most populated area) accounted for the largest percentage of BC patients, the logistic regression model showed that, besides the patient's age, residence in the Bragança, Castelo Branco, or Porto districts was directly associated with an increased mortality rate.
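The adjusted odds ratios reported here come from exponentiating logistic-regression coefficients. A minimal illustration with a hypothetical coefficient and standard error, not values from the paper:

```python
import math

def odds_ratio(beta):
    """Adjusted odds ratio from a logistic-regression coefficient: OR = exp(beta)."""
    return math.exp(beta)

def odds_ratio_ci(beta, se, z=1.96):
    """Approximate 95% confidence interval for the odds ratio."""
    return math.exp(beta - z * se), math.exp(beta + z * se)

# hypothetical coefficient for a comorbidity indicator (not from the paper):
odds = odds_ratio(0.8)          # ~2.23: the comorbidity roughly doubles the odds
lo, hi = odds_ratio_ci(0.8, 0.2)
```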

Keywords: breast cancer, comorbidities, logistic regression, adjusted odds ratio

Procedia PDF Downloads 69
6149 What the Future Holds for Social Media Data Analysis

Authors: P. Wlodarczak, J. Soar, M. Ally

Abstract:

The dramatic rise in the use of Social Media (SM) platforms such as Facebook and Twitter provides access to an unprecedented amount of user data. Users may post reviews of products and services they bought, write about their interests, share ideas, or give their opinions and views on political issues. There is growing interest among organisations in the analysis of SM data for detecting new trends, obtaining user opinions on their products and services, or finding out about their online reputations. A recent research trend in SM analysis is making predictions based on sentiment analysis of SM. Often, indicators of historic SM data are represented as time series and correlated with a variety of real-world phenomena such as the outcome of elections, the development of financial indicators, box office revenue, and disease outbreaks. This paper examines the current state of research in the area of SM mining and predictive analysis and gives an overview of the analysis methods using opinion mining and machine learning techniques.

Keywords: social media, text mining, knowledge discovery, predictive analysis, machine learning

Procedia PDF Downloads 409
6148 Tsunami Wave Height and Flow Velocity Calculations Based on Density Measurements of Boulders: Case Studies from Anegada and Pakarang Cape

Authors: Zakiul Fuady, Michaela Spiske

Abstract:

Inundation events such as storms and tsunamis can leave onshore sedimentary evidence such as sand deposits or large boulders. These deposits store indirect information on the related inundation parameters (e.g., flow velocity, flow depth, wave height). One tool to reveal these parameters is inverse modeling, which uses the physical characteristics of the deposits to infer the magnitude of inundation. This study used boulders of the 2004 Indian Ocean Tsunami from Thailand (Pakarang Cape) and from a historical tsunami event that inundated the outer British Virgin Islands (Anegada). For the largest boulder found at Pakarang Cape, with a volume of 26.48 m³, the required tsunami wave height is 0.44 m and the storm wave height is 1.75 m (for a bulk density of 1.74 g/cm³). At Pakarang Cape the highest tsunami wave height is 0.45 m and the storm wave height is 1.8 m for transporting a 20.07 m³ boulder. On Anegada, the largest boulder, with a diameter of 2.7 m, is a single coral head (species Diploria sp.) with a bulk density of 1.61 g/cm³, and requires a minimum tsunami wave height of 0.31 m and a storm wave height of 1.25 m. The highest required tsunami wave height on Anegada is 2.12 m for a boulder with a bulk density of 2.46 g/cm³ (volume 0.0819 m³), and the highest storm wave height is 5.48 m (volume 0.216 m³) for the same bulk density; the coral type is limestone. Generally, the higher the bulk density, volume, and weight of the boulders, the higher the minimum tsunami and storm wave heights required to initiate transport. A flow velocity of 4.05 m/s is required by Nott's equation (2003) and 3.57 m/s by Nandasena et al. (2011) to transport the largest boulder at Pakarang Cape, whereas on Anegada a flow velocity of 3.41 m/s is required by both equations to transport a boulder with a diameter of 2.7 m. Thus, boulder equations need to be handled with caution: first, because they make many assumptions and simplifications; second, the physical boulder parameters, such as density and volume, need to be determined carefully to minimize errors.
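The caution about assumptions can be made concrete with a simplified sliding force balance, drag against friction on a submerged boulder. This is an illustrative balance, not the exact Nott (2003) or Nandasena et al. (2011) formulation, and the flow-facing area, drag coefficient, and friction coefficient below are assumed:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def sliding_flow_velocity(volume, area, rho_s, rho_w=1025.0, cd=1.95, mu=0.7):
    """Minimum flow velocity (m/s) to slide a submerged boulder, from the
    simplified balance  0.5 * Cd * rho_w * v^2 * A >= mu * (rho_s - rho_w) * g * V.
    Not the exact published boulder-transport equations."""
    return math.sqrt(2.0 * mu * (rho_s - rho_w) * G * volume / (cd * rho_w * area))

# largest Pakarang Cape boulder: 26.48 m^3, bulk density 1.74 g/cm^3 (from the
# abstract); the 10 m^2 flow-facing area is an assumed value
v_min = sliding_flow_velocity(volume=26.48, area=10.0, rho_s=1740.0)
```

With these assumed coefficients the result lands in the same few-m/s range as the published values quoted above, but changing Cd, mu, or the assumed area shifts it noticeably, which is exactly the sensitivity the abstract warns about.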

Keywords: tsunami wave height, storm wave height, flow velocity, boulders, Anegada, Pakarang Cape

Procedia PDF Downloads 219
6147 Aerodynamic Design of a UAV for Agricultural Spraying Using Genetic Algorithm Optimization

Authors: Saul A. Torres Z., Eduardo Liceaga C., Alfredo Arias M.

Abstract:

Agriculture is among the world's main sources of economic activity and of meeting global needs, so care of crops is extremely important for owners and workers; one of the major causes of product loss is pest infestation by different types of organisms. We seek to develop a UAV for agricultural spraying at a maximum altitude of 5000 meters above sea level, with a payload of 100 liters of fumigant. For developing the aerodynamic design of the aircraft, computational tools such as the "Athena Vortex Lattice" software, "MATLAB", "ANSYS FLUENT", and the "XFoil" package, among others, are used. Structured programming methods and an exhaustive analysis of optimization and search methods are also employed. The results have a very low margin of error, and the multi-objective formulation can be helpful for future developments. The program has 10 functions developed in MATLAB; these functions are related to each other to enable the development of the design, and all of them are controlled by the principal code "Master.m".

Keywords: aerodynamic design, optimization, genetic algorithm, multi-objective problem, stability, vortex

Procedia PDF Downloads 512
6146 Aliasing Free and Additive Error in Spectra for Alpha Stable Signals

Authors: R. Sabre

Abstract:

This work focuses on symmetric alpha-stable processes with continuous time, frequently used to model signals with indefinitely growing variance and often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To that end, we propose a method based on smoothing the observations via a Jackson polynomial kernel, taking into account the width of the interval where the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when estimation is made from discrete observations of a continuous-time process. We have studied the convergence rate of the estimator and shown that it improves when the spectral density is zero at the origin. Thus, we construct an estimator of the additive error that can be subtracted to approach the original signal without error.

Keywords: spectral density, stable processes, aliasing, non-parametric

Procedia PDF Downloads 115
6145 A Mega-Analysis of the Predictive Power of Initial Contact within Minimal Social Network

Authors: Cathal Ffrench, Ryan Barrett, Mike Quayle

Abstract:

It is accepted in social psychology that categorization leads to ingroup favoritism, with little thought given to the processes that may co-occur with or even precede categorization. These categorizations move away from the conceptualization of the self as a unique social being toward an increasingly collective identity. Subsequently, many individuals derive much of their self-evaluation from these collective identities. The seminal literature on this topic argues that it is primarily categorization that evokes instances of ingroup favoritism. In response to these theories, we argue that categorization acts to enhance and further intergroup processes rather than defining them. More precisely, we propose that categorization aids initial ingroup contact, and that this first contact is predictive of subsequent favoritism at individual and collective levels. This analysis focuses on studies based on the Virtual Interaction APPLication (VIAPPL), a software interface that builds on the flaws of the original minimal group studies. The VIAPPL allows the exchange of tokens in an intra- and inter-group manner. This token exchange is how we classified first contact. The study involves binary longitudinal analysis to better understand the subsequent exchanges of individuals based on whom they first interacted with. Studies were selected on the criteria of evidence of explicit first interactions and two-group designs. Our findings paint a compelling picture in support of a motivated-contact hypothesis, which holds that an individual's first motivated contact toward another has strong predictive capability for future behavior. This contact can lead to habit formation and specific favoritism towards individuals with whom contact has been established. This has important implications for understanding how group conflict occurs and how intra-group individual bias can develop.

Keywords: categorization, group dynamics, initial contact, minimal social networks, momentary contact

Procedia PDF Downloads 130
6144 Applying Element Free Galerkin Method on Beam and Plate

Authors: Mahdad M’hamed, Belaidi Idir

Abstract:

This paper develops a meshless approach, the Element-Free Galerkin (EFG) method, which is based on the weak form of the partial differential governing equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight-function property based on randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the partial differential governing equations is obtained by the weighted residual method within a simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it requires no mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method. They show that the EFG method is highly efficient to implement and highly accurate in computation. The present method is used to analyze the static deflection of beams and of a plate with a hole.
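The MLS approximation at the heart of EFG solves a small weighted least-squares system at every evaluation point. A one-dimensional sketch with a linear basis p = [1, x] and a truncated Gaussian weight; the support radius and weight shape are illustrative choices, not the paper's:

```python
import math

def mls_approx(x, nodes, values, support=1.5):
    """1D moving least squares with linear basis p = [1, x]: assembles the
    2x2 moment system A(x) a = B(x) u from weighted nodes and returns p(x)^T a."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for xi, ui in zip(nodes, values):
        r = abs(x - xi) / support
        if r >= 1.0:
            continue                      # node outside the support radius
        w = math.exp(-(r / 0.4) ** 2)     # truncated Gaussian weight
        a11 += w
        a12 += w * xi
        a22 += w * xi * xi
        b1 += w * ui
        b2 += w * xi * ui
    det = a11 * a22 - a12 * a12
    c0 = (a22 * b1 - a12 * b2) / det
    c1 = (a11 * b2 - a12 * b1) / det
    return c0 + c1 * x

nodes = [i * 0.5 for i in range(11)]      # scattered nodes on [0, 5]
values = [2.0 * xi + 1.0 for xi in nodes] # a linear field
u = mls_approx(2.3, nodes, values)
```

With a linear basis, MLS reproduces linear fields exactly (u comes back as 2*2.3 + 1), which is the consistency property the EFG shape functions rely on.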

Keywords: numerical computation, element-free Galerkin (EFG), moving least squares (MLS), meshless methods

Procedia PDF Downloads 268
6143 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

With 40% of total world energy consumption, building systems are developing into technically complex large energy consumers suitable for application of sophisticated power management approaches to largely increase the energy efficiency and even make them active energy market participants. Centralized control system of building heating and cooling managed by economically-optimal model predictive control shows promising results with estimated 30% of energy efficiency increase. The research is focused on implementation of such a method on a case study performed on two floors of our faculty building with corresponding sensors wireless data acquisition, remote heating/cooling units and central climate controller. Building walls are mathematically modeled with corresponding material types, surface shapes and sizes. Models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecast, people behavior and comfort demands are all taken into account for deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, wind turbine, supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. Computational burden of applying model predictive control on such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control, where the former is designed as higher hierarchical level with pre-calculated price-optimal power flows control, and latter is designed as lower level control responsible to ensure thermal comfort and exploit the optimal supply conditions enabled by microgrid energy flows management. Such an approach is expected to enable the inclusion of more complex building subsystems into consideration in order to further increase the energy efficiency.
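The receding-horizon logic, predicting zone temperature over a horizon, costing energy against comfort, and applying only the first move, can be sketched with a one-zone model and brute-force enumeration. The model coefficients, setpoint, and prices below are invented; the paper's MPC is economically optimal over a detailed multi-zone building model:

```python
from itertools import product

def mpc_heating(T0, T_out, horizon=4, powers=(0.0, 1.0, 2.0)):
    """Brute-force MPC for a toy one-zone thermal model
    T[k+1] = T[k] + 0.1*(T_out - T[k]) + 0.5*u[k].
    The cost trades energy use against deviation from a 21 C setpoint."""
    best_cost, best_seq = float("inf"), None
    for seq in product(powers, repeat=horizon):
        T, cost = T0, 0.0
        for u in seq:
            T = T + 0.1 * (T_out - T) + 0.5 * u
            cost += 0.2 * u + (T - 21.0) ** 2  # energy price + comfort penalty
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]  # receding horizon: apply only the first move

u0 = mpc_heating(T0=18.0, T_out=5.0)  # cold outside -> heat at full power
```

Real controllers replace the enumeration with a quadratic or mixed-integer program, which is what makes the hierarchical decomposition described above necessary at building scale.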

Keywords: price-optimal building climate control, microgrid power flow optimisation, hierarchical model predictive control, energy efficient buildings, energy market participation

Procedia PDF Downloads 447
6142 A Study of Area-Level Mosquito Abundance Prediction Using a Supervised Machine Learning Point-Level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinates) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model that relies on public, open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making the point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it to understand which parameters have a positive or negative impact. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level prediction and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) from two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent.
The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance, in contrast to the raw number of sampling points, which is not informative about performance without the size of the area. Additionally, the distance between the sampling points and the real in-situ measurements used for training did not strongly affect the performance.
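The aggregation step, sampling the area, predicting point-wise, and averaging, can be sketched as follows. The point model is a hypothetical stand-in for the trained EO-based predictor, and plain uniform sampling over a bounding box replaces the Poisson hard-core scheme:

```python
import random

def area_level_prediction(point_model, bbox, n_samples=500, seed=42):
    """Estimate an area-level abundance by averaging a point-level model
    over randomly sampled locations inside the area's bounding box."""
    random.seed(seed)
    xmin, ymin, xmax, ymax = bbox
    preds = []
    for _ in range(n_samples):
        x = random.uniform(xmin, xmax)
        y = random.uniform(ymin, ymax)
        preds.append(point_model(x, y))
    return sum(preds) / len(preds)

# hypothetical point-level predictor: abundance rises toward larger x
model = lambda x, y: 10.0 + 2.0 * x
mean_abundance = area_level_prediction(model, (0.0, 0.0, 5.0, 5.0))
# expectation over the box: 10 + 2 * E[x] = 10 + 2 * 2.5 = 15
```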

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 126
6141 Towards a Strategic Framework for State-Level Epistemological Functions

Authors: Mark Darius Juszczak

Abstract:

While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic – digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor the consequences of it have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US Federal policy, like that of virtually all other countries, maintains, at the national state level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, departments such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are in the constant process of creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions – they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history where a proxy state-level epistemological strategy existed was between 1961 and 1969 when the Kennedy Administration committed the United States to the Apollo program. 
While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex that it required a massive redirection of state-level epistemological functions; in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat to the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.

Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program

Procedia PDF Downloads 65
6140 Calibration of Hybrid Model and Arbitrage-Free Implied Volatility Surface

Authors: Kun Huang

Abstract:

This paper investigates whether the combination of local and stochastic volatility models can be calibrated exactly to any arbitrage-free implied volatility surface of European options. The risk-neutral Brownian Bridge density is applied to calibrate the leverage function of our hybrid model. Furthermore, the tails of the marginal risk-neutral density are generated by a Generalized Extreme Value distribution in order to capture the properties of asset returns. The local volatility is generated from the arbitrage-free implied volatility surface using the stochastic volatility inspired (SVI) parameterization.
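The SVI parameterization mentioned above has a well-known raw form for total implied variance as a function of log-moneyness. The sketch below uses illustrative parameter values, not values calibrated to any real surface:

```python
import math

def svi_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI parameterization of total implied variance w(k) = sigma_BS^2 * T
    as a function of log-moneyness k."""
    return a + b * (rho * (k - m) + math.sqrt((k - m) ** 2 + sigma ** 2))

# Illustrative parameters (not fitted to any real surface)
params = dict(a=0.04, b=0.1, rho=-0.4, m=0.0, sigma=0.2)

# Implied volatility at maturity T recovered from total variance
T = 1.0
for k in (-0.2, 0.0, 0.2):
    w = svi_total_variance(k, **params)
    iv = math.sqrt(w / T)
    print(f"k={k:+.1f}  total variance={w:.4f}  implied vol={iv:.3f}")
```

In practice the five parameters are fitted per maturity slice subject to no-arbitrage constraints; the paper's exact calibration procedure is not given in the abstract.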

Keywords: arbitrage free implied volatility, calibration, extreme value distribution, hybrid model, local volatility, risk-neutral density, stochastic volatility

Procedia PDF Downloads 250
6139 An Innovative High Energy Density Power Pack for Portable and Off-Grid Power Applications

Authors: Idit Avrahami, Alex Schechter, Lev Zakhvatkin

Abstract:

This research focuses on developing a compact and light Hydrogen Generator (HG), coupled with fuel cells (FC), to provide a High-Energy-Density Power-Pack (HEDPP) solution with roughly ten times the energy density of Li-ion batteries. The HEDPP is designed for portable and off-grid power applications such as drones and UAVs, stationary off-grid power sources, unmanned marine vehicles, and more. Hydrogen is stored safely as a chemical powder at room temperature and ambient pressure and is released only when the power is on. Hydrogen generation is based on a stabilized chemical reaction of Sodium Borohydride (SBH) and water. The proposed solution enables a ‘No Storage’ Hydrogen-based Power Pack. Hydrogen is produced and consumed on the spot, during operation; therefore, there is no need for high-pressure hydrogen tanks, which are large, heavy, and unsafe. In addition to its high energy density, ease of use, and safety, the presented power pack has the significant advantage of versatility and deployability in numerous applications and scales. This patented HG was demonstrated using several prototypes in our lab and proved feasible and highly efficient for several applications. For example, in applications where water is available (such as marine vehicles, water and sewage infrastructure, and stationary applications), the energy density of the suggested power pack may reach 2700-3000 Wh/kg, more than 10 times that of conventional lithium-ion batteries. In other applications (e.g., UAVs or small vehicles), the energy density may exceed 1000 Wh/kg.
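The quoted energy densities can be sanity-checked from the standard SBH hydrolysis stoichiometry, NaBH₄ + 2 H₂O → NaBO₂ + 4 H₂. The sketch below is a back-of-envelope estimate, not the paper's calculation; the fuel-cell efficiency is an assumption, and reactor and FC mass are ignored:

```python
# Theoretical energy density of a sodium borohydride (SBH) hydrolysis power pack.
# Stoichiometry: NaBH4 + 2 H2O -> NaBO2 + 4 H2
M_NABH4 = 37.83    # g/mol
M_H2O   = 18.02    # g/mol
M_H2    = 2.016    # g/mol

h2_mass_per_mol = 4 * M_H2      # grams of H2 released per mol of SBH
LHV_H2_WH_PER_G = 33.3          # lower heating value of H2, Wh/g
FC_EFFICIENCY   = 0.5           # assumed fuel-cell conversion efficiency

# Case 1: water supplied by the environment (marine / stationary use):
# only the SBH powder counts toward system mass.
e_sbh_only = h2_mass_per_mol * LHV_H2_WH_PER_G * FC_EFFICIENCY / (M_NABH4 / 1000)

# Case 2: water carried on board (airborne use).
e_with_water = h2_mass_per_mol * LHV_H2_WH_PER_G * FC_EFFICIENCY / (
    (M_NABH4 + 2 * M_H2O) / 1000)

print(f"SBH only:    {e_sbh_only:.0f} Wh/kg")
print(f"SBH + water: {e_with_water:.0f} Wh/kg")
```

These gross figures (roughly 3500 and 1800 Wh/kg under the stated assumptions) are consistent in order of magnitude with the 2700-3000 Wh/kg and >1000 Wh/kg values claimed for the respective use cases.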

Keywords: hydrogen energy, sodium borohydride, fixed-wing UAV, energy pack

Procedia PDF Downloads 65
6138 Predicting Machine-Down of Woodworking Industrial Machines

Authors: Matteo Calabrese, Martin Cimmino, Dimos Kapetis, Martina Manfrin, Donato Concilio, Giuseppe Toscano, Giovanni Ciandrini, Giancarlo Paccapeli, Gianluca Giarratana, Marco Siciliano, Andrea Forlani, Alberto Carrotta

Abstract:

In this paper, we describe a machine learning methodology for Predictive Maintenance (PdM) applied to woodworking industrial machines. PdM is a prominent strategy consisting of all the operational techniques and actions required to ensure machine availability and to prevent machine-down failures. One of the challenges of the PdM approach is to design and develop an embedded smart system that monitors the health status of the machine. The proposed approach allows screening multiple connected machines simultaneously, thus providing real-time monitoring that can be integrated with maintenance management. This is achieved by applying temporal feature engineering techniques and training an ensemble of classification algorithms to predict the Remaining Useful Lifetime of woodworking machines. The effectiveness of the methodology is demonstrated by testing on an independent sample of additional woodworking machines that had not presented a machine-down event.
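The temporal feature engineering step described above can be sketched as rolling-window statistics over a sensor signal. The window size, feature set, and synthetic signal below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def temporal_features(signal, window=8):
    """Rolling-window temporal features (mean, std, linear slope) over a
    1-D sensor signal -- the kind of inputs an RUL classifier ensemble
    could be trained on."""
    feats = []
    t = np.arange(window)
    for i in range(window, len(signal) + 1):
        w = signal[i - window:i]
        slope = np.polyfit(t, w, 1)[0]   # linear trend within the window
        feats.append((w.mean(), w.std(), slope))
    return np.array(feats)

# Synthetic vibration-like signal with a slowly degrading trend
rng = np.random.default_rng(0)
sig = 0.02 * np.arange(200) + rng.normal(0, 1, 200)
X = temporal_features(sig)
print(X.shape)   # one feature row per window position
```

Each feature row would then be labeled (e.g., by time-to-failure bucket) and fed to an ensemble of classifiers.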

Keywords: predictive maintenance, machine learning, connected machines, artificial intelligence

Procedia PDF Downloads 203
6137 Derivation of a Risk-Based Level of Service Index for Surface Street Network Using Reliability Analysis

Authors: Chang-Jen Lan

Abstract:

The current Level of Service (LOS) index adopted in the Highway Capacity Manual (HCM) for signalized intersections on surface streets is based on the intersection average delay. The delay thresholds defining LOS grades are subjective and unrelated to critical traffic conditions. For example, an intersection delay of 80 sec per vehicle for failing LOS grade F does not necessarily correspond to the intersection capacity. Also, a specific measure of average delay may result from delay minimization, delay equalization, or other meaningful optimization criteria. To that end, a reliability version of the intersection critical degree of saturation (v/c) is introduced as the LOS index. Traditionally, the level of saturation at a signalized intersection is defined as the ratio of the critical volume sum (per lane) to the average saturation flow (per lane) during all available effective green time within a cycle. The critical sum is the sum of the maximal conflicting movement-pair volumes in the northbound/southbound and eastbound/westbound rights of way. In this study, both movement volume and saturation flow are assumed to follow log-normal distributions. Because, when the conditions of the central limit theorem hold, the product of independent, positive random variables tends toward a log-normal distribution in the limit, the critical degree of saturation is expected to be log-normally distributed as well. Derivation of the risk index predictive limits is complex due to the maximum and absolute value operators, as well as the ratio of random variables. A fairly accurate functional form for the predictive limit at a user-specified significance level is derived. The predictive limit is then compared with the designated LOS thresholds for the intersection critical degree of saturation (denoted as X
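The distribution of the critical degree of saturation described above can be explored by Monte Carlo simulation. The paper derives an analytical predictive limit; the sketch below only illustrates the construction with made-up volumes, coefficients of variation, and green ratio:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def lognormal(mean, cv, size):
    """Draw log-normal samples with a given arithmetic mean and
    coefficient of variation."""
    sigma2 = np.log(1 + cv ** 2)
    mu = np.log(mean) - sigma2 / 2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

# Illustrative conflicting movement-pair volumes (veh/h/lane) per right of way
ns = np.maximum(lognormal(420, 0.15, N), lognormal(380, 0.15, N))  # N-S critical pair
ew = np.maximum(lognormal(350, 0.15, N), lognormal(300, 0.15, N))  # E-W critical pair
sat_flow = lognormal(1800, 0.08, N)   # saturation flow (veh/h/lane)
g_over_c = 0.9                        # assumed effective green ratio

X = (ns + ew) / (sat_flow * g_over_c)   # critical degree of saturation

# A 95th-percentile predictive limit as a risk-based LOS index
print(f"mean X = {X.mean():.3f}, 95% predictive limit = {np.percentile(X, 95):.3f}")
```

The maximum operator on the movement pairs is what makes a closed-form predictive limit difficult, which is the complexity the abstract refers to.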

Keywords: reliability analysis, level of service, intersection critical degree of saturation, risk based index

Procedia PDF Downloads 118
6136 Characterization of Retinal Pigmented Cell Epithelium Cell Sheet Cultivated on Synthetic Scaffold

Authors: Tan Yong Sheng Edgar, Yeong Wai Yee

Abstract:

Age-related macular degeneration (AMD) is one of the leading causes of blindness. It can cause severe visual loss due to damaged retinal pigment epithelium (RPE). The RPE is an important component of the retinal tissue. It functions as a transducing boundary for visual perception, making it essential for sight. The RPE also functions as a metabolically complex and functional cell layer that is responsible for local homeostasis and maintenance of the extra-photoreceptor environment. Thus, one of the suggested methods of treating such diseases is to regenerate these RPE cells. As such, we intend to grow these cells on a synthetic scaffold to provide a stable environment that reduces the batch effects found in natural scaffolds. The stiffness of the scaffold will also be investigated to determine the optimal Young’s modulus for cultivating these cells. The cells will be grown into a monolayer cell sheet, and their functions, such as the formation of tight junctions and gene expression patterns, will be assessed to evaluate the cell sheet quality compared to native RPE tissue.

Keywords: RPE, scaffold, characterization, biomaterials, colloids and nanomedicine

Procedia PDF Downloads 418
6135 Evaluating the Suitability and Performance of Dynamic Modulus Predictive Models for North Dakota’s Asphalt Mixtures

Authors: Duncan Oteki, Andebut Yeneneh, Daba Gedafa, Nabil Suleiman

Abstract:

Most agencies lack the equipment required to measure the dynamic modulus (|E*|) of asphalt mixtures, necessitating the use of predictive models. This study compared measured |E*| values for nine North Dakota asphalt mixes with predictions from the original Witczak, modified Witczak, and Hirsch models. The influence of temperature on the |E*| models was investigated, and Pavement ME simulations were conducted using measured |E*| and predictions from the most accurate |E*| model. The results revealed that the original Witczak model yielded the lowest Se/Sy and highest R² values, indicating the lowest bias and highest accuracy, while the poorest overall performance was exhibited by the Hirsch model. Using predicted |E*| as inputs in Pavement ME generated conservative distress predictions compared to using measured |E*|. The original Witczak model was recommended for predicting |E*| for low-reliability pavements in North Dakota.
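The Se/Sy and R² statistics used above to rank the models can be computed as follows. The |E*| values in the example are illustrative, not the study's data, and degrees-of-freedom conventions for Se vary between agencies:

```python
import numpy as np

def goodness_of_fit(measured, predicted):
    """Se/Sy and R^2 as commonly used to rank |E*| predictive models:
    Se is the standard error of the prediction, Sy the standard deviation
    of the measured values; lower Se/Sy and higher R^2 mean less bias.
    (n-1 degrees of freedom used here; conventions vary.)"""
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    se = np.sqrt(np.sum((measured - predicted) ** 2) / (len(measured) - 1))
    sy = np.std(measured, ddof=1)
    r2 = 1 - np.sum((measured - predicted) ** 2) / np.sum(
        (measured - measured.mean()) ** 2)
    return se / sy, r2

# Illustrative |E*| values in MPa across a frequency sweep (not the study's data)
measured  = [12000, 9500, 6100, 3200, 1500]
predicted = [11500, 9900, 5800, 3400, 1700]
ratio, r2 = goodness_of_fit(measured, predicted)
print(f"Se/Sy = {ratio:.3f}, R^2 = {r2:.3f}")
```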

Keywords: asphalt mixture, binder, dynamic modulus, MEPDG, pavement ME, performance, prediction

Procedia PDF Downloads 29
6134 Modeling Thermionic Emission from Carbon Nanotubes with Modified Richardson-Dushman Equation

Authors: Olukunle C. Olawole, Dilip Kumar De

Abstract:

We have modified the Richardson-Dushman equation to account for thermal expansion of the lattice and the change of chemical potential with temperature in the material. The corresponding modified Richardson-Dushman (MRDE) equation fits the experimental data of thermionic current density (J) vs. T from carbon nanotubes quite well. It provides a unique technique for the accurate determination of the work function W₀ and the Fermi energy E_F0 at 0 K, and of the linear thermal expansion coefficient of a carbon nanotube, in good agreement with experiment. From the value of E_F0, we obtain the charge carrier density in excellent agreement with experiment. We describe the application of the equations to evaluating the performance of a concentrated solar thermionic energy converter (STEC) with an emitter made of carbon nanotubes for future applications.
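The classical Richardson-Dushman equation underlying the modification is J = A T² exp(−W/kT). The abstract does not give the exact functional form of the correction, so the sketch below stands in a simple, assumed linear temperature dependence of the effective work function for the paper's thermal-expansion and chemical-potential terms:

```python
import math

A_RD = 1.20173e6      # Richardson constant, A m^-2 K^-2
K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def current_density(T, w0_ev, alpha_ev_per_k=0.0):
    """Thermionic current density J = A T^2 exp(-W(T) / kT).
    alpha_ev_per_k = 0 gives the classical Richardson-Dushman equation;
    a nonzero value adds an assumed linear temperature dependence of the
    work function, standing in for the paper's corrections."""
    w_eff = w0_ev + alpha_ev_per_k * T
    return A_RD * T ** 2 * math.exp(-w_eff / (K_B_EV * T))

# Compare classical vs. modified J (A/m^2) for an illustrative W0 = 4.6 eV
for T in (1500, 2000, 2500):
    print(T, f"{current_density(T, 4.6):.3e}", f"{current_density(T, 4.6, -2e-4):.3e}")
```

Fitting measured J(T) data with such a form is what lets W₀ and the expansion-related coefficient be extracted simultaneously.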

Keywords: carbon nanotube, modified Richardson-Dushman equation, fermi energy at 0 K, charge carrier density

Procedia PDF Downloads 360
6133 Multimodal Optimization of Density-Based Clustering Using Collective Animal Behavior Algorithm

Authors: Kristian Bautista, Ruben A. Idoy

Abstract:

A bio-inspired metaheuristic algorithm based on the theory of collective animal behavior (CAB) was integrated with density-based clustering modeled as a multimodal optimization problem. The algorithm was tested on synthetic data and on the Iris, Glass, Pima, and Thyroid data sets in order to measure its effectiveness relative to the CDE-based clustering algorithm. Upon preliminary testing, it was found that one of the parameter settings used was ineffective in performing clustering when applied to the algorithm, prompting further investigation. It was revealed that fine-tuning the distance δ₃, which determines the extent to which a given data point will be clustered, helped improve the quality of the cluster output. Even though the modification of δ₃ significantly improved the solution quality and cluster output of the algorithm, results suggest that there is no difference between the population means of the solutions obtained using the original and modified parameter settings for all data sets. This implies that using either the original or the modified parameter setting will not affect obtaining the best global and local animal positions. Results also suggest that the CDE-based clustering algorithm is better than the CAB-density clustering algorithm for all data sets. Nevertheless, the CAB-density clustering algorithm is still a good clustering algorithm because it correctly identified the number of classes of some data sets more frequently in a thirty-trial run, with a much smaller standard deviation, showing potential for clustering high-dimensional data sets. Thus, the researcher recommends further investigation of the post-processing stage of the algorithm.
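The role of δ₃ can be illustrated with a minimal membership rule: a point joins the nearest candidate cluster centre (a "dominant animal position" in CAB terms) only if it lies within δ₃. This is a simplified sketch of the thresholding idea, not the CAB algorithm itself:

```python
import numpy as np

def assign_clusters(points, centers, delta3):
    """A point joins the nearest candidate cluster centre only if it lies
    within delta3; otherwise it stays unclustered (-1). Illustrates why
    tuning delta3 changes the cluster output."""
    labels = np.full(len(points), -1)
    for i, p in enumerate(points):
        d = np.linalg.norm(centers - p, axis=1)
        j = d.argmin()
        if d[j] <= delta3:
            labels[i] = j
    return labels

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers = np.array([[0.0, 0.0], [3.0, 3.0]])

for delta3 in (0.5, 1.5):
    labels = assign_clusters(pts, centers, delta3)
    print(f"delta3={delta3}: clustered {np.sum(labels >= 0)} of {len(pts)} points")
```

A too-small δ₃ leaves many points unclustered, which is consistent with the ineffective parameter setting reported in the preliminary testing.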

Keywords: clustering, metaheuristics, collective animal behavior algorithm, density-based clustering, multimodal optimization

Procedia PDF Downloads 208
6132 Effectiveness of the Lacey Assessment of Preterm Infants to Predict Neuromotor Outcomes of Premature Babies at 12 Months Corrected Age

Authors: Thanooja Naushad, Meena Natarajan, Tushar Vasant Kulkarni

Abstract:

Background: The Lacey Assessment of Preterm Infants (LAPI) is used in clinical practice to identify premature babies at risk of neuromotor impairments, especially cerebral palsy. This study attempted to establish the validity of the LAPI for predicting neuromotor outcomes of premature babies at 12 months corrected age and to compare its predictive ability with that of brain ultrasound. Methods: This prospective cohort study included 89 preterm infants (45 females and 44 males) born below 35 weeks gestation who were admitted to the neonatal intensive care unit of a government hospital in Dubai. The initial assessment was done using the Lacey assessment after the babies reached 33 weeks postmenstrual age. Follow-up assessment of neuromotor outcomes was done at 12 months (± 1 week) corrected age using two standardized outcome measures, the Infant Neurological International Battery and the Alberta Infant Motor Scale. Brain ultrasound data were collected retrospectively. Data were statistically analyzed, and the diagnostic accuracy of the LAPI was calculated both when used alone and in combination with brain ultrasound. Results: In comparison with brain ultrasound, the Lacey assessment showed superior specificity (96% vs. 77%), a higher positive predictive value (57% vs. 22%), and a higher positive likelihood ratio (18 vs. 3) for predicting neuromotor outcomes at one year of age. The sensitivity of the Lacey assessment was lower than that of brain ultrasound (66% vs. 83%), whereas specificity was similar (97% vs. 98%). Combining the Lacey assessment and brain ultrasound results gave higher sensitivity (80%), positive (66%) and negative (98%) predictive values, positive likelihood ratio (24), and test accuracy (95%) than the Lacey assessment alone in predicting neurological outcomes. The negative predictive value of the Lacey assessment was similar to that of its combination with brain ultrasound (96%).
Conclusion: The results of this study suggest that the Lacey assessment of preterm infants can be used as a supplementary assessment tool for premature babies in the neonatal intensive care unit. Due to its high specificity, the Lacey assessment can identify babies at low risk of abnormal neuromotor outcomes at a later age. When used alongside the findings of brain ultrasound, the Lacey assessment has better sensitivity for identifying preterm babies at particular risk. These findings have applications in identifying premature babies who may benefit from early intervention services.
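The diagnostic statistics reported above all derive from a 2×2 contingency table. The counts in the sketch below are illustrative (chosen only to roughly reproduce the reported percentages); the study's actual tables are not given in the abstract:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test statistics from a 2x2 table: the quantities
    (sensitivity, specificity, PPV, NPV, LR+, accuracy) compared in the study."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                lr_positive=lr_pos, accuracy=acc)

# Illustrative counts only -- not taken from the study
m = diagnostic_metrics(tp=4, fp=3, fn=2, tn=80)
for k, v in m.items():
    print(f"{k}: {v:.2f}")
```

Note how a high specificity with few true positives drives the positive likelihood ratio up even when sensitivity is modest, which is the pattern the abstract reports for the Lacey assessment alone.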

Keywords: brain ultrasound, lacey assessment of preterm infants, neuromotor outcomes, preterm

Procedia PDF Downloads 124
6131 Psychological Testing in Industrial/Organizational Psychology: Validity and Reliability of Psychological Assessments in the Workplace

Authors: Melissa C. Monney

Abstract:

Psychological testing has been of interest to researchers for many years, providing useful tools for assessing and diagnosing various disorders as well as for understanding human behavior. However, for over 20 years now, researchers and laypersons alike have been interested in using these tests for other purposes, such as informing decisions on employee selection, promotion, and even termination. In recent years, psychological assessments have been useful in facilitating workplace decision processes regarding employee circulation within organizations. This literature review explores four of the most commonly used psychological tests in workplace environments, namely cognitive ability, emotional intelligence, integrity, and personality tests, as organizations have used these tests to assess different factors of human behavior as predictive measures of future employee behavior. The findings suggest that, while there is much controversy and debate regarding the validity and reliability of these tests in workplace settings (as they were not originally designed for these purposes), their use in the workplace has helped decrease costs and employee turnover and increase job satisfaction by ensuring that the right employees are selected for their roles.

Keywords: cognitive ability, personality testing, predictive validity, workplace behavior

Procedia PDF Downloads 226
6130 Hydraulic Characteristics of the Tidal River Dongcheon in Busan City

Authors: Young Man Cho, Sang Hyun Kim

Abstract:

Even though various management practices such as sediment dredging were attempted to improve the water quality of Dongcheon, located in Busan, the environmental condition of this stream deteriorated. Therefore, Busan metropolitan city has pumped and diverted sea water to the upstream reach of Dongcheon for several years. This study explored the hydraulic characteristics of Dongcheon to configure the best management practice for ecological restoration and water quality improvement of a man-made urban stream. Intensive field investigation indicates that average flow velocities at depths of 20% and 80% of the water depth ranged from 5 to 10 cm/s and from 2 to 5 cm/s, respectively. Concentrations of dissolved oxygen at all depths were less than 0.25 mg/l during the low-tide period. Even though a density difference can be found along the stream depth, density currents seem rarely generated in Dongcheon. The short duration of the high-tide period and the shallow depths are responsible for the well-mixed nature of Dongcheon.

Keywords: hydraulic, tidal river, density current, sea water

Procedia PDF Downloads 208
6129 Enhancing of Paraffin Wax Properties by Adding of Low Density Polyethylene (LDPE)

Authors: Siham Mezher Yousif, Intisar Yahiya Mohammed, Salma Nagem Mouhy

Abstract:

Low-density polyethylene (LDPE) is a petroleum-based thermoplastic resin, whereas wax is an oily organic material that consists of alkanes, esters, polyesters, and hydroxyl esters. The purpose of this research is to find the optimum conditions for wax modified by the addition of LDPE. The experiments were carried out by mixing different percentages of wax and LDPE to produce different polymer/wax compositions. Lower values of penetration, thickness, and electrical conductivity were obtained with an increasing LDPE/wax mixing ratio, giving 19 mm penetration, 692 micron thickness, and 5.9 mA electrical conductivity at 90 wt% LDPE/wax (the maximum mixing ratio). It was found that the optimum results regarding penetration, enamel thickness, and electrical conductivity, judged according to enamel hardness, insulation properties, and economic aspects, are 20 mm, 276 micron, and 6.2 mA, respectively.

Keywords: paraffin wax, low density polyethylene, blending, mixing ratio, bleaching

Procedia PDF Downloads 92
6128 Intraspecific Response of the Ciliate Tetrahymena thermophila to Copper and Thermal Stress

Authors: Doufoungognon Carine Kone

Abstract:

Heavy metals present in large quantities in ecosystems can alter biological and cellular functions and disrupt trophic functions. However, their toxicity can change according to thermal conditions, as it depends on their bioavailability and on the thermal optimum of organisms. Organisms can develop different tolerance strategies to maintain themselves in a stressful environment, but these strategies are often studied in a single-stressor context. This study evaluates the responses of the ciliate Tetrahymena thermophila to copper, high temperature, and their interaction. Six genotypes were exposed to a gradient of copper concentrations ranging from 0 to 350 mg/L in synthetic media at three temperatures: 15°C, 23°C, and 31°C. Cell density, cell shape and size (and their variances), swimming speed and trajectory, and copper uptake rate were measured. Depending on the genotype, swimming speed, trajectory, and cell size were strongly affected by the stress gradients: one genotype became larger, two became smaller, and the others remained unchanged. Some genotypes swam more slowly, while others sped up as copper concentration and temperature increased. Concerning copper uptake, the two genotypes that accumulated the most and the least copper, whatever the copper concentration or temperature, were also those that had the highest densities. Finally, very few temperature x copper interactions were observed in the phenotypic parameters. The diversity of phenotypic responses revealed in this study reflects the existence of divergent strategies adopted by Tetrahymena thermophila to resist copper and thermal stress, which suggests an important role of intraspecific variability in the response of biodiversity to environmental stress. One general and surprising pattern was the near-total absence of interactive effects between copper and high-temperature exposure on the observed phenotypic responses.

Keywords: ciliate, copper, intraspecific variability, phenotype, temperature, tolerance, multiple stressors

Procedia PDF Downloads 58