Search results for: point estimate method
23312 Assessment of Incidence and Predictors of Mortality Among HIV Positive Children on ART in Public Hospitals of Harer Town Who Were Enrolled From 2011 to 2021
Authors: Getahun Nigusie Demise
Abstract:
Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information on the long-term effect of treatment on the survival of HIV-positive children, especially in the study area. Objective: The aim of this study is to assess the incidence and predictors of mortality among HIV-positive children on antiretroviral therapy (ART) in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in ART clinics from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate the survival probability at specific points in time after the introduction of ART. The Kaplan-Meier survival curve together with the log-rank test was used to compare survival between different categories of covariates, and a multivariate Cox proportional hazards regression model was used to estimate adjusted hazard ratios. Variables with p-values ≤ 0.25 in the bivariable analysis were candidates for the multivariable analysis. Finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2549.6 child-years (30596 child-months), with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. Their median survival time was 112 months (95% CI: 101–117). There were 38 children with unknown outcomes, 39 deaths, and 55 children transferred out to different facilities. The overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively. Being in WHO clinical stage four (AHR = 4.55, 95% CI: 1.36, 15.24), having anemia (AHR = 2.56, 95% CI: 1.11, 5.93), low baseline absolute CD4 count (AHR = 2.95, 95% CI: 1.22, 7.12), stunting (AHR = 4.1, 95% CI: 1.11, 15.42), wasting (AHR = 4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR = 3.37, 95% CI: 1.25, 9.11), having TB infection at enrollment (AHR = 3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR = 7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening for and management of these predictors are required.
Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, treatment, Ethiopia
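To make the survival-analysis workflow above concrete, here is a minimal, illustrative sketch of a Kaplan-Meier estimate and a Cox proportional hazards fit using the lifelines Python library; the small data frame and its covariates are hypothetical placeholders, not the study's dataset.

```python
# Illustrative sketch only: Kaplan-Meier survival estimation and a Cox
# proportional hazards fit of the kind described in the abstract.
# The data frame below is hypothetical, not the study's dataset.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "time_months": [6, 14, 48, 60, 112, 9, 30, 75, 24, 96],  # follow-up time
    "died":        [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],           # 1 = death observed
    "who_stage4":  [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],           # baseline covariates
    "anemia":      [1, 1, 0, 0, 0, 1, 0, 1, 1, 0],
})

# Kaplan-Meier curve: overall survival probability over follow-up time
km = KaplanMeierFitter()
km.fit(durations=df["time_months"], event_observed=df["died"])
print(km.survival_function_.head())

# Cox proportional hazards model: adjusted hazard ratios for the covariates
# (a small penalizer is used because the toy dataset is tiny)
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="time_months", event_col="died")
print(cph.summary)
```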
Procedia PDF Downloads 48
23311 Hansen Solubility Parameter from Surface Measurements
Authors: Neveen AlQasas, Daniel Johnson
Abstract:
Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process that is initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane separation properties. Understanding the mechanism of the initiation phase of biofouling is key to eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials to each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two, and therefore the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad solvents using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film. Results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents used are taken into account in the analysis of the contact angle values, prediction of the HSP from contact angle measurements is possible. This was carried out via training of a neural network model. The trained neural network model has three inputs: the contact angle value and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the used solvent and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
Keywords: surface characterization, Hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements
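As a rough illustration of the neural-network step described above, the sketch below trains a small regressor that maps contact angle, solvent surface tension, and solvent viscosity to an HSP distance; the numeric training values are made-up placeholders, not the measured data from the five films.

```python
# Illustrative sketch: a small neural network that maps (contact angle,
# solvent surface tension, solvent viscosity) to an HSP distance.
# The training values are placeholders, not the measured data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: contact angle (deg), surface tension (mN/m), viscosity (mPa*s)
X = np.array([
    [15.0, 18.4, 0.31],
    [42.0, 27.9, 0.55],
    [63.0, 47.7, 1.00],
    [78.0, 58.2, 1.49],
    [88.0, 72.8, 0.89],
])
y = np.array([3.2, 6.8, 12.5, 17.1, 21.4])  # HSP distance (MPa^0.5), placeholder

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)

# Predict the HSP distance for a new solvent/film pair
print(model.predict([[55.0, 40.0, 0.75]]))
```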
Procedia PDF Downloads 94
23310 Risk Analysis in Road Transport of Dangerous Goods Using Complex Multi-Criteria Analysis Method
Authors: Zoran Masoničić, Siniša Dragutinović, Ivan Lazović
Abstract:
In the management and organization of the road transport of dangerous goods, in addition to the existing influential criteria and restrictions that apply to road transport in general, it is necessary to include additional criteria related to the safety of people and the environment, considering the danger that comes from the substances being transported. In that manner, the decision-making process becomes a very complex and challenging task that calls for the application of complex numerical multi-criteria analysis methods. In this paper, some initial results of applying a complex multi-criteria analysis method to the decision-making process are presented. Additionally, a method for minimizing or even completely eliminating the subjective element in the decision-making process is provided. The results obtained can be used to point towards measures that have to be applied in order to minimize or completely annihilate the influence of the identified risk sources.
Keywords: road transport, dangerous goods, risk analysis, risk evaluation
Procedia PDF Downloads 16
23309 Effect of Hydraulic Diameter on Flow Boiling Instability in a Single Microtube with Vertical Upward Flow
Authors: Qian You, Ibrahim Hassan, Lyes Kadem
Abstract:
An experiment is conducted to fundamentally investigate flow oscillation characteristics in different sizes of single microtubes in the vertical upward flow direction. The three microtubes have hydraulic diameters of 0.889 mm, 0.533 mm, and 0.305 mm, with an identical heated length of 100 mm. The mass flux of the working fluid FC-72 varies from 700 kg/m2•s to 1400 kg/m2•s, and the heat flux is uniformly applied on the tube surface up to 9.4 W/cm2. The subcooled inlet temperature is maintained around 24°C during the experiment. The effects of hydraulic diameter and mass flux are studied. The results show that the two interact in determining the occurrence and behavior of flow oscillations. The onset of flow instability (OFI), which is a threshold of unstable flow, usually appears in the large microtube with diversified and sustained flow oscillations, while the transient point, at which the flow suddenly turns from one stable state to another, is observed more often in the small microtube, without characteristic flow oscillations, due to bubble confinement. The OFI/transient point occurs earlier as the hydraulic diameter decreases at a given mass flux. Increased mass flux can delay the occurrence of the OFI/transient point at large hydraulic diameters but has no significant effect at small sizes. Although only the transient point is observed in the smallest tube, it appears at small heat flux and is not sensitive to mass flux; hence, the smallest microtube is not recommended, since increasing the heat flux may cause local dryout.
Keywords: flow boiling instability, hydraulic diameter effect, a single microtube, vertical upward flow
Procedia PDF Downloads 600
23308 Comparison of Finite Difference Schemes for Numerical Study of Ripa Model
Authors: Sidrah Ahmed
Abstract:
River and lake flows are modeled mathematically by the shallow water equations, which are depth-averaged Reynolds-averaged Navier-Stokes equations under the Boussinesq approximation. Temperature stratification dynamics influence the water quality and mixing characteristics and are driven mainly by atmospheric conditions, including air temperature, wind velocity, and radiative forcing. Experimental observations are commonly taken along vertical scales and are not sufficient to estimate the small turbulence effects that temperature variations induce in shallow flows. Wind shear stress over the water surface also influences flow patterns, heat fluxes, and the thermodynamics of water bodies. Hence, it is crucial to couple temperature gradients with the shallow water model to estimate atmospheric effects on flow patterns. The Ripa system was introduced to study ocean currents as a variant of the shallow water equations with the addition of temperature variations within the flow. The Ripa model is a hyperbolic system of partial differential equations because all the eigenvalues of the system's Jacobian matrix are real and distinct. The time steps of a numerical scheme are estimated from the eigenvalues of the system. The solution to the Riemann problem of the Ripa model is composed of shocks, contact waves, and rarefaction waves. Solving the Ripa model with Riemann initial data using central schemes is difficult due to the eigenstructure of the system. This work presents a comparison of four different finite difference schemes for the numerical solution of the Riemann problem for the Ripa model. These schemes are the Lax-Friedrichs, Lax-Wendroff, and MacCormack schemes and a higher-order finite difference scheme with the WENO method. The numerical flux functions in both dimensions are approximated according to these methods. The temporal accuracy is achieved by employing the TVD Runge-Kutta method. Numerical tests are presented to examine the accuracy and robustness of the applied methods. It is revealed that the Lax-Friedrichs scheme produces results with oscillations, while the Lax-Wendroff and higher-order difference schemes produce considerably better results.
Keywords: finite difference schemes, Riemann problem, shallow water equations, temperature gradients
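For reference, below is a minimal sketch of the simplest of the compared schemes, the Lax-Friedrichs update, applied to the one-dimensional Ripa system with a flat bottom; the grid, initial jump data, and constants are illustrative and do not reproduce the paper's test cases.

```python
# Minimal sketch: first-order Lax-Friedrichs update for the 1D Ripa system
#   h_t + (h u)_x = 0
#   (h u)_t + (h u^2 + g h^2 theta / 2)_x = 0
#   (h theta)_t + (h u theta)_x = 0
# Grid, initial data, and constants are illustrative only.
import numpy as np

g, N, L, t_end = 9.81, 400, 1.0, 0.05
dx = L / N
x = (np.arange(N) + 0.5) * dx

# Riemann-type initial data: a jump in depth and temperature at x = 0.5
h = np.where(x < 0.5, 2.0, 1.0)
u = np.zeros(N)
theta = np.where(x < 0.5, 1.5, 1.0)
U = np.vstack([h, h * u, h * theta])          # conserved variables (3 x N)

def flux(U):
    h, hu, htheta = U
    u = hu / h
    theta = htheta / h
    return np.vstack([hu, hu * u + 0.5 * g * h**2 * theta, hu * theta])

t = 0.0
while t < t_end:
    h, u, theta = U[0], U[1] / U[0], U[2] / U[0]
    c = np.sqrt(g * h * theta)                # gravity-wave speed of the Ripa system
    dt = 0.4 * dx / np.max(np.abs(u) + c)     # CFL-limited time step
    F = flux(U)
    # Lax-Friedrichs: U_i^{n+1} = (U_{i-1} + U_{i+1})/2 - dt/(2 dx) (F_{i+1} - F_{i-1})
    Unew = U.copy()
    Unew[:, 1:-1] = 0.5 * (U[:, :-2] + U[:, 2:]) \
        - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
    U = Unew                                   # boundary cells kept fixed (crude outflow)
    t += dt

print("final depth range:", U[0].min(), U[0].max())
```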
Procedia PDF Downloads 203
23307 On Parameter Estimation of Simultaneous Linear Functional Relationship Model for Circular Variables
Authors: N. A. Mokhtar, A. G. Hussin, Y. Z. Zubairi
Abstract:
This paper proposes a new simultaneous simple linear functional relationship model by assuming equal error variances. We derive the maximum likelihood estimates of the parameters in the simultaneous model and their covariance. We show by a simulation study that the small bias values of the parameter estimates suggest the suitability of the estimation method. As an illustration, the proposed simultaneous model is applied to real wind direction and wave direction data measured by two different instruments.
Keywords: simultaneous linear functional relationship model, Fisher information matrix, parameter estimation, circular variables
Procedia PDF Downloads 366
23306 A Survey on Fixed Point Iterations in Modular Function Spaces and an Application to ODE
Authors: Hudson Akewe
Abstract:
This research presents complementary results, with wider applications, on the convergence and rate of convergence of classical fixed point theory in Banach spaces, extended to the theory of fixed points of mappings defined in classes of spaces of measurable functions, known in the literature as modular function spaces. The study gives a comprehensive survey of various iterative fixed point results for the classes of multivalued ρ-contractive-like, ρ-quasi-contractive-like, ρ-quasi-contractive, ρ-Zamfirescu, and ρ-contraction mappings in the framework of modular function spaces. An example is presented to demonstrate the applicability of the implicit-type iterative schemes to a system of ordinary differential equations. Furthermore, numerical examples are given to show the rate of convergence of the various explicit Kirk-type and implicit Kirk-type iterative schemes under consideration. Our results complement the results obtained on normed and metric spaces in the literature. Also, our methods of proof serve as a guide to obtaining several similar improved results for nonexpansive, pseudo-contractive, and accretive type mappings.
Keywords: implicit Kirk-type iterative schemes, multivalued mappings, convergence results, fixed point
Procedia PDF Downloads 128
23305 New Estimation in Autoregressive Models with Exponential White Noise by Using Reversible Jump MCMC Algorithm
Authors: Suparman Suparman
Abstract:
The white noise in an autoregressive (AR) model is often assumed to be normally distributed. In applications, however, the white noise often does not follow a normal distribution. This paper aims to estimate the parameters of an AR model that has exponential white noise. A Bayesian method is adopted. A prior distribution for the parameters of the AR model is selected, and this prior is combined with the likelihood function of the data to obtain a posterior distribution. Based on this posterior distribution, a Bayesian estimator for the parameters of the AR model is obtained. Because the order of the AR model is itself treated as a parameter, this Bayesian estimator cannot be calculated explicitly. To resolve this problem, the reversible jump Markov chain Monte Carlo (MCMC) method is adopted. As a result, the parameters of the AR model, including its order, can be estimated simultaneously.
Keywords: autoregressive (AR) model, exponential white noise, Bayesian, reversible jump Markov chain Monte Carlo (MCMC)
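The following sketch illustrates the flavour of the approach for a fixed-order AR(1) model with exponential white noise, using a plain Metropolis sampler; it deliberately omits the reversible-jump step that the paper uses to sample the unknown model order, and the priors and data are assumed for illustration.

```python
# Rough sketch: Bayesian estimation of a fixed-order AR(1) model with
# exponential white noise via a plain Metropolis sampler.  The paper's
# method additionally treats the AR order as unknown and samples it with
# reversible-jump MCMC, which this illustration omits.
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from x_t = phi * x_{t-1} + e_t, e_t ~ Exponential(rate=lam)
phi_true, lam_true, n = 0.6, 2.0, 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.exponential(1.0 / lam_true)

def log_post(phi, lam):
    if lam <= 0:
        return -np.inf
    e = x[1:] - phi * x[:-1]              # implied noise terms
    if np.any(e < 0):                     # exponential noise must be non-negative
        return -np.inf
    # flat prior on phi, Exponential(1) prior on lam (illustrative choices)
    return (n - 1) * np.log(lam) - lam * e.sum() - lam

samples, cur = [], (0.5, 1.0)
cur_lp = log_post(*cur)
for _ in range(20000):
    prop = (cur[0] + 0.02 * rng.normal(), cur[1] + 0.05 * rng.normal())
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:   # Metropolis accept/reject
        cur, cur_lp = prop, prop_lp
    samples.append(cur)

phi_s, lam_s = np.array(samples[5000:]).T          # discard burn-in
print("posterior means:", phi_s.mean(), lam_s.mean())
```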
Procedia PDF Downloads 355
23304 Kinetic Façade Design Using 3D Scanning to Convert Physical Models into Digital Models
Authors: Do-Jin Jang, Sung-Ah Kim
Abstract:
In designing a kinetic façade, it is hard for the designer to make digital models due to its complex geometry in motion. This paper aims to present a methodology for converting a point cloud of a physical model into a single digital model with a certain topology and motion. The method uses a Microsoft Kinect sensor, and color markers were defined and applied to three paper folding-inspired designs. Although the resulting digital model cannot represent the whole folding range of the physical model, the method enables the designer to conduct a performance-oriented design process with the rough physical model in the reduced folding range.
Keywords: design media, kinetic facades, tangible user interface, 3D scanning
Procedia PDF Downloads 413
23303 Development and Modeling of a Geographic Information System Solar Flux in Adrar, Algeria
Authors: D. Benatiallah, A. Benatiallah, K. Bouchouicha, A. Harouz
Abstract:
The development and operation of renewable energy have seen important growth worldwide, with significant further potential. Estimating the solar radiation at a terrestrial geographic locality is of extreme importance, firstly to choose the appropriate site for solar systems (solar power plants for electricity generation, for example) and also for the design and performance analysis of any system using solar energy. In addition, solar radiation measurements are limited to only a few areas in Algeria. Thus, we use theoretical approaches to assess the solar radiation at a given location. The Adrar region is one of the most favorable sites for solar energy use, with a mean daily irradiation exceeding 7 kWh/m2/d and more than 3500 hours of sunshine per year. Our goal in this work is the creation of a data bank of solar energy data for the Adrar region, by year and by month, and the integration of these data into a Geographic Information System (GIS) to estimate the solar flux at any location on the map.
Keywords: Adrar, flow, GIS, deposit potential
Procedia PDF Downloads 374
23302 An Overbooking Model for Car Rental Service with Different Types of Cars
Authors: Naragain Phumchusri, Kittitach Pongpairoj
Abstract:
Overbooking is a very useful revenue management technique that could help reduce costs caused by either undersales or oversales. In this paper, we propose an overbooking model for two types of cars that can minimize the total cost for a car rental service. With two types of cars, there is an upgrade possibility from the lower type to the upper type. This makes the model more complex than the single-car-type scenario. We have found that convexity can be proved in this case. Sensitivity analysis is conducted to observe the effects of relevant parameters on the optimal solution. Model simplification is proposed using multiple linear regression analysis, which can help estimate the optimal overbooking level using appropriate independent variables. The results show that the overbooking level from the multiple linear regression model is relatively close to the optimal solution (with an adjusted R-squared value of at least 72.8%). To evaluate the performance of the proposed model, the total cost was compared with the case where the decision maker uses a naïve method for the overbooking level. It was found that the total cost from the optimal solution is only 0.5 to 1 percent (on average) lower than the cost from the regression model, while it is approximately 67% lower than the cost obtained by the naïve method. This indicates that the proposed simplification method using regression analysis performs effectively in estimating the overbooking level.
Keywords: overbooking, car rental industry, revenue management, stochastic model
Procedia PDF Downloads 172
23301 Simulating Economic Order Quantity and Reorder Point Policy for a Repairable Items Inventory System
Authors: Mojahid F. Saeed Osman
Abstract:
A repairable-items inventory system is a management tool that incorporates all information concerning inventory levels and movements for repaired and new items. This paper presents the development of an effective simulation model for managing the inventory of repairable items in a production system where production lines send their faulty items to a repair shop, considering stochastic failure behavior and repair times. The developed model imitates the process of handling the on-hand inventory of repaired items and the replenishment of the inventory of new items using the Economic Order Quantity and Reorder Point ordering policy in a flexible and risk-free environment. We demonstrate the appropriateness and effectiveness of the proposed simulation model using an illustrative case problem. The developed simulation model can be used as a reliable tool for estimating a healthy on-hand inventory of new and repaired items, backordered items, and downtime due to unavailability of repaired items, and for validating and examining the Economic Order Quantity and Reorder Point ordering policy, which will be compared with other ordering strategies in future work.
Keywords: inventory system, repairable items, simulation, maintenance, economic order quantity, reorder point
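For orientation, the classical quantities exercised by such a simulation can be computed directly; the short sketch below evaluates the Economic Order Quantity and Reorder Point formulas with assumed demand, cost, and lead-time figures rather than the case problem's actual numbers.

```python
# Illustrative sketch: Economic Order Quantity (EOQ) and Reorder Point (ROP)
# for the new-items replenishment policy.  All input figures are assumed.
import math

annual_demand = 1200.0      # new items needed per year
order_cost = 50.0           # fixed cost per replenishment order
holding_cost = 2.5          # holding cost per item per year
lead_time_days = 10.0       # supplier lead time
safety_stock = 15.0         # buffer against stochastic failures/repair times

daily_demand = annual_demand / 365.0

# EOQ = sqrt(2 * D * S / H)
eoq = math.sqrt(2.0 * annual_demand * order_cost / holding_cost)

# ROP = expected demand during the lead time + safety stock
rop = daily_demand * lead_time_days + safety_stock

print(f"EOQ = {eoq:.0f} items per order, reorder point = {rop:.0f} items")
```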
Procedia PDF Downloads 144
23300 Mapping Tunnelling Parameters for Global Optimization in Big Data via Dye Laser Simulation
Authors: Sahil Imtiyaz
Abstract:
One of the biggest challenges has emerged from the ever-expanding, dynamic, and instantaneously changing space of Big Data; to locate a data point and derive wisdom from this space is a hard task. In this paper, we reduce the space of big data to a Hamiltonian formalism that is in concordance with the Ising model. For this formulation, we simulate the system using a dye laser in FORTRAN and analyse the dynamics of the data point in the energy well of the rhodium atom. After mapping the photon intensity and pulse width to energy and potential, we conclude that as the energy increases, the probability of tunnelling also increases up to a point, then starts decreasing, and then shows randomizing behaviour. This is due to decoherence with the environment and hence a loss of 'quantumness'. This is interpreted in terms of the efficiency parameter and the extent of quantum evolution. The results strongly encourage the use of the 'topological property' as a source of information instead of the qubit.
Keywords: big data, optimization, quantum evolution, Hamiltonian, dye laser, fermionic computations
Procedia PDF Downloads 194
23299 First-Person Point of View in Contrast to Globalisation in Somerset Maugham’s ‘Mr. Know-All’
Authors: Armel Mbon
Abstract:
This paper discusses the first-person point of view in Maugham's 'Mr. Know-All.' It particularly analyses the narrator's position in relation to the story told in this short story, with the intention of disclosing the narrator's prejudice against Mr. Kelada, the protagonist, and, consequently, its hindrance to globalisation. It thus underlines the fact that this protagonist and the other travellers are of different colours, but one person on this ship epitomises globalisation. The general attitude of readers is that they are inclined to easily believe the narrator while forgetting that fiction is the work of a taler, a teller, but, first and foremost, a liar. The audience, whether it is disconnected from the setting or not, also tends to forget that "travellers from afar can lie with impunity." In fact, the nameless narrator in Maugham's short story has a persona that leaves a lot to be desired. He is prejudiced against Mr. Kelada, known as Mr. Know-All, as will be evidenced by the scrutiny of his diction. This paper finally purports to show that those who proclaim globalisation loudly are not ready to live together.
Keywords: narrator, persona, point of view, diction, contrast, globalisation
Procedia PDF Downloads 92
23298 Philosophical Foundations of Education at the Kazakh Languages by Aiding Communicative Methods
Authors: Duisenova Marzhan
Abstract:
This paper considers, from a philosophical point of view, interactive technology and the tiered development of Kazakh language teaching for primary school pupils through the method of linguistic communication, as well as the content and teaching methods formed in the education system. The values are determined by the formation of new practical approaches that could lead to a new qualitative level and to solving the problem. In forming the communicative competence of elementary school students, attention should also be paid to other competencies. This helps to understand the motives and needs of students' socialization, the development of their cognitive abilities, and their participation in the language relations arising from different situations. Communicative competence is the pupils' own potential for creative language activity. In this article, the communicative method of Kazakh language teaching in primary school is presented. The purposes of learning by the communicative method include personal development, effective psychological development of the child, self-education, expansion and growth of language skills and vocabulary, socialization of children, and the adoption of the laws of life in the social environment; the development of the vocabulary richness of the language, which forms erudition, is analyzed to ensure the continued improvement of the child's education.
Keywords: communicative, culture, training, process, method, primary, competence
Procedia PDF Downloads 339
23297 Reliability Estimation of Bridge Structures with Updated Finite Element Models
Authors: Ekin Ozer
Abstract:
Assessment of structural reliability is essential for the efficient use of civil infrastructure, which is subjected to hazardous events. Dynamic analysis of finite element models is a commonly used tool to simulate structural behavior and estimate its performance accordingly. However, theoretical models purely based on preliminary assumptions and design drawings may deviate from the actual behavior of the structure. This study proposes up-to-date reliability estimation procedures that use actual bridge vibration data to modify finite element models, performing finite element model updating and then reliability estimation accordingly. The proposed method utilizes vibration response measurements of bridge structures to identify modal parameters and then uses these parameters to calibrate finite element models that are originally based on design drawings. The proposed method not only shows that reliability estimation based on updated models differs from that based on the original models but also infers that non-updated models may overestimate the structural capacity.
Keywords: earthquake engineering, engineering vibrations, reliability estimation, structural health monitoring
Procedia PDF Downloads 222
23296 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach
Authors: M. Bahari Mehrabani, Hua-Peng Chen
Abstract:
Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with transition probabilities needs to be developed on the basis of the condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate the transition probability matrices. The deterioration process of the structure related to the transition states is modelled as a Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves are then modified in order to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling
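A compact sketch of the Markov-chain step described above: condition-grade probabilities are propagated forward with an annual transition matrix, and the probability of reaching the failed grade is read off each year; the matrix entries are placeholders, not values fitted to the UK condition assessment data.

```python
# Illustrative sketch: Markov-chain propagation of condition-grade
# probabilities for a flood defence asset.  Grades 1 (as new) .. 5 (failed);
# the annual transition matrix below is a placeholder, not a fitted one.
import numpy as np

P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.88, 0.12, 0.00, 0.00],
    [0.00, 0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # failed state is absorbing
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # structure starts in grade 1

for year in range(1, 51):
    state = state @ P                          # one year of deterioration
    if year % 10 == 0:
        print(f"year {year:2d}: P(failure) = {state[-1]:.3f}")
```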
Procedia PDF Downloads 233
23295 A Packet Loss Probability Estimation Filter Using Most Recent Finite Traffic Measurements
Authors: Pyung Soo Kim, Eung Hyuk Lee, Mun Suck Jang
Abstract:
A packet loss probability (PLP) estimation filter with a finite memory structure is proposed to estimate the packet rate mean and variance of the input traffic process in real time while removing undesired system and measurement noises. The proposed PLP estimation filter is developed under a weighted least squares criterion using only the finite traffic measurements on the most recent window. The proposed PLP estimation filter is shown to have several inherent properties, such as unbiasedness, deadbeat response, and robustness. A guideline for choosing an appropriate window length is given, since the window length can significantly affect the estimation performance. Using computer simulations, the proposed PLP estimation filter is shown to be superior to the Kalman filter for the temporarily uncertain system. One possible explanation for this is that the proposed PLP estimation filter can have a greater convergence time of the filtered estimate as the window length M decreases.
Keywords: packet loss probability estimation, finite memory filter, infinite memory filter, Kalman filter
Procedia PDF Downloads 672
23294 Economic Load Dispatch with Valve-Point Loading Effect by Using Differential Evolution Immunized Ant Colony Optimization Technique
Authors: Nur Azzammudin Rahmat, Ismail Musirin, Ahmad Farid Abidin
Abstract:
Economic load dispatch is performed by the utilities in order to determine the best generation level at the most feasible operating cost. In order to guarantee satisfactory energy delivery to the consumer, a precise calculation of the generation level is required. In order to achieve an accurate and practical solution, several considerations, such as prohibited operating zones, the valve-point effect, and ramp-rate limits, need to be taken into account. However, these considerations cause the optimization to become complex and difficult to solve. This research focuses on the valve-point effect, which causes ripple in the fuel-cost curve. This paper also proposes Differential Evolution Immunized Ant Colony Optimization (DEIANT) for solving the economic load dispatch problem with the valve-point effect. Comparative studies involving DEIANT, EP, and ACO are conducted on the IEEE 30-bus RTS for performance assessment. Results indicate that DEIANT is superior to the other compared methods in terms of achieving lower operating cost and power loss.
Keywords: ant colony optimization (ACO), differential evolution (DE), differential evolution immunized ant colony optimization (DEIANT), economic load dispatch (ELD)
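For illustration, the sketch below evaluates a single unit's fuel-cost function with the valve-point loading term (the rectified-sine ripple that makes the dispatch problem non-smooth); the cost coefficients and generation limits are assumed example values, not those of the IEEE 30-bus system.

```python
# Illustrative sketch: fuel-cost function of one generating unit with the
# valve-point loading effect (the rectified-sine ripple term).  The
# coefficients a, b, c, e, f and the limits are assumed example values.
import numpy as np

a, b, c = 150.0, 2.0, 0.0016        # quadratic cost coefficients
e, f = 50.0, 0.063                  # valve-point ripple magnitude and frequency
p_min, p_max = 10.0, 125.0          # generation limits (MW)

def fuel_cost(p):
    """Cost with valve-point effect: a + b*P + c*P^2 + |e*sin(f*(Pmin - P))|."""
    return a + b * p + c * p**2 + np.abs(e * np.sin(f * (p_min - p)))

# The ripple makes the curve non-convex, which is why heuristic optimisers
# such as the proposed DEIANT are used instead of purely gradient-based methods.
for p in np.linspace(p_min, p_max, 5):
    print(f"P = {p:6.1f} MW -> cost = {fuel_cost(p):8.2f} $/h")
```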
Procedia PDF Downloads 446
23293 Armenian Refugees in Early 20th C Japan: Quantitative Analysis on Their Number Based on Japanese Historical Data with the Comparison of a Foreign Historical Data
Authors: Meline Mesropyan
Abstract:
At the beginning of the 20th century, Japan served as a transit point for Armenian refugees fleeing the 1915 Genocide. However, research on Armenian refugees in Japan is sparse, and the Armenian Diaspora has never taken root in Japan. Consequently, Japan has not been considered a relevant research site for studying Armenian refugees. The primary objective of this study is to shed light on the number of Armenian refugees who passed through Japan between 1915 and 1930. Quantitative analyses will be conducted based on newly uncovered Japanese archival documents. Subsequently, the Japanese data will be compared to American immigration data to estimate the potential number of refugees in Japan during that period. This under-researched area is relevant to both the Armenian Diaspora and refugee studies in Japan. By clarifying the number of refugees, this study aims to enhance understanding of Japan's treatment of refugees and the extent of humanitarian efforts conducted by organizations and individuals in Japan, contributing to the broader field of historical refugee studies.
Keywords: Armenian genocide, Armenian refugees, Japanese statistics, number of refugees
Procedia PDF Downloads 57
23292 Physical-Mechanical Characteristics of Monocrystalline Si1-xGex (x ≤ 0.02) Solid Solutions
Authors: I. Kurashvili, A. Sichinava, G. Bokuchava, G. Darsavelidze
Abstract:
Si-Ge solid solutions (bulk poly- and monocrystalline samples, thin films) offer high potential for application in semiconductor devices, in particular in optoelectronics and microelectronics. In this light, a comprehensive study of the structural state of defects and of the structure-sensitive physical properties of Si-Ge solid solutions, depending on the content of the Si and Ge components, is very important. The present work deals with investigations of the microstructure, electrophysical characteristics, microhardness, internal friction, and shear modulus of Si1-xGex (x ≤ 0.02) bulk monocrystals conducted at room temperature. The Si-Ge bulk crystals were obtained by the Czochralski method in the [111] crystallographic direction. The investigated monocrystalline Si-Ge samples are characterized by p-type conductivity with a carrier concentration of 5×10^14 to 1×10^15 cm^-3, a dislocation density of 5×10^3 to 1×10^4 cm^-2, and a Vickers microhardness of 900-1200 kg/mm2. The investigated samples have dimensions of 0.5 x 0.5 x (10-15) mm3 and are oriented along the [111] direction. At torsion oscillations of ≈1 Hz, multistage changes of internal friction and shear modulus have been revealed in a strain amplitude interval of 10^-5 to 5×10^-3. The critical values of strain amplitude at which hysteretic changes of inelastic characteristics and microplasticity are observed have been determined. The critical strain amplitude and elasticity limit values are also determined. A tendency of the dynamic mechanical characteristics to decrease with increasing Ge content in the Si-Ge solid solutions is shown. The observed changes are discussed from the point of view of the interaction of various dislocations with point defects and their complexes in the real structure of the Si-Ge solid solutions.
Keywords: microhardness, internal friction, shear modulus, monocrystalline
Procedia PDF Downloads 352
23291 Application of Post-Stack and Pre-Stack Seismic Inversion for Prediction of Hydrocarbon Reservoirs in a Persian Gulf Gas Field
Authors: Nastaran Moosavi, Mohammad Mokhtari
Abstract:
Seismic inversion is a technique that has been in use for years, and its main goal is to estimate and model the physical characteristics of rocks and fluids. Generally, it is a combination of seismic and well-log data. Seismic inversion can be carried out through different methods; we have conducted and compared post-stack and pre-stack seismic inversion methods on real data from one of the fields in the Persian Gulf. Pre-stack seismic inversion can transform seismic data into rock physics properties such as P-impedance, S-impedance, and density, while post-stack seismic inversion can only estimate P-impedance. These parameters can then be used in reservoir identification. Based on the results of inverting the seismic data, a gas reservoir was detected in one of the hydrocarbon fields in the south of Iran (Persian Gulf). By comparing post-stack and pre-stack seismic inversion, it can be concluded that pre-stack seismic inversion provides more reliable and detailed information for the identification and prediction of hydrocarbon reservoirs.
Keywords: density, p-impedance, s-impedance, post-stack seismic inversion, pre-stack seismic inversion
Procedia PDF Downloads 323
23290 Fish Oil and Its Methyl Ester as an Alternate Fuel in the Direct Injection Diesel Engine
Authors: Pavan Pujar
Abstract:
Mackerel fish oil was used as the raw material to produce biodiesel in this study. The raw oil (RO) was collected from discarded fish products. This oil was filtered and heated to 110°C to make it moisture-free. The filtered and moisture-free RO was transesterified to produce biodiesel. The experimental results showed that oleic acid and lauric acid were the two major components of the fish oil biodiesel (FOB). Palmitic acid and linoleic acid were found in approximately the same quantity. The fuel properties (kinematic viscosity, flash point, fire point, specific gravity, calorific value, cetane number, density, acid value, saponification value, iodine value, cloud point, pour point, ash content, Cu strip corrosion, carbon residue, and API gravity) were determined for FOB. A comparative study of the properties was carried out with RO and neat diesel (ND). It was found that the cetane number of FOB was 59, higher than that of RO, which was 57. Blends of FOB and ND (B20, B40, B60, B80; for example, B20 is 20% FOB + 80% ND) were prepared on a volume basis, and a comparative study was carried out with ND and FOB. The performance parameters BSFE, BSEC, A:F ratio, and brake thermal efficiency were analyzed, and it was found that complete replacement of neat diesel is possible without any engine modifications.
Keywords: fish oil biodiesel, raw oil, blends, performance parameters
Procedia PDF Downloads 413
23289 Estimation of Atmospheric Parameters for Weather Study and Forecast over Equatorial Regions Using Ground-Based Global Positioning System
Authors: Asmamaw Yehun, Tsegaye Kassa, Addisu Hunegnaw, Martin Vermeer
Abstract:
There are various models to estimate the neutral atmospheric parameter values, such as in-situ and reanalysis datasets from numerical models. Accurately estimated values of the atmospheric parameters are useful for weather forecasting, climate modeling, and monitoring of climate change. Recently, Global Navigation Satellite System (GNSS) measurements have been applied to atmospheric sounding due to their robust data quality and wide horizontal and vertical coverage. Global Positioning System (GPS) solutions that include tropospheric parameters constitute a reliable set of data to be assimilated into climate models. The objective of this paper is to estimate the neutral atmospheric parameters Wet Zenith Delay (WZD), Precipitable Water Vapour (PWV), and Total Zenith Delay (TZD) using six selected GPS stations in the equatorial regions, more precisely the Ethiopian GPS stations, from observational data for 2012 to 2015. Based on the historical GPS-derived estimates of PWV, we forecast the PWV from 2015 to 2030. During data processing and analysis, we applied the GAMIT-GLOBK software packages to estimate the atmospheric parameters. As a result, we found that the annual averaged PWV ranges from a minimum of 9.72 mm for the IISC station to a maximum of 50.37 mm for the BJCO station. The annual averaged WZD ranges from a minimum of 6 cm for IISC to a maximum of 31 cm for the BDMT station. In the long series of observations (from 2012 to 2015), we also found a trend and cyclic patterns in WZD, PWV, and TZD for all stations.
Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour
Procedia PDF Downloads 61
23288 Development of Enhanced Data Encryption Standard
Authors: Benjamin Okike
Abstract:
There is a need to hide information along the superhighway. Today, information relating to the survival of individuals, organizations, or government agencies is transmitted from one point to another. Adversaries are always on the watch along the superhighway to intercept any information that would enable them to inflict psychological 'injuries' on their victims. But with information encryption, this can be prevented completely or at worst reduced to the barest minimum. There is no doubt that many encryption techniques have been proposed, and some of them are already being implemented. However, adversaries always discover loopholes in them to perpetrate their evil plans. In this work, we propose the enhanced data encryption standard (EDES), which deploys randomly generated numbers as the encryption method. Each time encryption is carried out, a new set of random numbers is generated, thereby making it almost impossible for cryptanalysts to decrypt any information encrypted with this newly proposed method.
Keywords: encryption, enhanced data encryption, encryption techniques, information security
Procedia PDF Downloads 150
23287 Direct CP Violation in Baryonic B-Hadron Decays
Authors: C. Q. Geng, Y. K. Hsiao
Abstract:
We study direct CP-violating asymmetries (CPAs) in the baryonic B decays B- → p\bar{p}M and the Λb decays Λb → pM and Λb → J/ΨpM with M = π-, K-, ρ-, K*-, based on the generalized factorization method in the standard model (SM). In particular, we show that the CPAs in the vector modes B- → p\bar{p}K*- and Λb → pK*- can be as large as 20%. We also discuss the simplest purely baryonic decays Λb → p\bar{p}n, p\bar{p}Λ, Λ\bar{p}Λ, and Λ\bar{Λ}Λ. We point out that some of the CPAs are promising candidates to be measured at current as well as future B facilities.
Keywords: CP violation, B decays, baryonic decays, Λb decays
Procedia PDF Downloads 255
23286 Improving the Liquid Insulation Performance with Antioxidants
Authors: Helan Gethse J., Dhanya K., Muthuselvi G., Diana Hyden N., Samuel Pakianathan P.
Abstract:
Transformer oil is mostly used to keep the transformer cool; it functions as a cooling agent. Mineral oil has long been used in transformers because of its high dielectric strength, which allows it to withstand high temperatures. Mineral oil's main disadvantage is that it is not environmentally friendly and can be dangerous to the environment. In this study, the breakdown voltage (BDV), viscosity, flash point, and fire point are measured and reported, and the characteristics of olive oil are compared to those of mineral oil.
Keywords: antioxidants, transformer oil, mineral oil, olive oil
Procedia PDF Downloads 150
23285 Online Learning for Modern Business Models: Theoretical Considerations and Algorithms
Authors: Marian Sorin Ionescu, Olivia Negoita, Cosmin Dobrin
Abstract:
This scientific communication reports and discusses learning models adaptable to modern business problems and models specific to digital concepts and paradigms. In the PAC (probably approximately correct) learning model approach, the learning process begins by receiving a batch of learning examples; this set of examples is used to acquire a hypothesis, and once the learning phase is completed, the hypothesis is used to predict new operational examples. For complex business models, many models should be introduced and evaluated to estimate the induced results, so that the totality of the results is used to develop a predictive rule, which anticipates the choice of new models. In contrast, for online learning-type processes, there is no separation between the learning (training) and prediction phases. Every time a business model is approached, a test example is considered, from the beginning up to the prediction of a model considered correct from the point of view of the business decision. After choosing a part of the business model, the label with the logical value "true" becomes known. Some of the business models are used as examples of learning (training), which helps to improve the prediction mechanisms for future business models.
Keywords: machine learning, business models, convex analysis, online learning
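As a concrete illustration of the online protocol described above (predict each incoming example before its label is revealed, then update immediately), here is a minimal online perceptron on synthetic data; it is a generic sketch, not the business models discussed in the paper.

```python
# Illustrative sketch of the online-learning protocol: the model must predict
# each incoming example *before* its true label is revealed, then updates
# immediately.  A simple online perceptron on synthetic data is used here.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.5, -2.0, 0.5])   # hidden rule generating the true labels

w = np.zeros(3)                        # online learner's weight vector
mistakes = 0
for t in range(1, 1001):
    x = rng.normal(size=3)             # a new example arrives
    y_hat = 1 if w @ x >= 0 else -1    # predict before the label is known
    y = 1 if w_true @ x >= 0 else -1   # true label is then revealed
    if y_hat != y:                     # perceptron update on a mistake
        w += y * x
        mistakes += 1
    if t % 250 == 0:
        print(f"round {t}: cumulative mistakes = {mistakes}")
```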
Procedia PDF Downloads 140
23284 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation
Authors: Li-hsing Shih, Wei-Jen Hsu
Abstract:
Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology and decreasing costs. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, and five product attributes of both EVs and internal combustion engine vehicles (ICEVs) are selected. A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of the two vehicles are calculated for a respondent, which then determines his/her choice. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established, and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions. For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation
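A minimal sketch of phase (3): Monte Carlo estimation of the EV market share from respondents' part-worth utilities and a randomly drawn future EV price; the part-worths, attribute levels, and price distribution below are assumed placeholders, not the survey or learning-curve results.

```python
# Illustrative sketch of phase (3): Monte Carlo estimation of the EV market
# share from part-worth utilities and a random future EV price.
# Part-worths, attribute levels, and the price distribution are all assumed.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_draws = 300, 1000

# Assumed part-worths per respondent: price sensitivity and intrinsic EV preference
beta_price = rng.normal(-1.0, 0.3, n_respondents)   # utility per (normalised) price unit
beta_ev = rng.normal(0.5, 1.0, n_respondents)       # preference for the EV over the ICEV

icev_price = 1.0                                     # normalised ICEV price
ev_price_2030 = rng.normal(0.9, 0.15, n_draws)       # price draws from a learning-curve fit

shares = np.empty(n_draws)
for k in range(n_draws):
    u_ev = beta_ev + beta_price * ev_price_2030[k]   # total utility of the EV
    u_icev = beta_price * icev_price                 # total utility of the ICEV
    shares[k] = np.mean(u_ev > u_icev)               # each respondent buys the max-utility car

print(f"EV market share in 2030: mean {shares.mean():.2f}, "
      f"90% interval [{np.quantile(shares, 0.05):.2f}, {np.quantile(shares, 0.95):.2f}]")
```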
Procedia PDF Downloads 69
23283 A Comparative Study of the Maximum Power Point Tracking Methods for PV Systems Using Boost Converter
Authors: M. Doumi, A. Miloudi, A.G. Aissaoui, K. Tahir, C. Belfedal, S. Tahir
Abstract:
Studies on photovoltaic systems are increasing extensively because they offer a large, secure, essentially inexhaustible, and broadly available resource as a future energy supply. However, the output power induced in the photovoltaic modules is influenced by the intensity of solar radiation, the temperature of the solar cells, and so on. Therefore, to maximize the efficiency of the photovoltaic system, it is necessary to track the maximum power point of the PV array; for this, the Maximum Power Point Tracking (MPPT) technique is used. The algorithms considered are based on the Perturb-and-Observe, Incremental Conductance, and Fuzzy Logic methods. These techniques vary in many aspects, such as simplicity, convergence speed, digital or analog implementation, sensors required, cost, and range of effectiveness. This paper presents a comparative study of three widely adopted MPPT algorithms; their performance is evaluated from the energy point of view using the simulation tool Simulink®, considering different solar irradiance variations. MPPT using fuzzy logic shows superior performance and more reliable control than the other methods for this application.
Keywords: photovoltaic system, MPPT, perturb and observe (P&O), incremental conductance (INC), Fuzzy Logic (FLC)
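To show the logic of the simplest of the three algorithms, here is a minimal Perturb-and-Observe loop against a toy single-peak PV power curve; the panel model is a deliberately simplified placeholder, not the Simulink model used in the study.

```python
# Minimal sketch of the Perturb-and-Observe (P&O) MPPT loop: perturb the
# operating voltage, observe the change in power, and keep moving in the
# direction that increases power.  The PV power curve used here is a toy
# single-peak model, not a full panel/boost-converter simulation.
import numpy as np

def pv_power(v):
    """Toy PV curve with a single maximum power point (roughly around 31 V)."""
    return np.maximum(0.0, 8.0 * v * (1.0 - np.exp((v - 40.0) / 4.0)))

v, step = 20.0, 0.5                 # initial operating voltage and perturbation size
p_prev = pv_power(v)
for _ in range(100):
    v += step                       # perturb the operating point
    p = pv_power(v)
    if p < p_prev:                  # power dropped: reverse the perturbation direction
        step = -step
    p_prev = p

print(f"settled near V = {v:.1f} V, P = {p_prev:.1f} W")
```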
Procedia PDF Downloads 411