Search results for: linear programming
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4023

2793 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) models. This paper proposes an improvement in the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. The previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes the deficiencies of the previous LBM-CA models and, therefore, can better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate the particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
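
A minimal sketch of the CA-style probabilistic streaming step described above, assuming a redistribution of particle counts to D3Q27 neighbours weighted by the local velocity (the weighting scheme, names, and values here are illustrative assumptions, not the authors' code):

```python
# Illustrative sketch (not the paper's implementation): probabilistic particle
# streaming on a D3Q27 lattice. Particle counts at one node are scattered to
# its 27 neighbour links with probabilities that grow with the projection of
# the local fluid velocity (plus any force-induced drift) on each direction.
import numpy as np

rng = np.random.default_rng(0)

# The 27 lattice directions of D3Q27: all (dx, dy, dz) in {-1, 0, 1}^3.
E = np.array([(dx, dy, dz) for dx in (-1, 0, 1)
                           for dy in (-1, 0, 1)
                           for dz in (-1, 0, 1)])

def stream_particles(n_particles, u_local):
    """Scatter n_particles from one node along the D3Q27 directions."""
    w = np.exp(E @ u_local)                 # assumed weighting: favours downstream links
    p = w / w.sum()                         # normalise to a probability distribution
    return rng.multinomial(n_particles, p)  # particle counts sent to each neighbour

counts = stream_particles(1000, u_local=np.array([0.1, 0.0, -0.05]))
```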

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 189
2792 Development of a Direct Immunoassay for Human Ferritin Using Diffraction-Based Sensing Method

Authors: Joel Ballesteros, Harriet Jane Caleja, Florian Del Mundo, Cherrie Pascual

Abstract:

Diffraction-based sensing was utilized in the quantification of human ferritin in blood serum to provide an alternative to label-based immunoassays currently used in clinical diagnostics and research. The diffraction intensity was measured by the diffractive optics technology or dotLab™ system. Two methods were evaluated in this study: direct immunoassay and direct sandwich immunoassay. In the direct immunoassay, human ferritin was captured by human ferritin antibodies immobilized on an avidin-coated sensor, while the direct sandwich immunoassay had an additional step for the binding of a detector human ferritin antibody on the analyte complex. Both methods were repeatable, with coefficient of variation values below 15%. The direct sandwich immunoassay had a linear response from 10 to 500 ng/mL, which is wider than the 100-500 ng/mL of the direct immunoassay. The direct sandwich immunoassay also has a higher calibration sensitivity, with a value of 0.002 Diffractive Intensity (ng mL-1)-1 compared to the 0.004 Diffractive Intensity (ng mL-1)-1 of the direct immunoassay. The limit of detection and limit of quantification values of the direct immunoassay were found to be 29 ng/mL and 98 ng/mL, respectively, while the direct sandwich immunoassay has a limit of detection (LOD) of 2.5 ng/mL and a limit of quantification (LOQ) of 8.2 ng/mL. In terms of accuracy, the direct immunoassay had a percent recovery of 88.8-93.0% in PBS, while the direct sandwich immunoassay had 94.1 to 97.2%. Based on the results, the direct sandwich immunoassay is a better diffraction-based immunoassay in terms of accuracy, LOD, LOQ, linear range, and sensitivity. The direct sandwich immunoassay was utilized in the determination of human ferritin in blood serum, and the results were validated by Chemiluminescent Magnetic Immunoassay (CMIA). The calculated Pearson correlation coefficient was 0.995 and the p-values of the paired-sample t-test were less than 0.5, which show that the results of the direct sandwich immunoassay were comparable to those of CMIA, and that it could be utilized as an alternative analytical method.
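
LOD and LOQ figures of this kind are commonly obtained from a linear calibration via the 3.3σ/S and 10σ/S convention; a short sketch under that assumption (the abstract does not state which convention was used, and the data values below are placeholders):

```python
# Sketch: LOD/LOQ from a linear calibration curve using the common
# LOD = 3.3*sigma/slope and LOQ = 10*sigma/slope estimates.
import numpy as np

conc = np.array([10, 50, 100, 250, 500])           # ng/mL (hypothetical points)
signal = np.array([0.03, 0.11, 0.21, 0.52, 1.02])  # diffractive intensity

slope, intercept = np.polyfit(conc, signal, 1)     # least-squares line
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                      # sd of regression residuals

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD ~ {lod:.1f} ng/mL, LOQ ~ {loq:.1f} ng/mL")
```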

Keywords: biosensor, diffraction, ferritin, immunoassay

Procedia PDF Downloads 335
2791 A Design System for Complex Profiles of Machine Members Using a Synthetic Curve

Authors: N. Sateesh, C. S. P. Rao, K. Satyanarayana, C. Rajashekar

Abstract:

This paper proposes the development of a CAD/CAM system for complex profiles of various machine members using a synthetic curve, i.e., the B-spline. Conventional methods of designing and manufacturing complex profiles are tedious and time-consuming, and even programming them on a computer numerical control (CNC) machine can be difficult because of the complexity of the profiles. The developed system provides a graphical and numerical representation of the B-spline profile for any given input. In this paper, the system is applied to represent a cam profile with a B-spline, and an attempt is made to improve the follower motion.
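
A minimal sketch of representing a cam-like profile with a cubic B-spline, the synthetic curve this system is built around; the knot vector and control points below are illustrative, not from the paper:

```python
# Sketch: evaluate a clamped cubic B-spline through hypothetical
# (parameter, radius)-style control points, as a CAD/CAM system might
# sample it for CNC tool paths.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
ctrl = np.array([[0.0, 30.0], [1.0, 34.0], [2.0, 42.0],
                 [3.0, 42.0], [4.0, 33.0], [5.0, 30.0]])  # invented control points

# Clamped knot vector: end knots repeated degree+1 times.
n = len(ctrl)
knots = np.concatenate(([0.0] * (degree + 1),
                        np.linspace(0.0, 1.0, n - degree + 1)[1:-1],
                        [1.0] * (degree + 1)))

spline = BSpline(knots, ctrl, degree)
t = np.linspace(0.0, 1.0, 200)
profile = spline(t)          # smooth samples of the profile curve
```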

Keywords: plate-cams, cam profile, b-spline, computer numerical control (CNC), computer aided design and computer aided manufacturing (CAD/CAM), R-D-R-D (rise-dwell-return-dwell)

Procedia PDF Downloads 592
2790 Weyl Type Theorem and the Fuglede Property

Authors: M. H. M. Rashid

Abstract:

Given a Hilbert space H and B(H), the algebra of bounded linear operators on H, let δAB denote the generalized derivation defined by A and B. The main objective of this article is to study Weyl type theorems for the generalized derivation associated with pairs (A,B) satisfying the Fuglede property.
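
For reference, the standard definitions behind the abstract (in conventional notation; not quoted verbatim from the paper):

```latex
% The generalized derivation induced by A, B in B(H), and the
% Fuglede--Putnam-type property for the pair (A, B).
\[
  \delta_{A,B}(X) = AX - XB, \qquad X \in B(H),
\]
\[
  (A,B) \text{ has the Fuglede property if } AX = XB \implies A^{*}X = XB^{*}.
\]
```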

Keywords: Fuglede Property, Weyl’s theorem, generalized derivation, Aluthge transform

Procedia PDF Downloads 118
2789 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite

Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov

Abstract:

The paper presents the results of an experimental study of the effect of reverse mechanical load-unloading on the mechanical, linear, and nonlinear elastic properties of n-AMg6/C60 nanocomposite. Samples for the experimental studies were obtained by grinding polycrystalline AMg6 alloy with 0.3 wt% of C60 fullerite in a planetary mill in an argon atmosphere. The resulting product consisted of 200-500-micron agglomerates of nanoparticles. The X-ray coherent scattering (CSL) method showed that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature. The C60 fullerite modifications interfere with the process of recrystallization at grain boundaries. In the samples of n-AMg6/C60 nanocomposite, the load curve was measured: the dependence of the mechanical stress σ on the strain ε of the sample under a multi-cycle load-unloading process until failure. The hysteresis dependence σ = σ(ε) was observed, and insignificant residual strains (ε < 0.005) were recorded. At σ≈500 MPa and ε≈0.025, the sample failed; the failure was brittle. Microhardness was measured before and after failure of the sample, and it was found that the loading-unloading process led to an increase in microhardness. The effect of the reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP SYSTEM. In the n-AMg6/C60 nanocomposite, the velocities of the longitudinal and shear bulk waves were measured with the pulse method, and all the second-order elasticity coefficients and their dependence on the magnitude of the reversible mechanical stress applied to the sample were calculated. Studies of the nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reversible load-unloading of the sample were carried out with the spectral method. At arbitrary values of the strain of the sample (up to failure), the dependence of the amplitude of the second longitudinal acoustic harmonic at a frequency of 2f = 10 MHz on the amplitude of the first harmonic at a frequency f = 5 MHz was measured. Based on the results of these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The obtained results can be used in solid-state physics and materials science, and for the development of new techniques for nondestructive testing of structural materials using methods of nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project №14-22-00042).
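
A commonly used relation in second-harmonic ultrasonic measurements of this kind is the following (the abstract does not state the paper's exact definition, so this standard form is given only for orientation):

```latex
% Quadratic nonlinearity parameter beta from the first- and second-harmonic
% amplitudes A1, A2, after propagation distance x with wavenumber k.
\[
  \beta = \frac{8\,A_{2}}{k^{2}\,x\,A_{1}^{2}}
\]
```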

Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis

Procedia PDF Downloads 133
2788 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites to a monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) variations of the adsorption sites.
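
The two constant-parameter isotherms that the pressure-varying method extends are, in their standard forms (n is uptake, n_m the monolayer capacity, K the Langmuir equilibrium constant, c the BET constant, and x = P/P0 the relative pressure):

```latex
\[
  \text{Langmuir:}\quad n = n_{m}\,\frac{KP}{1 + KP}
\]
\[
  \text{BET:}\quad \frac{n}{n_{m}} = \frac{c\,x}{(1 - x)\,(1 - x + c\,x)}
\]
```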

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 216
2787 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions

Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju

Abstract:

Terrorist attacks in liberal democracies bring about several negative results: undermined public support for the governments they target, disturbance of the peace of a protected environment underwritten by the state, and restriction of individuals from contributing to the advancement of the country, among others. Hence, seeking techniques to understand the different factors involved in terrorism, and how to deal with those factors in order to completely stop or reduce terrorist activities, is the topmost priority of the government in every country. The aim of this research is to develop an efficient deep learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy of the existing solution methods. The proposed predictive AI-based model, as a counterterrorism tool, will be useful to governments and law enforcement agencies to protect the lives of individuals in society and to improve the quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with a Gaussian error normal distribution. Three primary transfer functions (HOTTFs), as well as two derived transfer functions (HETTFs) arising from the convolution of the HOTTFs, were used, namely: the Symmetric Saturated Linear transfer function (SATLINS), the Hyperbolic Tangent transfer function (TANH), the Hyperbolic Tangent Sigmoid transfer function (TANSIG), the Symmetric Saturated Linear and Hyperbolic Tangent transfer function (SATLINS-TANH), and the Symmetric Saturated Linear and Hyperbolic Tangent Sigmoid transfer function (SATLINS-TANSIG). Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean Square Error (MSE), Mean Absolute Error (MAE) and Test Error are the forecast prediction criteria. The results showed that the HETTFs performed better in terms of prediction, and the factors associated with terrorist activities in Nigeria were determined. The proposed predictive deep learning-based model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism to understand the parameters of terrorism and to design strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. The proposed predictive AI-based model will reduce the chances of terrorist activities and is particularly helpful for security agencies to predict future terrorist activities.
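
A sketch of the named transfer functions in their usual (MATLAB-style) definitions; how the derived pairs are actually convolved inside the HETBNN is not specified in the abstract, so the compositions below are illustrative assumptions:

```python
# Sketch: the primary transfer functions and assumed derived combinations.
import numpy as np

def satlins(x):                    # symmetric saturating linear
    return np.clip(x, -1.0, 1.0)

def tanh_tf(x):                    # hyperbolic tangent
    return np.tanh(x)

def tansig(x):                     # hyperbolic tangent sigmoid
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def satlins_tanh(x):               # assumed composition for SATLINS-TANH
    return satlins(tanh_tf(x))

def satlins_tansig(x):             # assumed composition for SATLINS-TANSIG
    return satlins(tansig(x))
```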

Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism

Procedia PDF Downloads 148
2786 Planktivorous Fish Schooling Responses to Current at Natural and Artificial Reefs

Authors: Matthew Holland, Jason Everett, Martin Cox, Iain Suthers

Abstract:

High spatial-resolution distribution of planktivorous reef fish can reveal behavioural adaptations to optimise the balance between feeding success and predator avoidance. We used a multi-beam echosounder to record bathymetry and the three-dimensional distribution of fish schools associated with natural and artificial reefs. We utilised generalised linear models to assess the distribution, orientation, and aggregation of fish schools relative to the structure, vertical relief, and currents. At artificial reefs, fish schooled more closely to the structure and demonstrated a preference for the windward side, particularly when exposed to strong currents. Similarly, at natural reefs fish demonstrated a preference for windward aspects of bathymetry, particularly when associated with high vertical relief. Our findings suggest that under conditions with stronger current velocity, fish can exercise their preference to remain close to structure for predator avoidance, while still receiving an adequate supply of zooplankton delivered by the current. Similarly, when current velocity is low, fish tend to disperse for better access to zooplankton. As artificial reefs are generally deployed with the goal of creating productivity rather than simply attracting fish from elsewhere, we advise that future artificial reefs be designed as semi-linear arrays perpendicular to the prevailing current, with multiple tall towers. This will facilitate the conversion of dispersed zooplankton into energy for higher trophic levels, enhancing reef productivity and fisheries.
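
A hypothetical sketch of the kind of generalised linear model used to relate school placement to structure and current; the column names, placeholder values, and Gaussian family below are assumptions, not taken from the study:

```python
# Sketch: GLM of distance-to-structure against current speed and relief.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "dist_to_structure": [2.1, 5.3, 1.2, 8.4],      # m (placeholder values)
    "current_speed":     [0.35, 0.10, 0.60, 0.05],  # m/s
    "vertical_relief":   [4.0, 1.5, 6.2, 0.8],      # m
})

model = smf.glm("dist_to_structure ~ current_speed + vertical_relief",
                data=df, family=sm.families.Gaussian()).fit()
print(model.summary())
```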

Keywords: artificial reef, current, forage fish, multi-beam, planktivorous fish, reef fish, schooling

Procedia PDF Downloads 141
2785 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g. PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we also observe in Large Language Models. Notable attempts such as TransE, TransR and other prominent industry standards have shown a peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we report the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
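
For orientation, the translational scoring idea behind the TransE baseline mentioned above: a triple (head, relation, tail) is plausible when h + r lies close to t in the embedding space (the toy vectors below are placeholders):

```python
# Sketch: TransE-style plausibility score for a knowledge-graph triple.
import numpy as np

def transe_score(h, r, t):
    """Negative L2 distance; higher means a more plausible triple."""
    return -np.linalg.norm(h + r - t)

h = np.array([0.2, 0.1, -0.4])   # head-entity embedding (toy)
r = np.array([0.1, 0.3,  0.2])   # relation embedding (toy)
t = np.array([0.3, 0.4, -0.2])   # tail-entity embedding (toy)
print(transe_score(h, r, t))
```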

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 54
2784 Nonlinear Finite Element Analysis of Optimally Designed Steel Angelina™ Beams

Authors: Ferhat Erdal, Osman Tunca, Serkan Tas, Serdar Carbas

Abstract:

Web-expanded steel beams provide an easy and economical solution for systems with longer structural members. The main goal of manufacturing these beams is to increase the moment of inertia and section modulus, which results in greater strength and rigidity. Until recently, there were two common types of open web-expanded beams: beams with hexagonal openings, also called castellated beams, and beams with circular openings, referred to as cellular beams; these have now been joined by a new generation of sinusoidal web-expanded beams. In the present research, the optimum design of this new generation of beams, namely sinusoidal web-expanded beams, is carried out, and the design results are compared with castellated and cellular beam solutions. Thanks to a reduced fabrication process and substantial material savings, the web-expanded beam with sinusoidal holes (Angelina™ Beam) meets the economic requirements of steel design problems while ensuring optimum safety. The objective of this research is to carry out a non-linear finite element analysis (FEA) of the web-expanded beam with sinusoidal holes. The FE method has been used to predict the entire response of these beams to increasing values of external loading until they lose their load-carrying capacity. An FE model of each specimen utilized in the experimental studies is built. These models are used to simulate the experimental work, to verify the test results, and to investigate the non-linear behavior of failure modes such as web-post buckling, shear buckling and Vierendeel bending of beams.

Keywords: steel structures, web-expanded beams, angelina beam, optimum design, failure modes, finite element analysis

Procedia PDF Downloads 265
2783 Intelligent Control of Bioprocesses: A Software Application

Authors: Mihai Caramihai, Dan Vasilescu

Abstract:

The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen, and agitation controllers. The operating parameters were 37 °C, 1.2 atm, 250 rpm and an air flow rate of 15 L/min. The main objective of this paper is to present a case study to demonstrate that intelligent control, describing the complexity of the biological process in a qualitative and subjective manner as perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. The specific objective is to present a fuzzy control approach based on human expert rules vs. a modeling approach of cell growth based on experimental bioprocess data. Kinetic modeling can represent only a small number of bioprocesses in terms of overall biosystem behavior, while a fuzzy control system (FCS) can handle incomplete and uncertain information about the process, assuring high control performance, and provides an alternative to non-linear control as it is closer to the real world. Due to the high degree of non-linearity and time variance of bioprocesses, the need for a control mechanism arises. BIOSIM, an originally developed software package, implements such a control structure. The simulation study has shown that the fuzzy technique is quite appropriate for this non-linear, time-varying system vs. the classical control method based on an a priori model.
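
A minimal sketch of the fuzzy-control idea: triangular memberships over a measured variable and a small rule base mapped to a control action. The membership breakpoints, rule outputs, and the feed-rate framing are invented for illustration; BIOSIM's actual rule base is not given in the abstract:

```python
# Sketch: tiny Mamdani-style fuzzy controller with centroid-like defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def feed_rate(biomass):
    # Fuzzify the measurement into three linguistic labels (assumed ranges).
    low  = tri(biomass,  0.0,  5.0, 10.0)
    ok   = tri(biomass,  5.0, 10.0, 15.0)
    high = tri(biomass, 10.0, 15.0, 20.0)
    # Assumed rules: low biomass -> feed more, high biomass -> feed less.
    actions = np.array([1.0, 0.5, 0.1])          # L/min per rule
    weights = np.array([low, ok, high])
    return (weights * actions).sum() / weights.sum()

print(feed_rate(7.5))   # blended action for a mid-range measurement
```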

Keywords: intelligent, control, fuzzy model, bioprocess optimization

Procedia PDF Downloads 304
2782 From Two-Way to Multi-Way: A Comparative Study for Map-Reduce Join Algorithms

Authors: Marwa Hussien Mohamed, Mohamed Helmy Khafagy

Abstract:

Map-Reduce is a programming model which is widely used to extract valuable information from enormous volumes of data. Map-Reduce is designed to support heterogeneous datasets, and Apache Hadoop map-reduce is used extensively to uncover hidden patterns in workloads like data mining, SQL, etc. The most important operation for data analysis is the join operation. However, the map-reduce framework does not directly support join algorithms. This paper explains and compares two-way and multi-way map-reduce join algorithms; we also implement the MR join algorithms and show the performance of each phase. Our experimental results show that, among the two-way join algorithms, map-side join and map-merge join take the longest time, owing to the preprocessing step of sorting the data, while among the multi-way join algorithms, the reduce-side cascade join takes the longest time.
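
A sketch of the reduce-side (repartition) join pattern compared in the paper: mappers tag each record with its source table, the shuffle groups records by join key, and the reducer pairs records across sources. Plain Python stands in for Hadoop here, and the table names and fields are illustrative:

```python
# Sketch: reduce-side join simulated outside Hadoop.
from collections import defaultdict
from itertools import product

orders = [("u1", "order42"), ("u2", "order77")]
users  = [("u1", "Alice"), ("u2", "Bob")]

# Map phase: emit (join_key, (source_tag, payload)).
mapped = [(k, ("orders", v)) for k, v in orders] + \
         [(k, ("users", v)) for k, v in users]

# Shuffle: group tagged values by key.
groups = defaultdict(list)
for k, tagged in mapped:
    groups[k].append(tagged)

# Reduce phase: cross-pair the two sources under each key.
for k, vals in groups.items():
    left  = [v for tag, v in vals if tag == "orders"]
    right = [v for tag, v in vals if tag == "users"]
    for o, u in product(left, right):
        print(k, o, u)
```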

Keywords: Hadoop, MapReduce, multi-way join, two-way join, Ubuntu

Procedia PDF Downloads 467
2781 Vibration Absorption Strategy for Multi-Frequency Excitation

Authors: Der Chyan Lin

Abstract:

Since its early introduction by Ormondroyd and Den Hartog, the vibration absorber (VA) has become one of the most commonly used vibration mitigation strategies. The strategy is most effective for a primary plant subjected to a single-frequency excitation. For continuous systems, notable advances in vibration absorption for multi-frequency systems have been made. However, the efficacy of the VA strategy for systems under multi-frequency excitation is not well understood. For example, for an N degrees-of-freedom (DOF) primary-absorber system, there are N 'peak' frequencies of large amplitude vibration for every new excitation frequency. In general, the usable range for vibration absorption can be greatly reduced as a result. Frequency-modulated harmonic excitation is a commonly seen example of multi-frequency excitation: f(t) = cos(ϖ(t)t), where ϖ(t) = ω(1+α sin(δt)). It is known that f(t) has a series expansion given by the Bessel function of the first kind, which implies an infinity of forcing frequencies in the frequency-modulated harmonic excitation. For an SDOF system of natural frequency ωₙ subjected to f(t), it can be shown that amplitude peaks emerge at ω₍ₚ,ₖ₎ = (ωₙ ± 2kδ)/(α ∓ 1), k∈Z; i.e., there is an infinity of resonant frequencies ω₍ₚ,ₖ₎, k∈Z, making the VA strategy ineffective. In this work, we propose an absorber frequency placement strategy for SDOF vibration systems subjected to frequency-modulated excitation. An SDOF linear mass-spring system coupled to lateral absorber systems is used to demonstrate the ideas. Although the mechanical components are linear, the governing equations for the coupled system are nonlinear. Using N identical absorbers, for N ≫ 1, we show that (a) there is a cluster of N+1 natural frequencies around every natural absorber frequency, and (b) the absorber frequencies can be moved away from the plant's resonance frequency (ω₀) as N increases. Moreover, we also show that the bandwidth of the VA performance increases with N. The derivations of the clustering and bandwidth-widening effects will be given, and the superiority of the proposed strategy will be demonstrated via numerical experiments.
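
The Bessel-series fact behind the infinity of forcing frequencies is the Jacobi-Anger expansion; for the standard FM form (a close relative of the f(t) above), every integer multiple of the modulation frequency δ contributes a sideband:

```latex
\[
  \cos\bigl(\omega t + \beta \sin(\delta t)\bigr)
  = \sum_{k=-\infty}^{\infty} J_{k}(\beta)\,\cos\bigl((\omega + k\delta)\,t\bigr)
\]
```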

Keywords: Bessel function, bandwidth, frequency modulated excitation, vibration absorber

Procedia PDF Downloads 139
2780 The Efficiency Analysis in the Health Sector: Marmara Region

Authors: Hale Kirer Silva Lecuna, Beyza Aydin

Abstract:

Health is one of the main components of human capital and sustainable development, and it is very important for economic growth. Health economics, an indisputable part of the science of economics, has five stages in general: health and development, financing of health services, economic regulation in health, allocation of resources, and efficiency of health services. A well-developed and efficient health sector plays a major role by increasing the level of development of countries. The most crucial pillars of the health sector are the hospitals, which are divided into public and private. The main purpose of the hospitals is to provide more efficient services; therefore, the aim is to meet patient satisfaction by increasing service quality. Health-related studies in Turkey date back to the Ottoman and Seljuk Empires. In the near past, Turkey applied 'Health Sector Transformation Programs' under different titles between 2003 and 2010. Our aim in this paper is to measure how effective these transformation programs are for the health sector, to see how much they increased the efficiency of hospitals over the years, to assess the return on investments, to make comments and suggestions on the results, and to provide a new reference for the literature. Within this framework, the public and private hospitals in Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, Istanbul, Kirklareli, Kocaeli, Sakarya, Tekirdağ and Yalova are examined using Data Envelopment Analysis (DEA) for the years between 2000 and 2019. DEA is a linear programming-based technique which gives relatively good results in multivariate studies; it basically estimates an efficiency frontier and makes comparisons against it. Constant returns to scale and variable returns to scale are the two most commonly used DEA models, and both come in input-oriented and output-oriented forms. To analyze the data, the number of personnel, number of specialist physicians, number of practitioners, number of beds, and number of examinations are used as input variables, and the number of surgeries, in-patient ratio, and crude mortality rate as output variables. Eleven hospitals belonging to the Marmara region were included in the study. These hospitals worked effectively in only 7 provinces (Balıkesir, Bilecik, Bursa, Edirne, İstanbul, Kırklareli, Yalova) in 2001, when no transformation program was implemented. After the transformation program was implemented, for example in 2014 and 2016, 10 hospitals (Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, İstanbul, Kocaeli, Kırklareli, Tekirdağ, Yalova) were found to be effective. In 2015, ineffective results were observed for Sakarya, Tekirdağ and Yalova. However, since these values are closer to 1 after the transformation program, we can say that the program has positive effects. For Sakarya alone, no effective results were achieved in any year. Overall, the results show that the transformation program has a positive effect on the efficiency of hospitals.
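
In standard notation, the input-oriented, constant-returns-to-scale DEA model solved for each hospital is the following linear program (the textbook CCR envelopment form; hospital o is evaluated against the n hospitals with inputs x_{ij} and outputs y_{rj}):

```latex
\[
  \min_{\theta,\,\lambda}\ \theta
  \quad \text{s.t.} \quad
  \sum_{j=1}^{n} \lambda_{j} x_{ij} \le \theta\, x_{io} \ \ \forall i, \qquad
  \sum_{j=1}^{n} \lambda_{j} y_{rj} \ge y_{ro} \ \ \forall r, \qquad
  \lambda_{j} \ge 0 .
\]
```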

Keywords: data envelopment analysis, efficiency, health sector, Marmara region

Procedia PDF Downloads 114
2779 The Relationship between Land Use Factors and Feeling of Happiness at the Neighbourhood Level

Authors: M. Moeinaddini, Z. Asadi-Shekari, Z. Sultan, M. Zaly Shah

Abstract:

Happiness can be related to everything that can provide a feeling of satisfaction or pleasure. This study considers the relationship between land use factors and the feeling of happiness at the neighbourhood level. Land use variables (beautiful and attractive neighbourhood design, availability and quality of shopping centres, sufficient recreational spaces and facilities, and sufficient daily service centres) are used as independent variables, and the happiness score is used as the dependent variable. In addition to the land use variables, socio-economic factors (gender, race, marital status, employment status, education, and income) are also considered as independent variables. The study uses the Oxford happiness questionnaire to estimate the happiness score of more than 300 people living in six neighbourhoods. The neighbourhoods are selected randomly from Skudai neighbourhoods in Johor, Malaysia. The land use data were obtained by adding related questions to the Oxford happiness questionnaire. The strength of the relationship is found using generalised linear modelling (GLM). The findings indicate that an increase in the feeling of happiness is correlated with increasing income, a more beautiful and attractive neighbourhood design, and sufficient shopping centres, recreational spaces, and daily service centres. The results show that all land use factors in this study have a significant relationship with happiness, but only income, among socio-economic factors, affects happiness significantly. Therefore, land use factors can affect happiness in Skudai more than socio-economic factors.

Keywords: neighbourhood land use, neighbourhood design, happiness, socio-economic factors, generalised linear modelling

Procedia PDF Downloads 138
2778 Thermoluminescence Characteristic of Nanocrystalline BaSO4 Doped with Europium

Authors: Kanika S. Raheja, A. Pandey, Shaila Bahl, Pratik Kumar, S. P. Lochab

Abstract:

This paper studies a BaSO4 nanophosphor doped with europium, in which mainly the concentration of the rare-earth impurity Eu (0.05, 0.1, 0.2, 0.5, and 1 mol%) has been varied. A comparative study of the thermoluminescence (TL) properties of the given nanophosphor has also been done using a well-known standard dosimetry material, i.e., TLD-100. Firstly, a number of samples were prepared successfully by the chemical co-precipitation method. The whole lot was then compared to the well-established standard material (TLD-100) for its TL sensitivity. BaSO4:Eu (0.2 mol%) showed the highest sensitivity of the lot. It was also found that, compared to the standard TLD-100, BaSO4:Eu (0.2 mol%) showed surprisingly high sensitivity over a large range of doses. The TL response curve for all prepared samples has also been studied over a wide range of doses, i.e., 10 Gy to 2 kGy, for gamma radiation. Almost all the BaSO4:Eu samples showed remarkable linearity over a broad range of doses, which is a characteristic feature of a fine TL dosimeter; the response remained linear even beyond 1 kGy. Thus, the given nanophosphor has been successfully optimised for the concentration of the dopant material to achieve its highest TL sensitivity. Further, the comparative study revealed that the optimised sample shows better TL sensitivity and a linear response curve over a remarkably wide range of gamma radiation (Co-60) doses compared to the standard TLD-100, which makes the optimised BaSO4:Eu quite promising as an efficient gamma radiation dosimeter. Lastly, the present phosphor has been optimised for its annealing temperature to acquire the best results, while its fading and reusability properties have also been studied.

Keywords: gamma radiation, nanoparticles, radiation dosimetry, thermoluminescence

Procedia PDF Downloads 416
2777 Development of a Sensitive Electrochemical Sensor Based on Carbon Dots and Graphitic Carbon Nitride for the Detection of 2-Chlorophenol and Arsenic

Authors: Theo H. G. Moundzounga

Abstract:

Arsenic and 2-chlorophenol are priority pollutants that pose serious health threats to humans and ecology. An electrochemical sensor, based on graphitic carbon nitride (g-C₃N₄) and carbon dots (CDs), was fabricated and used for the determination of arsenic and 2-chlorophenol. The g-C₃N₄/CDs nanocomposite was prepared via a microwave irradiation heating method and was drop-dried on the surface of a glassy carbon electrode (GCE). Transmission electron microscopy (TEM), X-ray diffraction (XRD), photoluminescence (PL), Fourier transform infrared spectroscopy (FTIR) and UV-Vis diffuse reflectance spectroscopy (UV-Vis DRS) were used for the characterization of the structure and morphology of the nanocomposite. Electrochemical characterization was done by electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV). The electrochemical behaviors of arsenic and 2-chlorophenol on different electrodes (GCE, CDs/GCE, and g-C₃N₄/CDs/GCE) were investigated by differential pulse voltammetry (DPV). The results demonstrated that the g-C₃N₄/CDs/GCE significantly enhanced the oxidation peak current of both analytes. The analyte detection sensitivity was greatly improved, suggesting that this new modified electrode has great potential in the determination of trace levels of arsenic and 2-chlorophenol. Experimental conditions which affect the electrochemical response of arsenic and 2-chlorophenol were studied; the oxidation peak currents displayed a good linear relationship to concentration for 2-chlorophenol (R²=0.948, n=5) and arsenic (R²=0.9524, n=5), with a linear range from 0.5 to 2.5 μM for both 2-CP and arsenic, and detection limits of 2.15 μM and 0.39 μM, respectively. The modified electrode was used to determine arsenic and 2-chlorophenol in spiked tap and effluent water samples by the standard addition method, and the results were satisfactory. According to the measurements, the new modified electrode is a good alternative as a chemical sensor for the determination of other phenols.

Keywords: electrochemistry, electrode, limit of detection, sensor

Procedia PDF Downloads 125
2776 Comparison of the Existing Damage Indices in Steel Moment-Resisting Frame Structures

Authors: Hamid Kazemi, Abbasali Sadeghi

Abstract:

Assessment of the seismic behavior of frame structures is done to evaluate life and financial damages or losses. New structural seismic behavior assessment methods have been proposed, so it is necessary to define a formulation as a damage index by which the amount of damage can be quantified and qualified. In this paper, four new steel moment-resisting frames with intermediate ductility and different heights (2, 5, 8, and 12-story), with regular geometry and a simple rectangular plan, were assumed and designed. Three existing groups of damage indices were studied: local indices (Drift, Maximum Roof Displacement, Banon Failure, Kinematic, Banon Normalized Cumulative Rotation, Cumulative Plastic Rotation and Ductility), global indices (Roufaiel and Meyer, Papadopoulos, Sozen, Rosenblueth, Ductility and Base Shear), and story indices (Banon Failure and Inter-story Rotation). The necessary parameters for these damage indices were calculated under the effect of far-fault ground motion records by non-linear dynamic time history analysis. Finally, the damage indices are prioritized based on which yield more conservative values, i.e., higher estimated damage rates. The results show that the selected damage index has an important effect on the estimation of the damage state. The failure, drift, and Rosenblueth damage indices are the most conservative local, story, and global indices, respectively.

Keywords: damage index, far-fault ground motion records, non-linear time history analysis, SeismoStruct software, steel moment-resisting frame

Procedia PDF Downloads 279
2775 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms, namely linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks, for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
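
A sketch of the modelling pipeline described, assuming the 189 features have already been computed from SMILES into a matrix X with solubility labels y (the placeholder data, hyperparameters, and the regression framing, as opposed to classification into solubility classes, are assumptions):

```python
# Sketch: random-forest solubility model on precomputed molecular features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X = np.random.rand(9981, 189)   # placeholder for MACCS + RDKit + structural features
y = np.random.rand(9981)        # placeholder solubility values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out set:", r2_score(y_te, model.predict(X_te)))
```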

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 21
2774 Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp

Authors: Lalit Ahuja, Nancy Das, Yashas Shetty

Abstract:

LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. This miniaturized technology gives the best light output and low energy consumption, and cost-efficient solutions for driving it are the need of the hour. In this paper, we present a methodology for driving the highest-class two-wheeler headlamp with regulator and rectifier (RR) output. Unlike usual LED headlamps, which are battery-driven, a low-cost and highly efficient LED Driver Module (LDM) fed directly by the RR is proposed. The positive half of the magneto output is regulated and used to charge the battery supplying various peripherals, while conventionally the negative half was used for operating bulb-based exterior lamps. With the advancement of LED-based headlamps, which are battery-driven, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all RPMs of the vehicle. With the negative rectified DC output of the RR, we have the advantage of a pulsating DC input which periodically goes to zero, helping us to generate a constant DC output matched to the required LED load; with a change in RPM, an additional active thermal bypass circuit helps us to maintain efficiency and limit the thermal rise. The methodology uses the negative half-wave output of the RR along with a linear constant-current driver of significantly higher efficiency. Although the RR output has varying frequency and duty cycles at different engine RPMs, the driver is designed such that it provides constant current to the LEDs with minimal ripple. In LED headlamps, a DC-DC switching regulator is usually used, which is bulky; with linear regulators, we eliminate bulky components and improve the form factor, so the solution is both cost-efficient and compact. Presently, output ripple-free amplitude drivers with fewer components and less complexity are limited to lower-power LED lamps, while the focus of current high-efficiency research is often on high-power LED applications. This paper presents a method of driving the LED load at both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with minimum components, maintaining high efficiency within the thermal limitations. Linear regulators are ordinarily quite inefficient, with efficiencies typically about 40% and reaching as low as 14%, which leads to poor thermal performance; although they do not require complex and bulky circuitry, powering high-power devices with them is difficult to realise. But with the input being negative half-wave rectified pulsating DC, this efficiency can be improved, as the input helps us to generate a constant DC output equivalent to the LED load, minimising the voltage drop across the linear regulator. Hence, losses are significantly reduced, and efficiency as high as 75% is achieved. With a change in RPM, the DC voltage increases, which can be managed by the active thermal bypass circuitry, resulting in better thermal performance; the use of bulky and expensive heat sinks can thus be avoided. The methodology therefore utilizes the unused negative pulsating DC output of the RR to optimize the utilization of RR output power and provides a cost-efficient solution compared to costly DC-DC drivers.

Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module

Procedia PDF Downloads 49
2773 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity and angles of attack have to be investigated and proved safe. Nonetheless, in this method, a worst flight condition can easily be missed, and missing it could lead to a critical situation. Indeed, it is impossible to analyze every case contained within a model's flight envelope, as the infinite number of cases would require more time and therefore more design cost. Therefore, in industry, the technique of the flight envelope mesh is commonly used: for each point of the flight envelope, simulation of the associated model checks whether the specifications are satisfied. In order to perform fast, comprehensive and effective analysis, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they describe the aircraft dynamics by taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speeds and altitudes as varying parameters, built from several flight conditions expressed in terms of speeds and altitudes. The use of such a method has gained great interest among aeronautical companies, which see a promising future for it in modeling, and particularly in the design and certification of control laws. In this research paper, we focus on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Research Aircraft Flight Simulator of Level D, which corresponds to the highest flight dynamics certification level; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory. These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a user-friendly Graphical User Interface developed during this study. The LFR models are then analyzed using an interval analysis method based on a Lyapunov function, as well as the 'stability and robustness analysis' toolbox. The results are presented in the form of graphs, offering good readability and easy exploitation. The weakness of this method lies in the relatively long calculation time, about four hours for the entire flight envelope.
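
For reference, an LFR packages the varying parameters (here, speed and altitude variations) into an uncertainty block Δ closed around a constant partitioned matrix M through the standard upper linear fractional transformation:

```latex
\[
  F_{u}(M, \Delta) = M_{22} + M_{21}\,\Delta\,
  \bigl(I - M_{11}\,\Delta\bigr)^{-1} M_{12}
\]
```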

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 336
2772 Airport Investment Risk Assessment under Uncertainty

Authors: Elena M. Capitanul, Carlos A. Nunes Cosenza, Walid El Moudani, Felix Mora Camino

Abstract:

The construction of a new airport or the extension of an existing one requires massive investments, and public-private partnerships have often been considered in order to make such projects feasible. One characteristic of these projects is uncertainty with respect to financial and environmental impacts over the medium to long term; another is their multistage nature. While many airport development projects have been a success, some others have turned into a nightmare for their promoters. This communication puts forward a new approach for airport investment risk assessment. The approach takes explicitly into account the degree of uncertainty in activity-level predictions and proposes milestones for the different stages of the project in order to minimize risk. Uncertainty is represented through fuzzy dual theory, and risk management is performed using dynamic programming. An illustration of the proposed approach is provided.
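
A generic form of the dynamic-programming recursion such a multistage risk minimisation would use (the paper's exact state, action and cost definitions are not given in the abstract, so this is only the textbook backward recursion):

```latex
\[
  V_{t}(s) = \min_{a \in A(s)} \Bigl\{ c_{t}(s, a)
  + V_{t+1}\bigl(f_{t}(s, a)\bigr) \Bigr\}, \qquad V_{T}(s) = c_{T}(s).
\]
```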

Keywords: airports, fuzzy logic, risk, uncertainty

Procedia PDF Downloads 391
2771 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects

Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town

Abstract:

The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to achieve better decision-making processes in the development of mining production and maintaining safety. This paper highlights the advantages of Power BI, a powerful intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques to improve decision-making. Leveraging some of the most complex techniques in data science, advanced analytics is used to do everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may have limitations. This paper studies the capability to use Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. This dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data metrics, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations like borehole logs and Stereonet were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rock fall analyses, rock mass characterization, and drone data. This further enhances the dashboard's usefulness in future projects, including operation, development, closure, and rehabilitation phases. Additionally, this helps in minimizing the necessity of utilizing multiple software programs in projects. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding in informed decision-making and efficient project management throughout various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
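
A sketch of what a Python visual inside a Power BI dashboard looks like, of the kind used here for borehole-log and stereonet plots: Power BI hands the visual its selected fields as a pandas DataFrame named `dataset`, and whatever matplotlib renders is displayed in the report. The column names below are assumptions:

```python
# Sketch: Power BI Python visual plotting a simple borehole log.
import matplotlib.pyplot as plt

# `dataset` is injected by Power BI's Python visual, e.g. with columns
# "depth_m" and "rqd_percent" dragged into the visual's field well.

fig, ax = plt.subplots(figsize=(4, 8))
ax.plot(dataset["rqd_percent"], dataset["depth_m"])
ax.invert_yaxis()                       # depth increases downward
ax.set_xlabel("RQD (%)")
ax.set_ylabel("Depth (m)")
ax.set_title("Borehole log (illustrative)")
plt.show()
```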

Keywords: geotechnical data analysis, power BI, visualization, decision-making, mining industry

Procedia PDF Downloads 73
2770 Hydromagnetic Linear Instability Analysis of Giesekus Fluids in Taylor-Couette Flow

Authors: K. Godazandeh, K. Sadeghy

Abstract:

In the present study, the effect of a magnetic field on the hydrodynamic instability of Taylor-Couette flow between two concentric rotating cylinders has been numerically investigated. First, the base flow was solved using the continuity and Cauchy equations (including the Lorentz force) together with the constitutive equations of a viscoelastic model called the Giesekus model. Small perturbations, taken in the form of normal modes, were superimposed on the base flow, and the unsteady perturbation equations were derived accordingly. Neglecting non-linear terms, the resulting generalized eigenvalue problem was solved using a pseudo-spectral method based on Chebyshev polynomials. The objective of the calculations is to study the effect of magnetic fields on the onset of the first mode of instability (the axisymmetric mode) for different dimensionless parameters of the flow. The results show that the stability picture is highly influenced by the magnetic field. As the magnetic field increases, it first has a destabilizing effect, which changes to a stabilizing effect as the field increases further; there is therefore a critical magnetic number (Hartmann number) for the instability of Taylor-Couette flow. The effect of the magnetic field is more dominant in large gaps. Also, based on the results obtained, the magnetic field shows a more considerable effect on the stability at higher Weissenberg numbers (higher elasticity), while changes in the mobility factor play no dominant role in the intensity of the suction and injection effect on the flow's instability.
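
For reference, the single-mode Giesekus constitutive equation in its standard form (notation assumed, not quoted from the paper): τ is the polymeric stress, λ the relaxation time, η_p the polymeric viscosity, α the mobility factor, and the nabla denotes the upper-convected derivative:

```latex
\[
  \boldsymbol{\tau} + \lambda\, \overset{\nabla}{\boldsymbol{\tau}}
  + \frac{\alpha \lambda}{\eta_{p}}\, (\boldsymbol{\tau} \cdot \boldsymbol{\tau})
  = \eta_{p}\, \dot{\boldsymbol{\gamma}}
\]
```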

Keywords: magnetic field, Taylor-Couette flow, Giesekus model, pseudo spectral method, Chebyshev polynomials, Hartmann number, Weissenberg number, mobility factor

Procedia PDF Downloads 373
2769 Rule-Of-Mixtures: Predicting the Bending Modulus of Unidirectional Fiber Reinforced Dental Composites

Authors: Niloofar Bahramian, Mohammad Atai, Mohammad Reza Naimi-Jamal

Abstract:

The rule of mixtures is a simple analytical model used to predict various properties of composites before design. The aim of this study was to demonstrate the benefits and limitations of the rule-of-mixtures (ROM) for predicting the bending modulus of continuous, unidirectional fiber reinforced composites used in dental applications. The composites were fabricated from a light-curing resin (with and without silica nanoparticles) and modified and non-modified fibers. Composite samples were divided into eight groups with ten specimens for each group. The bending modulus (flexural modulus) of the samples was determined from the slope of the initial linear region of the stress-strain curve on 2 mm × 2 mm × 25 mm specimens with different designs: fiber corona treatment time (0 s, 5 s, 7 s), fiber silane treatment (0 wt%, 2 wt%), fiber volume fraction (41%, 33%, 25%) and nanoparticle incorporation in the resin (0 wt%, 10 wt%, 15 wt%). To study the fiber-matrix interface after fracture, the single edge notch beam (SENB) method and scanning electron microscopy (SEM) were used. SEM was also used to show the nanoparticle dispersion in the resin. Experimental results of the bending modulus for composites made of both physically (corona) and chemically (silane) treated fibers were in reasonable agreement with linear ROM estimates, but untreated or non-optimally treated fibers and poor nanoparticle dispersion did not correlate as well with the ROM results. This study shows that the ROM is useful for predicting the mechanical behavior of unidirectional dental composites, but the fiber-resin interface and the quality of nanoparticle dispersion play an important role in the accuracy of ROM predictions.
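
The longitudinal rule-of-mixtures estimate underlying the comparison is the standard linear form, with E_c, E_f and E_m the composite, fiber and matrix moduli and V_f the fiber volume fraction:

```latex
\[
  E_{c} = V_{f}\, E_{f} + (1 - V_{f})\, E_{m}
\]
```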

Keywords: bending modulus, fiber reinforced composite, fiber treatment, rule-of-mixtures

Procedia PDF Downloads 260
2768 Analyzing the Practicality of Drawing Inferences in Automation of Commonsense Reasoning

Authors: Chandan Hegde, K. Ashwini

Abstract:

Commonsense reasoning is the simulation of the human ability to make decisions in the situations we encounter every day. Several decades have passed since the introduction of this subfield of artificial intelligence, yet it has made only limited progress. Modern computing aids have also remained of little help in this regard, due to the absence of a strong methodology for developing commonsense reasoning. Among the several reasons accountable for the lack of progress, drawing inferences from a commonsense knowledge base stands out. This review paper presents a detailed analysis of the representation of reasoning uncertainties and the feasible prospects of programming aids for drawing inferences. The difficulties in deducing and systematizing commonsense reasoning, and the substantial progress made in reasoning that influences the study, are also discussed. Additionally, the paper discusses the possible impacts of an effective inference technique on commonsense reasoning.

Keywords: artificial intelligence, commonsense reasoning, knowledge base, uncertainty in reasoning

Procedia PDF Downloads 170
2767 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to model cereal yield against each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have the problem of multicollinearity. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
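
A sketch of the lasso step described: an L1-penalised linear fit of cereal yield on the climatic and input variables, which shrinks weak predictors' coefficients to zero. The data values and penalty strength alpha below are placeholders, not the study's:

```python
# Sketch: lasso regression of cereal yield on six standardised predictors.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

X = np.random.rand(62, 6)   # 1960-2021 rows: temperature, rainfall, fertiliser,
                            # arable land, land under cereals, N2O (placeholder)
y = np.random.rand(62)      # cereal yield (placeholder)

model = Lasso(alpha=0.1)
model.fit(StandardScaler().fit_transform(X), y)
print("coefficients:", model.coef_)   # zeroed entries are dropped predictors
```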

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 74
2766 Improved Performance of AlGaN/GaN HEMTs Using N₂/NH₃ Pretreatment before Passivation

Authors: Yifan Gao

Abstract:

Owing to their high breakdown field, high saturation drift velocity, and a 2DEG with high density and mobility, AlGaN/GaN HEMTs have been widely used in high-frequency and high-power applications. Achieving higher power often requires a higher breakdown voltage and a higher drain current, and surface leakage current is usually the key issue limiting breakdown voltage and power performance. In this work, we performed an in-situ N₂/NH₃ pretreatment before passivation to suppress surface leakage and enhance device performance. The AlGaN/GaN HEMT used in this work was grown on a 3-in. SiC substrate, with an epitaxial structure consisting of a 3.5-nm GaN cap layer, a 25-nm Al₀.₂₅GaN barrier layer, a 1-nm AlN layer, a 400-nm i-GaN layer, and a buffer layer. To analyze the mechanism of the N-based pretreatment, the surface was characterized by XPS. The intensity of Ga-O bonds decreases while that of Ga-N bonds increases, indicating that the supplied nitrogen reduces surface dangling bonds by forming Ga-N bonds, thereby reducing surface states. Surface states strongly influence the leakage current, and improved surface states yield a better off-state for the device. After the N-based pretreatment, the breakdown voltage of the device with Lₛ𝒹 = 6 μm increased from 93 V to 170 V, an increase of 82.8%. Moreover, for HEMTs with Lₛ𝒹 = 6 μm, we obtained a peak output power (Pout) of 12.79 W/mm, a power-added efficiency (PAE) of 49.84%, and a linear gain of 20.2 dB at 60 V under 3.6 GHz. Compared with the reference 6-μm device, Pout is increased by 16.5%, while PAE and linear gain also increase slightly. These experimental results indicate that an N₂/NH₃ pretreatment before passivation is an attractive approach to enhancing power performance.

Keywords: AlGaN/GaN HEMT, N-based pretreatment, output power, passivation

Procedia PDF Downloads 300
2765 Engineering Economic Analysis of Implementing a Materials Recovery Facility in Jamaica: A Green Industry Approach towards a Sustainable Developing Economy

Authors: Damian Graham, Ashleigh H. Hall, Damani R. Sulph, Michael A. James, Shawn B. Vassell

Abstract:

This paper assesses the design and feasibility of a Materials Recovery Facility (MRF) in Jamaica as a possible green-industry approach to the nation's economic and solid waste management problems. Jamaica is a developing nation vulnerable to climate change, which can affect the blue economy and tourism on which it is heavily reliant. Jamaica's National Solid Waste Management Authority (NSWMA) collects only a fraction of all the solid waste produced annually, which is then transported to dumpsites; the remainder is either burnt by the population or disposed of illegally. These practices negatively impact the environment and threaten the sustainability of economic growth from the blue economy and tourism, while the waste management system remains predominantly a cost centre. Implementing an MRF could boost the manufacturing sector, contribute to economic growth, and catalyse a green industry with multiple downstream value chains and supply chain linkages. Globally, the trend towards reuse and recycling has created an international market for recycled solid waste, and MRFs enable the efficient sorting of solid waste into desired recoverable materials, providing a gateway to the international trade in recycled waste. Research into the current state of, and efforts to improve, waste management in Jamaica is outlined and contrasted with that of similar and more advanced territories. The study explores the concept of green industrialization and its applicability to vulnerable small-state economies like Jamaica, and highlights the possible contributions and benefits of an MRF as a seed factory that can anchor the reverse and forward logistics of other green industries as part of a logistics-centred economy. Further, the study presents an engineering economic analysis that assesses the viability of implementing an MRF in Jamaica: it outlines the potential cost of constructing and operating an MRF and provides a realistic cash flow estimate to establish a baseline for profitability. The approach considers quantitative and qualitative data, assumptions, and modelling using industrial engineering tools and techniques, including facility planning, systems analysis, and operations research with a focus on linear programming. Approaches to overcoming implementation challenges, including policy, technology, and public education, are detailed. The results present a reasoned judgment of the prospects of incorporating an MRF to improve Jamaica's solid waste management, contribute socioeconomic and environmental benefits, and offer an alternative pathway to economic sustainability.
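
To illustrate the linear programming component, a minimal sketch using scipy: choose how many tonnes of each recovered material to process so as to maximise weekly revenue subject to sorting-line capacity. All prices, processing times, capacities, and material categories are illustrative assumptions, not the study's figures.

from scipy.optimize import linprog

revenue = [200.0, 260.0, 1200.0]   # USD per tonne: PET, HDPE, aluminium (assumed)
hours_per_tonne = [1.5, 1.2, 2.0]  # sorting-line hours per tonne (assumed)
hours_available = 400.0            # line hours available per week (assumed)
supply = [150.0, 120.0, 30.0]      # weekly incoming tonnes per material (assumed)

# linprog minimises, so negate the revenue coefficients to maximise.
result = linprog(c=[-r for r in revenue],
                 A_ub=[hours_per_tonne], b_ub=[hours_available],
                 bounds=[(0.0, cap) for cap in supply])
print("tonnes to process:", result.x.round(1))
print("weekly revenue (USD):", round(-result.fun, 2))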

Keywords: engineering-economic analysis, facility design, green industry, MRF, manufacturing, plant layout, solid-waste management, sustainability, waste disposal

Procedia PDF Downloads 208
2764 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load forecasting plays a key role in making today's and tomorrow's smart energy grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance and to execute Demand Response strategies more effectively, enabling higher sustainability, better quality of service, and affordable electricity tariffs. Load forecasting is comparatively easy yet effective at larger geographic scales; in Smart Micro Grids, by contrast, the lower available grid flexibility makes accurate prediction more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from individual loads and households belonging to a Smart Micro Grid. Three short-term load forecasting techniques, namely linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated in terms of absolute forecast error and training time. The influence of weather conditions on load forecasting is also evaluated. A new definition of Gain is introduced, which serves as an indicator of the consistency of short-term prediction capability across time spans. Two models, for 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare the three techniques.
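
As a rough illustration of such a comparison, the sketch below trains the three model families on synthetic hourly load data using scikit-learn; the lagged-load features and the data itself are assumptions, and kernel ridge regression with an RBF kernel stands in for the paper's radial basis function network.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # 60 days of synthetic hourly load
load = 5 + 2 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size)

h = 1  # forecast horizon in hours; set h = 24 for day-ahead forecasting
X = np.array([load[i - 24:i] for i in range(24, load.size - h)])  # previous 24 loads
y = load[24 + h:]                                                 # load h hours ahead
split = int(0.8 * len(X))

models = {"linear regression": LinearRegression(),
          "artificial neural network": MLPRegressor(hidden_layer_sizes=(32,),
                                                    max_iter=2000, random_state=0),
          "RBF (kernel ridge stand-in)": KernelRidge(kernel="rbf", alpha=1.0)}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name}: MAE = {mae:.3f}")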

Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain

Procedia PDF Downloads 448