Search results for: performance predicting formula
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13784

8894 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
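
For readers unfamiliar with the adjoint-state idea, the generic structure for a discretized potential-equation system can be written as follows (a standard adjoint identity for a generic parameter \(p\); this is background notation, not the authors' specific bed-boundary formula):

\[
A(p)\,u = b, \qquad m = g^{\top} u, \qquad A(p)^{\top}\lambda = g,
\]
\[
\frac{dm}{dp} = \lambda^{\top}\!\left(\frac{\partial b}{\partial p} - \frac{\partial A}{\partial p}\,u\right),
\]

so a single additional (adjoint) solve yields the derivative of the measurement \(m\) with respect to every parameter, whereas a finite-difference approach requires one extra forward solve per parameter, which is the cost advantage the abstract refers to.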

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 165
8893 Image Segmentation Using Active Contours Based on Anisotropic Diffusion

Authors: Shafiullah Soomro

Abstract:

Active contour is one of the image segmentation techniques, and its goal is to capture the required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of catching uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitialization of the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found that the proposed method performs better qualitatively and quantitatively and yields results with higher accuracy compared to other state-of-the-art methods.
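
The Perona-Malik step used for feature enhancement can be illustrated with a minimal sketch (Python/NumPy, assuming the exponential conductance function and an explicit time-stepping scheme with wrapped image borders; parameter values are illustrative, not those of the paper):

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
        # Explicit Perona-Malik anisotropic diffusion with exponential conductance.
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # finite differences towards the four neighbours (np.roll wraps the borders)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # conduction coefficients: small across strong edges, close to 1 in flat regions
            cn = np.exp(-(dn / kappa) ** 2)
            cs = np.exp(-(ds / kappa) ** 2)
            ce = np.exp(-(de / kappa) ** 2)
            cw = np.exp(-(dw / kappa) ** 2)
            u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
        return u

The diffused image sharpens object boundaries while smoothing background variations before the contour evolution is applied.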

Keywords: active contours, anisotropic diffusion, level-set, partial differential equations

Procedia PDF Downloads 147
8892 CFD Modelling and Thermal Performance Analysis of Ventilated Double Skin Roof Structure

Authors: A. O. Idris, J. Virgone, A. I. Ibrahim, D. David, E. Vergnault

Abstract:

In hot countries, the major challenge is air conditioning. The increase in energy consumption by air conditioning stems from the need to live in more comfortable buildings, which is understandable. But in Djibouti, one of the countries with the most expensive electricity in the world, this need is exacerbated by an architecture that is unsuited to the climatic conditions. This paper discusses the design of the roof, which is the surface receiving the most solar radiation and which largely determines the thermal behavior of the building. The study presents Computational Fluid Dynamics (CFD) modeling and analysis of the energy performance of a double skin ventilated roof. The particularity of this study is that it considers the climate of Djibouti, characterized by hot and humid conditions in winter and very hot and humid conditions in summer. Roof simulations are carried out using the Ansys Fluent software to characterize the flow and the heat transfer induced in the ventilated roof in steady state. The modeling compares the influence of several parameters, such as the internal emissivity of the upper surface, the thickness of the roof insulation and the thickness of the ventilated channel, on heat gain through the roof. The energy saving potential compared to current construction practice in Djibouti is also presented.

Keywords: building, double skin roof, CFD, thermo-fluid analysis, energy saving, forced convection, natural convection

Procedia PDF Downloads 248
8891 Second Language Perception of Japanese /Cju/ and /Cjo/ Sequences by Mandarin-Speaking Learners of Japanese

Authors: Yili Liu, Honghao Ren, Mariko Kondo

Abstract:

In the field of second language (L2) speech learning, it is well known that a learner’s first language (L1) phonetic and phonological characteristics are transferred into their L2 production and perception, which leads to a foreign accent. For L1 Mandarin learners of Japanese, confusion of /u/ and /o/ in /CjV/ sequences has frequently been observed in their utterances. L1 transfer is considered to be the cause of this issue; however, other factors that influence the identification of /Cju/ and /Cjo/ sequences are still under investigation. This study investigates the perception of Japanese /Cju/ and /Cjo/ units by L1 Mandarin learners of Japanese. It further examines whether learners’ proficiency, syllable position, phonetic features of the preceding consonants and background noise affect learners’ performance in perception. Fifty-two Mandarin-speaking learners of Japanese and nine native Japanese speakers were recruited to participate in an identification task. Learners were divided into beginner, intermediate and advanced levels according to their Japanese proficiency. The average correct rate was used to evaluate learners’ perceptual performance, and the correct rates of the learner groups were compared with those of the control group to examine learners’ nativelikeness. Results showed that background noise tends to have an adverse effect on distinguishing /u/ and /o/ in /CjV/ sequences. Secondly, Japanese proficiency had no influence on learners’ perceptual performance either in the quiet condition or in background noise. In the quiet condition, no learner group reached a native-like level, whereas in multi-talker babble noise beginner-level learners performed less native-like while higher-level learners appeared to achieve nativelikeness. Finally, syllable position tends to affect the distinction of /Cju/ and /Cjo/ only under the noisy condition, and the phonetic features of the preceding consonants did not impact learners’ perception in any listening condition. The findings of this study give an insight into Japanese vowel acquisition by L1 Mandarin learners of Japanese. In addition, this study indicates that L1 transfer is not the only explanation for the confusion of /u/ and /o/ in /CjV/ sequences; factors such as listening condition and syllable position also need to be taken into consideration in future research. It also suggests that perceiving speech in a noisy environment, which is closer to actual conversation, requires more attention in pedagogy.

Keywords: background noise, Chinese learners of Japanese, /Cju/ and /Cjo/ sequences, second language perception

Procedia PDF Downloads 150
8890 Electro-Oxidation of Glycerol Using Nickel Deposited Carbon Ceramic Electrode and Product Analysis Using High Performance Liquid Chromatography

Authors: Mulatu Kassie Birhanu

Abstract:

Electro-oxidation of glycerol is an important process for converting low-cost glycerol into more valuable and energy-rich chemicals. In this study, nickel was electro-deposited on a laboratory-made carbon ceramic electrode (CCE) substrate using an electrochemical technique, cyclic voltammetry (CV), to prepare an electro-catalyst (Ni/CCE) for the electro-oxidation of glycerol. The carbon ceramic electrode was prepared from graphite and methyltrimethoxysilane (MTMOS) through hydrolysis and condensation with methanol in acidic media (HCl) by a sol-gel technique. Physico-chemical characterization of the bare CCE and the modified (deposited) CCE (Ni/CCE) was carried out by Fourier Transform Infrared spectroscopy (FTIR), Scanning Electron Microscopy (SEM) and X-ray diffraction (XRD). Electro-oxidation of glycerol was performed in 0.1 M glycerol in alkaline media (0.5 M NaOH). The High-Performance Liquid Chromatography (HPLC) technique was used to identify and determine the concentrations of glycerol, reaction intermediates and oxidized products of glycerol after its electro-oxidation. The conversion (%) of glycerol during 9-hour electro-oxidation was 73% and 36% at 1.8 V and 1.6 V vs. RHE, respectively. Formate, oxalate, glycolate and glycerate are the main oxidation products of glycerol, with selectivities (%) of 75%, 8.6%, 1.1% and 0.95% at 1.8 V vs. RHE and 55.4%, 2.2%, 1.0% and 0.6% at 1.6 V vs. RHE, respectively. The results indicate that formate is the main product of the electro-oxidation of glycerol on Ni/CCE at the indicated applied potentials.

Keywords: carbon-ceramic electrode, electrodeposition, electro-oxidation, methyltrimethoxysilane

Procedia PDF Downloads 216
8889 Reclaiming Corporate Social Responsibility: A Research Agenda for Socio-Industrial Interdependence

Authors: Leah Ritchie

Abstract:

By many accounts, the most recent economic recession and subsequent lackluster recovery have demonstrated that corporate social responsibility is in a state of crisis. This crisis represents an opportunity for CSR scholars to play a role in restoring long-term economic growth and consumer confidence. In its current state, however, CSR may not be in a position to facilitate positive change. In an attempt to remain relevant, the field has shifted toward a performance-based agenda that demonstrates, in practical terms, how CSR can positively affect the financial and strategic performance of the firm. This paper argues that if CSR is to play a central role in helping to create a more equitable balance of power between industry and society, it must demonstrate the symbiotic nature of the relationship between these two entities, not just in terms of compartmentalized strategic and financial gain for the firm, but also toward maintaining a 'do no harm' imperative. Given the evidence that harm done to society is ultimately turned back on the firm, this is not simply a moralistic imperative. In order to effect change, CSR must also create an activist agenda to raise consciousness among the general citizenry toward mobilizing, uncovering, and repairing breaches in the implicit social contract between business and society.

Keywords: corporate social responsibility, multiple stakeholder view, economic recession, housing crisis

Procedia PDF Downloads 202
8888 Modeling and Analysis of Occupant Behavior on Heating and Air Conditioning Systems in a Higher Education and Vocational Training Building in a Mediterranean Climate

Authors: Abderrahmane Soufi

Abstract:

The building sector is the largest consumer of energy in France, accounting for 44% of French consumption. To reduce energy consumption and improve energy efficiency, France implemented an energy transition law targeting 40% energy savings by 2030 in the tertiary building sector. Building simulation tools are used to predict the energy performance of buildings, but the reliability of these tools is hampered by discrepancies between the real and simulated energy performance of a building. This performance gap lies in the simplified assumptions made about certain factors, such as occupant behavior with respect to air conditioning and heating, which is considered deterministic when a fixed operating schedule and a fixed indoor comfort temperature are set. However, occupant behavior with respect to air conditioning and heating is stochastic, diverse and complex, because it can be affected by many factors. Probabilistic models are an alternative to deterministic models. These models are usually derived from statistical data and express occupant behavior by assuming a probabilistic relationship to one or more variables. In the literature, logistic regression has been used to model occupant behavior with regard to heating and air conditioning systems using univariate logistic models in residential buildings; however, few studies have developed multivariate models for higher education and vocational training buildings in a Mediterranean climate. Therefore, in this study, occupant behavior with respect to heating and air conditioning systems was modeled using logistic regression. Behavior related to turning on the heating and air conditioning systems was studied through experimental measurements collected over a period of one year (June 2023–June 2024) in three classrooms occupied by several groups of engineering and professional training students. Instrumentation was installed to collect indoor temperature and indoor relative humidity at 10-min intervals. Furthermore, the state of the heating/air conditioning system (off or on) and the set point were recorded. The outdoor air temperature, relative humidity and wind speed were collected as weather data. The number of occupants, their age and sex were also considered. Logistic regression was used to model the probability of an occupant turning on the heating and air conditioning systems. The results yield a proposed model that can be used in building simulation tools to predict the energy performance of teaching buildings. Based on the first months (summer and early autumn) of the investigation, the results illustrate that occupant behavior on the air conditioning systems is affected by the indoor relative humidity and temperature in June, July and August, and by the indoor relative humidity, temperature and number of occupants in September and October. Occupant behavior was analyzed monthly, and univariate and multivariate models were developed.
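
A multivariate logistic model of this kind can be sketched as follows (Python with scikit-learn; the features and measurement values are hypothetical placeholders, not the paper's dataset):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical observations: indoor temperature (degC), indoor relative humidity (%),
    # number of occupants; y = 1 if the air-conditioning system was switched on.
    X = np.array([[29.5, 62, 18],
                  [27.0, 55, 12],
                  [24.5, 48, 20],
                  [31.0, 70, 25],
                  [23.0, 45,  8],
                  [30.0, 66, 22]])
    y = np.array([1, 1, 0, 1, 0, 1])

    model = LogisticRegression().fit(X, y)
    # P(turn on) = 1 / (1 + exp(-(b0 + b1*T + b2*RH + b3*N)))
    print(model.intercept_, model.coef_)
    print(model.predict_proba([[30.0, 65, 15]])[:, 1])  # probability of switching the system on

The fitted coefficients play the role of the monthly univariate or multivariate models described above and can be plugged into a building simulation tool as a stochastic occupant-behavior rule.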

Keywords: occupant behavior, logistic regression, behavior model, mediterranean climate, air conditioning, heating

Procedia PDF Downloads 49
8887 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage with small severity.
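
The offline/online split of a reduced basis (RB) approach can be sketched as follows (Python/NumPy; the snapshot matrix, stiffness matrix and basis size are placeholders, not the truss model used in the paper):

    import numpy as np

    # Offline stage: full FE solutions (snapshots) collected column-wise and compressed by SVD/POD.
    n_dof, n_snap = 1000, 40                      # hypothetical problem size
    snapshots = np.random.rand(n_dof, n_snap)
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :10]                             # keep the 10 dominant modes

    # Online stage: project the full operator and load onto the reduced space and solve cheaply.
    K = np.eye(n_dof)                             # placeholder full stiffness matrix
    f = np.ones(n_dof)                            # placeholder load vector
    K_r = basis.T @ K @ basis                     # 10 x 10 reduced stiffness
    f_r = basis.T @ f
    u_r = np.linalg.solve(K_r, f_r)               # fast online solve
    u_full = basis @ u_r                          # lift the reduced solution back to full space

The cheap reduced solve is what makes repeated online evaluations fast enough for real-time comparison against sensor data.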

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 84
8886 Design and Implementation of LabVIEW Based Relay Autotuning Controller for Level Setup

Authors: Manoj M. Sarode, Sharad P. Jadhav, Mukesh D. Patil, Pushparaj S. Suryawanshi

Abstract:

Even though the PID controller is widely used in industrial processes, tuning of the PID parameters is not easy. It is time consuming and requires expert personnel. Another drawback of the PID controller is that the process dynamics might change over time. This can happen due to variation of the process load, normal wear and tear, etc. To compensate for process behavior changing over time, expert users are required to recalibrate the PID gains. Implementation of model-based controllers usually needs a process model. Identification of a process model is a time-consuming job with no guarantee of model accuracy. If the identified model is not accurate, the performance of the controller may degrade. Model-based controllers are quite expensive, and the whole implementation procedure is sometimes tedious. To eliminate such issues, an autotuning PID controller becomes a vital element. A software-based relay feedback autotuning controller proves to be an efficient, upgradable and maintenance-free solution. With a relay feedback autotuning controller, the PID parameters can be obtained within a very short span of time. This paper presents the real-time implementation of a LabVIEW-based relay feedback autotuning PID controller. It was successfully developed and implemented to control the level of a laboratory setup. Its performance was analyzed for different setpoints and found satisfactory.
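
The core of a relay feedback autotuner can be sketched with the classical Åström-Hägglund relation followed by Ziegler-Nichols rules (Python; the relay amplitude, limit-cycle amplitude and period below are hypothetical measurements, and the Ziegler-Nichols mapping is one common choice rather than the exact rule used in the paper):

    import math

    def relay_autotune(d, a, Tu):
        # d: relay amplitude, a: amplitude of the induced limit cycle, Tu: oscillation period.
        Ku = 4.0 * d / (math.pi * a)          # ultimate gain from the describing-function analysis
        Kp = 0.6 * Ku                         # classical Ziegler-Nichols PID settings
        Ti = 0.5 * Tu
        Td = 0.125 * Tu
        return Kp, Ti, Td

    # Hypothetical limit-cycle measurement from the level loop
    print(relay_autotune(d=1.0, a=0.08, Tu=45.0))

Because the relay test excites the loop around its critical frequency, the controller can be re-tuned whenever the process dynamics drift, without requiring an explicit process model.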

Keywords: autotuning, PID, liquid level control, recalibrate, labview, controller

Procedia PDF Downloads 380
8885 Effect of Synthetic L-Lysine and DL-Methionine Amino Acids on Performance of Broiler Chickens

Authors: S. M. Ali, S. I. Mohamed

Abstract:

Reduction of feed cost is of utmost importance in decreasing the cost of broiler production. The objectives of this study were to evaluate the use of synthetic amino acids (L-lysine and DL-methionine) instead of super concentrate, and groundnut cake versus meat powder as protein sources. A total of 180 male broiler chicks (Cobb strain) at 15 days of age (DOA) were selected according to their average body weight (380 g) from a broiler flock at Elbashair Farm. The chicks were randomly divided into six groups of 30 chicks. Each group was further subdivided into three replicates of 10 birds. Six experimental diets were formulated. The first diet contained groundnut cake and super concentrate as the control (GNC + C); in the second diet, meat powder and super concentrate (MP + C) were used. The third diet contained groundnut cake and amino acids (GNC + AA); the fourth diet contained meat powder and amino acids (MP + AA). The fifth diet contained groundnut cake, meat powder and super concentrate (GNC + MP + C), and the sixth diet contained groundnut cake, meat powder and amino acids (GNC + MP + AA). The formulated rations were randomly assigned to the subgroups in a completely randomized design with six treatments and three replicates. Weekly feed intake, body weight and mortality were recorded, and body weight gain and feed conversion ratio (FCR) were calculated. At the end of the experiment (49 DOA), nine birds from each treatment were slaughtered. Live body weight, carcass weight, head, shank and some internal organ (gizzard, heart, liver, small intestine and abdominal fat pad) weights were taken. For the overall experimental period, the (GNC + MP + C) group consumed significantly (P≤0.01) the most cumulative feed, while the (MP + AA) group consumed the least. The (GNC + C) and (GNC + AA) groups had the heaviest live body weights, while (MP + AA) had the lowest live body weight. The overall FCR was significantly (P≤0.01) the best for the (GNC + AA) group, while (MP + AA) recorded the worst FCR. The (GNC + AA) group also had significantly (P≤0.01) the lowest abdominal fat pad weight. The (GNC + MP + C) group had the highest dressing %, while the (MP + AA) group had the lowest dressing %. It is concluded that amino acids can be used instead of super concentrate in broiler feeding, with good performance and lower cost, and that meat powder is not advisable for use with amino acids.

Keywords: broiler chickens, DL-lysine, methionine, performance

Procedia PDF Downloads 255
8884 Numerical Investigation of Fiber-Reinforced Polymer (FRP) Panels Resistance to Blast Loads

Authors: Sameh Ahmed, Khaled Galal

Abstract:

Fiber-reinforced polymer (FRP) sandwich panels are increasingly making their way into structural engineering applications. One of these applications is blast mitigation, owing to the ability of FRP to absorb a considerable amount of energy relative to its low density. In this study, FRP sandwich panels are studied numerically using the explicit finite element code ANSYS AUTODYN. The numerical model is validated against experimental field tests from the literature. The inner core configurations studied in those experimental field tests were formed from different orientations of the honeycomb shape. The present numerical study proposes a new core configuration, formed from a combination of woven and honeycomb shapes. Throughout this study, two performance parameters are considered: the amount of energy absorbed by the panels and the peak deformation of the panels. Subsequently, a parametric study was conducted with further variations of the studied parameters to examine the enhancement of the panels' performance. The numerical results show good agreement with the experimental measurements. Furthermore, the analyses reveal that using the proposed core configuration clearly enhances the behavior of the FRP panels when subjected to blast loads.

Keywords: blast load, fiber reinforced polymers, finite element modeling, sandwich panels

Procedia PDF Downloads 298
8883 Flashover Voltage of Silicone Insulating Surface Covered by Water Drops under AC Voltage

Authors: Fatiha Aouabed, Abdelhafid Bayadi, Rabah Boudissa

Abstract:

Nowadays, silicone rubber insulation materials are widely used in high voltage outdoor insulation systems as they can combat pollution flashover problems. The difference in pollution flashover performance of silicone rubber and other insulating materials is due to the way that water wets their surfaces. It resides as discrete drops on silicone rubber, and the mechanism of flashover is due to the breakdown of the air between the water drops and the distortion of these drops in the direction of the electric field which brings the insulation to degradation and failure. The main objective of this work is to quantify the effect of different types of water drops arrangements, their position and dry bands width on the flashover voltage of the silicone insulating surface with non-uniform electric field systems. The tests were carried out on a rectangular sample under AC voltage. A rod-rod electrode system is used. The findings of this work indicate that the performance of the samples decreases with the presence of water drops on their surfaces. Further, these experimental findings show that there is a limiting number of rows from which the flashover voltage of the insulation is minimal and constant. This minimum is a function of the distance between two successive rows. Finally, it is concluded that the system withstand voltage increases when the row of droplets on the electrode axis is removed.

Keywords: contamination, flashover, testing, silicone rubber insulators, surface wettability, water droplets

Procedia PDF Downloads 429
8882 An Enhanced Hybrid Backoff Technique for Minimizing the Occurrence of Collision in Mobile Ad Hoc Networks

Authors: N. Sabiyath Fatima, R. K. Shanmugasundaram

Abstract:

In Mobile Ad-hoc Networks (MANETs), every node acts as both transmitter and receiver. The existing backoff models do not accurately predict the performance of the wireless network and suffer from elevated packet collisions. Every time a collision happens, the station's contention window (CW) is doubled until it reaches the maximum value. The main objective of this paper is to reduce collisions by means of a contention window Multiplicative Increase Decrease Backoff (CWMIDB) scheme. The intention of increasing the CW is to reduce the collision probability by spreading the traffic over a larger time interval. Within wireless ad hoc networks, the CWMIDB algorithm dynamically controls the contention window of the nodes experiencing collisions. During packet communication, the backoff counter is uniformly selected from the range [0, CW-1]. Here, CW denotes the contention window, and its value depends on the number of unsuccessful transmissions of the packet. On the initial transmission attempt, CW is set to its minimum value (CWmin); if the transmission attempt fails, the value is doubled, and it is reset to the minimum value on successful transmission. CWMIDB is simulated in the NS2 environment and its performance is compared with the Binary Exponential Backoff algorithm. The simulation results show an improvement in transmission probability compared to that of the existing backoff algorithm.
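
The contention-window rules being compared can be sketched as follows (Python; CWmin/CWmax and the increase/decrease factors of the second rule are illustrative assumptions, not the exact CWMIDB parameters of the paper):

    import random

    CW_MIN, CW_MAX = 16, 1024

    def beb_update(cw, collided):
        # Standard binary exponential backoff: double on collision, reset on success.
        return min(2 * cw, CW_MAX) if collided else CW_MIN

    def mid_update(cw, collided, inc=2.0, dec=0.5):
        # Illustrative multiplicative increase / multiplicative decrease rule in the
        # spirit of CWMIDB (the factors are assumptions, not the paper's values).
        return min(cw * inc, CW_MAX) if collided else max(cw * dec, CW_MIN)

    def draw_backoff(cw):
        # Backoff counter uniformly selected from [0, CW-1].
        return random.randint(0, int(cw) - 1)

Decreasing the window multiplicatively rather than resetting it to CWmin keeps some memory of recent congestion, which is what reduces repeated collisions after a busy period.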

Keywords: backoff, contention window, CWMIDB, MANET

Procedia PDF Downloads 261
8881 Use of Machine Learning in Data Quality Assessment

Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho

Abstract:

Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise about how to establish and maintain data quality, which is now treated as a product to be defined, measured, analyzed and improved to meet the needs of consumers, who use these data in decision making and company strategies. Information of low quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions and poor risk management actions. The task of identifying, evaluating and selecting data sources of adequate quality for a given need has become costly for users, since the sources do not provide information about their quality. Traditional data quality control methods are based on user experience or business rules, which limits performance and slows down the process with less than desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value to companies and users. In this study, machine learning is applied to data quality analysis on different datasets, seeking to compare the performance of the techniques according to the dimensions of quality assessment. As a result, we were able to create a ranking of the approaches used, as well as a system able to carry out data quality assessment automatically.
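
The comparison and ranking of techniques can be sketched as follows (Python with scikit-learn; the feature matrix, labels and model settings are hypothetical stand-ins for the quality dimensions and datasets used in the study):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier

    # Hypothetical records described by quality-dimension features (completeness,
    # consistency, timeliness, accuracy) and labelled acceptable (1) / not acceptable (0).
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "neural_network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    }
    scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(name, round(score, 3))   # approaches ranked by cross-validated accuracy

Ranking approaches by cross-validated accuracy per quality dimension is one simple way to build the kind of automatic assessment system described above.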

Keywords: machine learning, data quality, quality dimension, quality assessment

Procedia PDF Downloads 133
8880 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
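
The averaging of the top-performing models can be sketched as follows (Python with scikit-learn; the feature layout, labels and model settings are hypothetical placeholders, not the Los Angeles dataset or the framework's exact configuration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier

    # Hypothetical predictors: season, weekday flag, forecast temperature, forecast wind,
    # yesterday's PM2.5, yesterday's ozone; target: 1 if air quality is unhealthy.
    rng = np.random.default_rng(1)
    X = rng.random((500, 6))
    y = (X[:, 4] + X[:, 5] > 1.0).astype(int)

    members = [LogisticRegression(max_iter=1000),
               RandomForestClassifier(n_estimators=200, random_state=0),
               MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)]
    for m in members:
        m.fit(X, y)

    # Average the three models' predicted probabilities to smooth out individual biases.
    x_new = rng.random((1, 6))
    p = np.mean([m.predict_proba(x_new)[0, 1] for m in members])
    print("probability of unhealthy air quality:", round(p, 3))

Retraining the member models as new observations arrive is what lets the framework self-adjust its parameters over time.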

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 105
8879 Optimization of Wind Off-Grid System for Remote Area: Egyptian Application

Authors: Marwa M. Ibrahim

Abstract:

The objective of this research is to study the technical and economic performance of a wind/diesel/battery (W/D/B) off-grid system supplying a small remote settlement of four families, using the HOMER software package. A second objective is to study the effect of the wind energy system on the cost of generated electricity, considering the avoided cost of CO₂ emissions as an external benefit of wind turbines, since they emit no pollutants during the operational phase. The system consists of a small wind turbine, battery storage and a diesel generator. The electrical energy caters to basic needs, for which the daily load pattern is estimated with an 8 kW peak. Net Present Cost (NPC) and Cost of Energy (COE) are used as economic criteria, while the measure of performance is the percentage of power shortage. Technical and economic parameters are defined to estimate the feasibility of the system under study. Optimum system configurations are estimated for the selected site in Egypt. Using the HOMER software, the simulation results show that W/D/B systems are economical for the assumed community site, as the price of generated electricity is about 0.285 $/kWh without taking external benefits into consideration and 0.221 $/kWh when CO₂ emissions are taken into consideration. W/D/B systems are also more economical than a diesel-only system, whose COE is 0.432 $/kWh.
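
The relation between NPC and COE used in HOMER-style analyses can be illustrated with a short sketch (Python; the interest rate, project lifetime and input figures are illustrative assumptions, not the values of this study):

    def crf(i, n):
        # Capital recovery factor for real interest rate i and project lifetime n (years).
        return i * (1 + i) ** n / ((1 + i) ** n - 1)

    def cost_of_energy(net_present_cost, annual_energy_served_kwh, i=0.06, n=25):
        # COE = annualized cost / annual energy served, with the annualized cost
        # obtained from the net present cost through the capital recovery factor.
        annualized_cost = net_present_cost * crf(i, n)
        return annualized_cost / annual_energy_served_kwh

    # Hypothetical system: 120,000 $ NPC serving 35,000 kWh per year
    print(round(cost_of_energy(120_000.0, 35_000.0), 3))   # $/kWh

Crediting the avoided CO₂ cost can be represented simply by subtracting that external benefit from the net present cost before the COE is computed.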

Keywords: renewable energy, hybrid energy system, on-off grid system, simulation, optimization and environmental impacts

Procedia PDF Downloads 93
8878 Modeling the Performance of Natural Sand-Bentonite Barriers after Infiltration with Polar and Non-Polar Hydrocarbon Leachates

Authors: Altayeb Qasem, Mousa Bani Baker, Amani Nawafleh

Abstract:

The complexity of the sand-bentonite liner barrier system calls for an adequate model that reflects the conditions imposed by the barrier materials and the characteristics of the permeants, which lead to hydraulic conductivity changes when liners are infiltrated with polar, non-polar, miscible and immiscible liquids. This paper is dedicated to developing a model for evaluating the hydraulic conductivity in the form of a simple indicator of the compatibility of the liner with the leachate. The model is based on two liner compositions (95% sand : 5% bentonite and 90% sand : 10% bentonite), two pressures (40 kPa and 100 kPa) and three leachates: water, ethanol and biofuel. Two characteristics of the leachates were used: the viscosity of the permeant and its octanol-water partitioning coefficient (Kow). Three characteristics of the liner mixtures that affect the hydraulic conductivity of the liner system were evaluated: the initial bentonite content (%), the free swelling index and the shrinkage limit of the initial liner mixture. Engineers can use this modest tool to predict potential liner failure in sand-bentonite barriers.

Keywords: liner performance, sand-bentonite barriers, viscosity, free swelling index, shrinkage limit, octanol-water partitioning coefficient, hydraulic conductivity, theoretical modeling

Procedia PDF Downloads 397
8877 Optimizing the Performance of Thermoelectric for Cooling Computer Chips Using Different Types of Electrical Pulses

Authors: Saleh Alshehri

Abstract:

Thermoelectric technology is currently being used in many industrial applications for cooling, heating and generating electricity. This research mainly focuses on using thermoelectric modules to cool down high-speed computer chips at different operating conditions. A previously developed and validated three-dimensional model for optimizing and assessing the performance of cascaded and non-cascaded thermoelectric modules is used in this study to investigate the possibility of decreasing the hotspot temperature of a computer chip. Additionally, a test assembly is built and tested at steady-state and transient conditions. The optimum thermoelectric current obtained at steady-state conditions is used to conduct a number of pulsed tests (i.e. transient tests) with different pulse shapes to cool the computer chip hotspots. The steady-state tests showed that, at a hotspot heat rate of 15.58 W (5.97 W/cm²), applying a thermoelectric current of 4.5 A decreased the hotspot temperature from its open-circuit value (89.3 °C) by 50.1 °C. The maximum and minimum hotspot temperatures were affected by the ON and OFF durations of the electrical current pulse: the maximum hotspot temperature resulted from a longer OFF pulse period, while a longer ON pulse period produced the minimum hotspot temperature.

Keywords: thermoelectric generator, TEG, thermoelectric cooler, TEC, chip hotspots, electronic cooling

Procedia PDF Downloads 124
8876 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture

Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz

Abstract:

Remote sensing has potential applications in assessing and monitoring plants' biophysical properties using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential applications of open-data-source satellite images (Sentinel 2 and Landsat 9) in estimating the biophysical properties of a wheat crop on a study farm in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of plant leaves. In addition, a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot, and the physicochemical properties of the soil (pH, texture, N, P₂O₅ and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral response and the extracted vegetation indices at the five sampling points. A scatter plot with the coefficient of determination (R²) and Root Mean Square Error (RMSE), together with a correlation (r) matrix prepared using the corr and heatmap functions in Python, was used to compare the performance of the Sentinel 2 and Landsat 9 vegetation indices against the drone and the SpectraVue 710s spectrometer. The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52), with a texture dominated by clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plant. Both the scatter plot and the correlation matrix showed that the Sentinel 2 vegetation indices correlate relatively better with the vegetation indices of the Buteo drone than the Landsat 9 indices do, while the Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Overall, Sentinel 2 showed better performance than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve the quality of the current study.
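
A typical vegetation index computation from the satellite bands can be sketched as follows (Python/NumPy; the band numbers reflect the standard Sentinel-2 and Landsat 9 band layouts, and the reflectance arrays are placeholders for the clipped field rasters):

    import numpy as np

    def ndvi(nir, red):
        # Normalized Difference Vegetation Index from near-infrared and red reflectance.
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-9)

    # Sentinel-2: band 8 = NIR, band 4 = red; Landsat 9 (OLI-2): band 5 = NIR, band 4 = red.
    nir_band = np.random.rand(100, 100)   # placeholder reflectance raster
    red_band = np.random.rand(100, 100)
    print(ndvi(nir_band, red_band).mean())

The same pattern extends to the other common indices (EVI, SAVI, NDRE, and so on), which only change the band combination and constants.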

Keywords: landsat 9, leaf spectrometer, sentinel 2, UAV

Procedia PDF Downloads 88
8875 Budget and the Performance of Public Enterprises: A Study of Selected Public Enterprises in Nasarawa State Nigeria (2009-2013)

Authors: Dalhatu, Musa Yusha’u, Shuaibu Sidi Safiyanu, Haliru Musa Hussaini

Abstract:

This study examined the budget and performance of public enterprises in Nasarawa State, Nigeria over the period 2009-2013. The study utilized secondary data obtained from four selected parastatals' budget allocations and revenue generation for the period under review. The simple correlation coefficient was used to analyze the extent of the relationship between the budget allocation and revenue generation of the parastatals. Findings revealed varying results. There was a weak positive correlation (0.21) between the expenditure and revenue of the Nasarawa Investment and Property Development Company (NIPDC). However, the study further revealed negative relationships of varying strength between the revenue and expenditure of the following parastatals over the period under review: Nasarawa State Water Board, -0.27 (weak); Nasarawa State Broadcasting Service, -0.52 (strong); and Nasarawa State College of Agriculture, -0.36 (weak). The study therefore recommends that the government increase its investment in NIPDC to enhance efficiency and profitability. It also recommends that the government strengthen fiscal responsibility, accountability and transparency in public parastatals.
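
The simple (Pearson) correlation used here can be reproduced with a few lines (Python/NumPy; the yearly figures below are hypothetical illustrations, not the parastatals' actual budget data):

    import numpy as np

    # Hypothetical yearly budget allocations and revenues (millions of naira), 2009-2013
    budget  = np.array([120.0, 135.0, 150.0, 160.0, 175.0])
    revenue = np.array([ 30.0,  28.0,  35.0,  35.0,  40.0])

    r = np.corrcoef(budget, revenue)[0, 1]   # simple (Pearson) correlation coefficient
    print(round(r, 2))

Values near +1 or -1 indicate a strong relationship between allocation and revenue, while values near 0, as reported for several of the parastatals above, indicate a weak one.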

Keywords: budget, public enterprises, revenue, enterprise

Procedia PDF Downloads 234
8874 Demographic Profile, Risk Factors and In-Hospital Outcomes of Acute Coronary Syndrome (ACS) in a Young Population in Pakistan: Single-Center Real-World Experience

Authors: Asma Qudrat, Abid Ullah, Rafi Ullah, Ali Raza, Shah Zeb, Syed Ali Shan Ul-Haq, Shahkar Ahmed Shah, Attiya Hameed Khan, Saad Zaheer, Umama Qasim, Kiran Jamal, Zahoor khan

Abstract:

Objectives: Coronary artery disease (CAD) is a major public health issue associated with high mortality and morbidity rates worldwide. Young patients with ACS have unique characteristics, with different demographic profiles and risk factors. Precise diagnosis and early risk stratification are important in guiding treatment and predicting the prognosis of young patients with ACS. The aim was to evaluate the associated demographic, risk factor and outcome profiles of ACS in young patients. Methods: The research followed a retrospective, single-centre design; patients diagnosed with a first event of ACS at a young age (>18 and <40 years) were included. Data collection covered the demographic profiles, risk factors and in-hospital outcomes of young ACS patients. The patients' data were retrieved from the Electronic Medical Records (EMR) of the Peshawar Institute of Cardiology (PIC), and all characteristics were assessed. Results: In this study, 77% of patients were male and 23% were female. The risk factors assessed showed significant associations with CAD (P < 0.01). The most common presentation was STEMI (45%) among young ACS patients. The angiographic pattern showed single vessel disease (SVD) in 49%, double vessel disease (DVD) in 17% and triple vessel disease (TVD) in 10%, and the left anterior descending (LAD) artery was the most commonly involved artery (54%). Conclusion: Male sex was predominant among young ACS patients, SVD was the most common coronary angiographic finding, and the assessed risk factors showed significant associations with CAD.

Keywords: coronary artery disease, Non-ST elevation myocardial infarction, ST elevation myocardial infarction, unstable angina, acute coronary syndrome

Procedia PDF Downloads 143
8873 Evaluation and Compression of Different Language Transformer Models for Semantic Textual Similarity Binary Task Using Minority Language Resources

Authors: Ma. Gracia Corazon Cayanan, Kai Yuen Cheong, Li Sha

Abstract:

Training a language model for a minority language has been a challenging task. The lack of available corpora to train and fine-tune state-of-the-art language models is still a challenge in the area of Natural Language Processing (NLP). Moreover, the need for high computational resources and bulk data limits the attainment of this task. In this paper, we present the following contributions: (1) we introduce and use a Tagalog-English (TL-EN) translation pair set to pre-train a language model for a minority language resource; (2) we fine-tune and evaluate top-ranking pre-trained semantic textual similarity binary task (STSB) models on both the TL-EN and STS dataset pairs; and (3) we reduce the size of the models to offset the need for high computational resources. Based on our results, the models that were pre-trained on translation pairs and STS pairs can perform well on the STSB task. Also, reducing a model to a smaller dimension has no negative effect on performance and even yields a notable increase in similarity scores. Moreover, pre-training on a similar dataset has a substantial effect on the model's performance scores.
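
The STSB-style decision on a sentence pair can be sketched as a thresholded cosine similarity between the encoder's embeddings (Python/NumPy; the embedding dimension, random vectors and threshold are placeholders, not the models or cut-off used in the paper):

    import numpy as np

    def cosine_similarity(u, v):
        # Similarity of two sentence embeddings.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical embeddings produced by a (possibly dimension-reduced) transformer encoder
    emb_tl = np.random.rand(256)   # Tagalog sentence
    emb_en = np.random.rand(256)   # English sentence

    score = cosine_similarity(emb_tl, emb_en)
    print(score, score > 0.5)      # binary similar / not-similar decision at an assumed threshold

Dimension reduction (for example, truncating or projecting the embeddings) changes only the vector length in this computation, which is why the smaller models can be evaluated with exactly the same procedure.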

Keywords: semantic matching, semantic textual similarity binary task, low resource minority language, fine-tuning, dimension reduction, transformer models

Procedia PDF Downloads 190
8872 Meteosat Second Generation Image Compression Based on the Radon Transform and Linear Predictive Coding: Comparison and Performance

Authors: Cherifi Mehdi, Lahdir Mourad, Ameur Soltane

Abstract:

Image compression is used to reduce the number of bits required to represent an image. The Meteosat Second Generation (MSG) satellite allows the acquisition of 12 image files every 15 minutes, which results in very large databases. The transform selected for image compression should help reduce the amount of data representing the images. The Radon transform retrieves the Radon points that represent the sum of the pixels along a given angle for each direction. Linear predictive coding (LPC) with filtering provides a good decorrelation of the Radon points using a predictor built from the Symmetric Nearest Neighbor (SNN) filter coefficients, which introduces losses during decompression. Finally, Run Length Coding (RLC) gives a high and fixed compression ratio regardless of the input image. In this paper, a novel image compression method for MSG images based on the Radon transform and linear predictive coding (LPC) is proposed. MSG image compression based on the Radon transform and LPC provides a good compromise between compression and reconstruction quality. A comparison of our method with others, two based on the DCT and one on biorthogonal DWT filtering, is carried out to show the robustness of the Radon transform against quantization noise and to evaluate the performance of our method. Evaluation criteria such as PSNR and the compression ratio demonstrate the efficiency of our compression method.
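
The chain described above can be sketched end to end (Python with scikit-image; the previous-sample predictor and coarse quantization used here are simplifications standing in for the SNN-filter predictor of the paper):

    import numpy as np
    from skimage.transform import radon

    def run_length_encode(values):
        # Collapse runs of identical symbols into (symbol, count) pairs.
        runs, prev, count = [], values[0], 1
        for v in values[1:]:
            if v == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = v, 1
        runs.append((prev, count))
        return runs

    def compress_sketch(image, angles=np.arange(0.0, 180.0, 1.0)):
        sinogram = radon(image.astype(float), theta=angles)            # Radon points per angle
        residual = np.diff(sinogram, axis=0, prepend=sinogram[:1, :])  # simple linear prediction error
        quantized = np.round(residual).astype(int)                     # coarse quantization (lossy)
        return run_length_encode(list(quantized.ravel()))

The prediction step decorrelates neighbouring Radon points so that the quantized residuals contain long runs, which is exactly what makes the final run-length coding effective.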

Keywords: image compression, radon transform, linear predictive coding (LPC), run length coding (RLC), meteosat second generation (MSG)

Procedia PDF Downloads 399
8871 Artificial Bee Colony Optimization for SNR Maximization through Relay Selection in Underlay Cognitive Radio Networks

Authors: Babar Sultan, Kiran Sultan, Waseem Khan, Ijaz Mansoor Qureshi

Abstract:

In this paper, a novel idea for the performance enhancement of the secondary network is proposed for Underlay Cognitive Radio Networks (CRNs). In underlay CRNs, primary users (PUs) impose strict interference constraints on the secondary users (SUs). The proposed scheme is based on Artificial Bee Colony (ABC) optimization for relay selection and power allocation to handle this primary challenge of underlay CRNs. ABC is a simple, population-based optimization algorithm which attains a global optimum solution by combining local search methods (employed and onlooker bees) and global search methods (scout bees). The proposed two-phase relay selection and power allocation algorithm aims to maximize the signal-to-noise ratio (SNR) at the destination while operating in underlay mode. The proposed algorithm has low computational complexity, and its performance is verified through simulation results for different numbers of potential relays, different interference threshold levels and different transmit power thresholds for the selected relays.
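
The quantity being maximized can be sketched for a dual-hop amplify-and-forward link under underlay interference constraints (Python; the channel gains, powers and interference threshold are illustrative values, and exhaustive search over the candidates is shown only to make the objective explicit, since the paper uses ABC instead):

    def af_end_to_end_snr(p_s, p_r, g_sr, g_rd, g_sp, g_rp, i_th, noise=1e-3):
        # Cap transmit powers so the interference at the primary receiver stays below i_th.
        p_s = min(p_s, i_th / g_sp)
        p_r = min(p_r, i_th / g_rp)
        g1 = p_s * g_sr / noise                 # source -> relay SNR
        g2 = p_r * g_rd / noise                 # relay -> destination SNR
        return g1 * g2 / (g1 + g2 + 1.0)        # standard AF end-to-end SNR

    # Hypothetical candidate relays described by their channel gains
    relays = [dict(g_sr=0.8, g_rd=0.6, g_rp=0.05),
              dict(g_sr=0.5, g_rd=0.9, g_rp=0.02),
              dict(g_sr=0.7, g_rd=0.7, g_rp=0.10)]
    best = max(relays, key=lambda r: af_end_to_end_snr(1.0, 1.0, r["g_sr"], r["g_rd"],
                                                       g_sp=0.03, g_rp=r["g_rp"], i_th=0.01))
    print(best)

In the paper, the employed, onlooker and scout bee phases of ABC explore the joint space of relay choice and power levels instead of this brute-force enumeration, which is what keeps the complexity low when the number of candidates grows.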

Keywords: artificial bee colony, underlay spectrum sharing, cognitive radio networks, amplify-and-forward

Procedia PDF Downloads 562
8870 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

In actuarial analyses of lifetimes, mainly models accounting for observable risk factors have been developed. Within this context, the Cox proportional hazards (CPH) model is commonly used to assess the effects of observable covariates, such as gender, age and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime interval. This may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models, first through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real dataset studies showed differences between the classical Cox model and the shared frailty model.
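
The shared frailty extension of the Cox model can be written compactly as follows (a standard formulation with two common parametric choices for the frailty; the notation is generic rather than the paper's exact parameterization):

\[
h_{ij}(t \mid Z_i) = Z_i \, h_0(t) \exp\!\left(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}\right),
\qquad
Z_i \sim \mathrm{Gamma}\!\left(\tfrac{1}{\theta},\tfrac{1}{\theta}\right)
\quad\text{or}\quad
\log Z_i \sim \mathcal{N}(0,\sigma^2),
\]

where \(Z_i\) is the unobserved frailty shared by all subjects \(j\) in group \(i\); the gamma choice has mean 1, so that \(\theta\) (or \(\sigma^2\) in the log-normal case) measures the unobserved heterogeneity, and \(\theta \to 0\) recovers the classical Cox model.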

Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study

Procedia PDF Downloads 346
8869 Dynamics Characterizations of Dielectric Electro-Active Polymer Pull Actuator for Vibration Control

Authors: A. M. Wahab, E. Rustighi

Abstract:

Elastomeric dielectric materials have recently become a new alternative for actuator technology. The ability of dielectric elastomers placed between two electrodes to withstand large strains when the electrodes are charged has attracted the attention of many researchers studying this material for actuator technology. Thus, in the past few years, Danfoss Ventures A/S has established its own dielectric electro-active polymer (DEAP), called PolyPower. The main objective of this work was to investigate the dynamic characteristics, for vibration control, of a PolyPower actuator folded in a 'pull' configuration. A range of experiments was carried out on the folded actuator, including passive (without electrical load) and active (with electrical load) testing. For both categories, static and dynamic tests were performed to determine the behavior of the folded DEAP actuator. Voltage-strain experiments show that the folded DEAP actuator is a non-linear system. It is also shown that the supplied voltage has no effect on the natural frequency. Finally, varying the AC voltage with different amplitudes and frequencies reveals the parameters that influence the performance of the folded DEAP actuator. As a result, the actuator performance is dominated by the frequency dependence of the elastic response and is less influenced by the dielectric properties.

Keywords: dielectric electro-active polymer, pull actuator, static, dynamic, electromechanical

Procedia PDF Downloads 240
8868 Effect of Supplementing Different Sources and Levels of Phytase Enzyme to Diets on Productive Performance for Broiler Chickens

Authors: Sunbul Jassim Hamodi, Muna Khalid Khudayer, Firas Muzahem Hussein

Abstract:

The experiment was conducted to study the effect of supplementing different sources of phytase enzyme (bacterial, fungal, enzyme mixture) at levels of 250, 500, and 750 FTY/kg feed, compared with a control diet, on the performance of one thousand and fifty broiler chicks (Ross 308) from one day old (initial weight 39.78 g) until 42 days of age. The study involved 10 treatments with three replicates per treatment (35 chicks/replicate). Treatments were as follows: T1: control diet (without any addition). T2: bacterial phytase at 250 FTY/kg feed. T3: bacterial phytase at 500 FTY/kg feed. T4: bacterial phytase at 750 FTY/kg feed. T5: fungal phytase at 250 FTY/kg feed. T6: fungal phytase at 500 FTY/kg feed. T7: fungal phytase at 750 FTY/kg feed. T8: enzyme mixture at 250 U/kg feed. T9: enzyme mixture at 500 U/kg feed. T10: enzyme mixture at 750 U/kg feed. The results revealed that supplementing 750 U of the enzyme mixture significantly (p < 0.05) increased body weight at 6 weeks compared with 250 FTY bacterial phytase/kg feed, 750 FTY bacterial phytase/kg feed, and 750 FTY fungal phytase/kg feed. Supplementing the different sources and levels of phytase also improved cumulative weight gain for the 500 FTY bacterial phytase, 250 FTY fungal phytase, 500 FTY fungal phytase, 250 U enzyme mixture, 500 U enzyme mixture, and 750 U enzyme mixture treatments compared with the 750 FTY fungal phytase treatment. Regarding cumulative feed consumption, the 500 FTY fungal phytase and 250 U enzyme mixture treatments increased significantly compared with the control group and the 750 FTY fungal phytase treatment during weeks 1-6. Cumulative feed conversion was significantly improved for the 500 U enzyme mixture treatment compared with the worst feed conversion ratio, recorded for 250 FTY bacterial phytase/kg feed. There were no significant differences between treatments in the relative weights of internal organs, carcass cuts, dressing percentage, or production index. Mortality was higher in the 750 FTY fungal phytase treatment than in the other treatments.

Keywords: phytase, phytic acid, broiler, productive performance

Procedia PDF Downloads 282
8867 Study on the Pavement Structural Performance of Highways in the North China Region Based on Pavement Distress and Ground Penetrating Radar

Authors: Mingwei Yi, Liujie Guo, Zongjun Pan, Xiang Lin, Xiaoming Yi

Abstract:

With the rapid expansion of road construction mileage in China, the scale of road maintenance needs has escalated concurrently. As the service life of roads extends, the design of pavement repair and maintenance becomes a crucial component in preserving good pavement performance. The remaining service life of the asphalt pavement structure is a vital parameter in the lifecycle maintenance design of asphalt pavements. Based on an analysis of pavement structural integrity, this study introduces a characterization and assessment of the remaining life of existing asphalt pavement structures. It proposes indicators such as the transverse crack spacing and the length of longitudinal cracks. The transverse crack spacing decreases as the maintenance interval increases and with the extended use of semi-rigid base layer structures, although this trend becomes less pronounced once maintenance intervals exceed four years. The length of longitudinal cracks increases with longer maintenance intervals, but this trend weakens after five years. These indicators can support more standardized and scientific decision-making in highway maintenance.

Keywords: structural integrity, highways, pavement evaluation, asphalt concrete pavement

Procedia PDF Downloads 49
8866 The Ultimate Scaling Limit of Monolayer Material Field-Effect-Transistors

Authors: Y. Lu, L. Liu, J. Guo

Abstract:

Monolayer graphene and dichalcogenide semiconductor materials attract extensive research interest for potential nanoelectronics applications. The ultimate scaling limit of double-gate MoS2 field-effect transistors (FETs) with a monolayer thin body is examined and compared with ultra-thin-body Si FETs by using self-consistent quantum transport simulation in the presence of phonon scattering. Modelling of phonon scattering, quantum mechanical effects, and self-consistent electrostatics allows us to accurately assess the performance potential of monolayer MoS2 FETs. The results revealed that monolayer MoS2 FETs show 52% smaller drain-induced barrier lowering (DIBL) and 13% smaller sub-threshold swing (SS) than 3 nm-thick-body Si FETs at a channel length of 10 nm with the same gating. With a requirement of SS < 100 mV/dec, the scaling limit of monolayer MoS2 FETs is assessed to be 5 nm, compared with 8 nm for the ultra-thin-body Si counterparts, owing to the monolayer thin body and the higher effective mass, which reduces direct source-to-drain tunnelling. Compared against the ITRS target for high-performance logic devices in 2023, double-gate monolayer MoS2 FETs can fulfil the ITRS requirements.
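
As a side note on how the two figures of merit are typically defined, the short Python sketch below extracts the subthreshold swing (SS) and DIBL from transfer curves at low and high drain bias; the sample I-V values are illustrative placeholders, not the paper's simulation output.

```python
import numpy as np

# Illustrative transfer curves I_D(V_G) at low and high drain bias (placeholder data).
vg = np.linspace(0.0, 0.5, 11)                      # gate voltage sweep [V]
id_lin = 1e-9 * 10 ** (vg / 0.07)                   # I_D at V_DS = 0.05 V, ~70 mV/dec slope
id_sat = 1e-9 * 10 ** ((vg + 0.03) / 0.07)          # I_D at V_DS = 0.6 V, shifted by DIBL
vds_lin, vds_sat = 0.05, 0.6

def subthreshold_swing(vg, i_d):
    """SS = dV_G / d(log10 I_D) in the subthreshold region, returned in mV/dec."""
    slope = np.gradient(np.log10(i_d), vg)          # decades of current per volt of gate bias
    return 1e3 / slope.max()                        # steepest (best) part of the curve

def threshold_voltage(vg, i_d, i_crit=1e-7):
    """Constant-current threshold: V_G at which I_D crosses i_crit."""
    return np.interp(np.log10(i_crit), np.log10(i_d), vg)

ss = subthreshold_swing(vg, id_lin)
dibl = (threshold_voltage(vg, id_lin) - threshold_voltage(vg, id_sat)) / (vds_sat - vds_lin)
print(f"SS = {ss:.0f} mV/dec, DIBL = {dibl * 1e3:.0f} mV/V")
```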

Keywords: nanotransistors, monolayer 2D materials, quantum transport, scaling limit

Procedia PDF Downloads 219
8865 Analysis of Sea Waves Characteristics and Assessment of Potential Wave Power in Egyptian Mediterranean Waters

Authors: Ahmed A. El-Gindy, Elham S. El-Nashar, Abdallah Nafaa, Sameh El-Kafrawy

Abstract:

Energy generation from marine sources has become one of the most preferable options since it is clean and environmentally friendly. Egypt has long shores along the Mediterranean, with important cities that need energy resources and with significant wave energy, yet no detailed studies have been done on the wave energy distribution in Egyptian waters. The objective of this paper is to assess the wave power available in Egyptian waters in order to choose the most suitable devices for this area. This paper deals with the characteristics and power of offshore waves in Egyptian waters. Since field observations of waves are infrequent and require much technical work, the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis data for the Mediterranean, with a grid size of 0.75 degree, which is a relatively coarse grid, are considered in the present study for a preliminary assessment of sea wave characteristics and power. The data cover the period from 2012 to 2014 and consist of significant wave height (swh), mean wave period (mwp), and wave direction taken at six-hourly intervals, at seven chosen stations and at grid points covering Egyptian waters. The wave power (wp) formula was used to calculate the energy flux. Descriptive statistical analysis, including monthly means and standard deviations of swh, mwp, and wp, was performed. Percentiles of wave heights and their corresponding power were computed as a tool for choosing the technology best suited to each site. Surfer software is used to show the spatial distributions of wp. The analysis of the data at the seven chosen stations determined the potential wp off important Egyptian cities. Offshore of Al Saloum and Marsa Matruh, the highest wp occurred in January and February, (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and October, (1.49-1.69) ± (1.45-1.74) kW/m. In front of Alexandria and Rashid, the highest wp occurred in January and February, (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and September, (1.29-2.01) ± (1.31-1.83) kW/m. In front of Damietta and Port Said, the highest wp occurred in February, (14.29-17.61) ± (21.61-27.10) kW/m, and the lowest occurred in June, (0.94-0.96) ± (0.71-0.72) kW/m. In winter, the probabilities of waves higher than 0.8 m, in percentage, were (76.56-80.33) ± (11.62-12.05) at Al Saloum and Marsa Matruh, (73.67-74.79) ± (16.21-18.59) at Alexandria and Rashid, and (66.28-68.69) ± (17.88-17.90) at Damietta and Port Said. In spring, the percentages were (48.17-50.92) ± (5.79-6.56) at Al Saloum and Marsa Matruh, (39.38-43.59) ± (9.06-9.34) at Alexandria and Rashid, and (31.59-33.61) ± (10.72-11.25) at Damietta and Port Said. In summer, the probabilities were (57.70-66.67) ± (4.87-6.83) at Al Saloum and Marsa Matruh, (59.96-65.13) ± (9.14-9.35) at Alexandria and Rashid, and (46.38-49.28) ± (10.89-11.47) at Damietta and Port Said. In autumn, the probabilities were (58.75-59.56) ± (2.55-5.84) at Al Saloum and Marsa Matruh, (47.78-52.13) ± (3.11-7.08) at Alexandria and Rashid, and (41.16-42.52) ± (7.52-8.34) at Damietta and Port Said.
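
A minimal sketch of the kind of computation implied by the wp formula: for deep water, the energy flux per metre of wave crest is commonly approximated as P = rho g^2 H_s^2 T_e / (64 pi). The Python snippet below applies this to illustrative swh/mwp samples, using the mean wave period as a stand-in for the energy period; both the placeholder values and that substitution are assumptions, not the authors' exact procedure.

```python
import math

RHO = 1025.0   # sea-water density [kg/m^3]
G = 9.81       # gravitational acceleration [m/s^2]

def wave_power_kw_per_m(swh, period):
    """Deep-water wave energy flux [kW/m] from significant wave height [m] and wave period [s]."""
    return RHO * G**2 * swh**2 * period / (64.0 * math.pi) / 1000.0

# Illustrative six-hourly samples (swh [m], mwp [s]) at one grid point -- placeholder values.
samples = [(1.8, 6.2), (2.4, 7.0), (0.9, 5.1), (3.1, 7.8)]
powers = [wave_power_kw_per_m(h, t) for h, t in samples]
print(f"mean wp = {sum(powers) / len(powers):.2f} kW/m, max wp = {max(powers):.2f} kW/m")
```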

Keywords: distribution of sea wave energy, Egyptian Mediterranean waters, wave characteristics, wave power

Procedia PDF Downloads 175