Search results for: mean absolute error

430 Nonlinear Estimation Model for Rail Track Deterioration

Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami

Abstract:

Rail transport authorities around the world face a significant challenge in predicting rail infrastructure maintenance work over long periods of time. Generally, maintenance monitoring and prediction are conducted manually. Under economic restrictions, rail transport authorities are pursuing improved modern methods that can provide precise predictions of rail maintenance time and location. Such methods are expected to yield models that minimize the human error strongly associated with manual prediction. These models will help authorities understand how track degradation occurs over time under changing conditions (e.g., rail load, rail type, rail profile). A well-structured technique for identifying the precise time at which rail tracks fail is needed in order to minimize maintenance cost and time and to keep vehicles safe. The rail track characteristics that have been collected over the years are used in developing rail track degradation prediction models. Because these data have been collected in large volumes, both electronically and manually, they may contain errors; sometimes these errors make the data impossible to use in prediction model development, which is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks: accurate models increase track safety and decrease maintenance costs in the long term. In this research, a short review of rail track degradation prediction models is presented before rail track degradation is estimated for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
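
A minimal sketch of the first-order Sugeno fuzzy modeling at the heart of ANFIS is given below, fitted to synthetic tonnage-roughness data (the two-rule structure and all numbers are assumed for illustration; only the consequent parameters are estimated, which is the least-squares half of the classic ANFIS hybrid-learning rule):

```python
# Sketch of a first-order Sugeno fuzzy system fitted by least squares.
# Data are synthetic: track roughness as a function of cumulative load (MGT).
import numpy as np

rng = np.random.default_rng(0)
mgt = rng.uniform(0, 100, 200)                        # cumulative load, MGT
rough = 0.5 + 0.02 * mgt + 5e-4 * mgt**2 + rng.normal(0, 0.3, 200)

centers, sigma = np.array([25.0, 75.0]), 25.0         # two Gaussian fuzzy sets
w = np.exp(-(mgt[:, None] - centers) ** 2 / (2 * sigma**2))
wbar = w / w.sum(axis=1, keepdims=True)               # normalized firing strengths

# Output y = sum_i wbar_i * (p_i * x + q_i) is linear in (p_i, q_i):
A = np.hstack([wbar * mgt[:, None], wbar])
theta, *_ = np.linalg.lstsq(A, rough, rcond=None)
pred = A @ theta

print("MAE:", np.mean(np.abs(pred - rough)))          # mean absolute error of fit
```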

Keywords: ANFIS, MGT, prediction modeling, rail track degradation

Procedia PDF Downloads 292
429 Green Extraction Technologies of Flavonoids Containing Pharmaceuticals

Authors: Lamzira Ebralidze, Aleksandre Tsertsvadze, Dali Berashvili, Aliosha Bakuridze

Abstract:

Nowadays, there is an increasing demand for biologically active substances from vegetable, animal, and mineral resources. The pharmaceutical, cosmetic, and nutrition industries have a strong interest in the use of natural compounds. The biggest drawback of conventional extraction methods is the need to use a large volume of organic extractants. The removal of the organic solvent is a multi-stage process, its absolute removal cannot be achieved, and residues still appear in the final product as impurities. The large amount of waste containing organic solvent harms not only human health but also the environment. Accordingly, researchers have focused on improving extraction methods, aiming to minimize the use of organic solvents and energy and to use alternative solvents and renewable raw materials. In this context, the principles of green extraction were formed. Green extraction is a need of today's environment and a concept that corresponds directly to the challenges of the 21st century. The extraction of biologically active compounds based on green extraction principles is vital for preserving and maintaining biodiversity. Novel green extraction technologies are known as 'cold methods' because the temperature during the extraction process is relatively low and therefore does not negatively affect the stability of plant compounds. Novel technologies provide great opportunities to reduce or replace toxic organic solvents, improve process efficiency, enhance extraction yield, and improve the quality of the final product. The objective of this research is the development of green technologies for flavonoid-containing preparations. Methodology: At the first stage of the research, flavonoid-containing preparations (Tincture Herba Leonuri, flamine, rutine) were prepared using conventional extraction methods: maceration, bismaceration, percolation, and repercolation. In parallel, the same preparations were prepared using green technologies: microwave-assisted and UV extraction methods. Product quality characteristics were evaluated by pharmacopoeial methods. At the next stage of the research, the technological and economic characteristics and cost efficiency of products prepared by conventional and novel technologies were determined. For the extraction of flavonoids, water is used as the extractant. Surface-active substances are used as co-solvents to reduce surface tension, which significantly increases the solubility of polyphenols in water. Different concentrations of water-glycerol mixtures, cyclodextrin, and ionic solvents were used for the extraction process. In vitro antioxidant activity will be studied by the spectrophotometric method, using DPPH (2,2-diphenyl-1-picrylhydrazyl) as an antioxidant assay. A further advantage of green extraction methods is the possibility of obtaining higher yields at low temperature while limiting the extraction of undesirable compounds, which is especially important for extracting thermosensitive compounds and maintaining their stability.
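
As a small illustration of the DPPH assay arithmetic mentioned above (the absorbance values below are invented, not the study's data), the standard percent-inhibition formula can be applied directly:

```python
# DPPH radical scavenging from spectrophotometric readings:
# inhibition% = (A_control - A_sample) / A_control * 100
a_control = 0.842                 # absorbance of DPPH solution without extract
a_samples = [0.512, 0.388, 0.265] # absorbances with increasing extract dose (assumed)

for a in a_samples:
    inhibition = (a_control - a) / a_control * 100.0
    print(f"A = {a:.3f} -> {inhibition:.1f}% DPPH inhibition")
```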

Keywords: extraction, green technologies, natural resources, flavonoids

Procedia PDF Downloads 105
428 Computational Fluid Dynamics Simulations and Analysis of Air Bubble Rising in a Column of Liquid

Authors: Baha-Aldeen S. Algmati, Ahmed R. Ballil

Abstract:

Multiphase flows occur widely in many engineering and industrial processes as well as in the environment we live in. In particular, bubbly flows are crucial phenomena in fluid flow applications and can be studied and analyzed experimentally, analytically, and computationally. In the present paper, the dynamic motion of an air bubble rising within a column of liquid is numerically simulated using the open-source CFD modeling tool OpenFOAM. An interface-capturing numerical algorithm built into OpenFOAM, the MULES algorithm, is chosen to solve an appropriate mathematical model based on the volume of fluid (VOF) numerical method. The bubbles are initially spherical and start from rest in the stagnant column of liquid. The algorithm is first verified against numerical results and is also validated against available experimental data. The comparison revealed that this algorithm provides results in very good agreement with the 2D numerical data of other CFD codes. The bubble shape and terminal velocity obtained from the 3D numerical simulation also showed very good qualitative and quantitative agreement with the experimental data: the simulated rising bubbles yield a very small percentage error in bubble terminal velocity compared with experiment. The obtained results demonstrate the capability of OpenFOAM as a powerful tool to predict the rising behavior of spherical bubbles in a stagnant column of liquid. This will pave the way for a deeper understanding of the phenomenon of the rise of bubbles in liquids.
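
A hedged, much-simplified sketch of the volume-of-fluid idea in 1D is shown below (an assumed toy setup, far simpler than the paper's 3D OpenFOAM/MULES solver): a volume-fraction field alpha (1 = air, 0 = liquid) is advected with an upwind scheme, and the interface is wherever alpha crosses 0.5.

```python
import numpy as np

n = 200
dx, u = 1.0 / n, 1.0
dt = 0.5 * dx / u                                       # CFL number 0.5
x = (np.arange(n) + 0.5) * dx
alpha = ((x > 0.1) & (x < 0.2)).astype(float)           # initial air-fraction slab

for _ in range(100):
    flux = u * alpha                                    # first-order upwind (u > 0)
    alpha -= dt / dx * (flux - np.roll(flux, 1))        # periodic domain

print(f"interface (alpha = 0.5) near x = {x[np.abs(alpha - 0.5).argmin()]:.3f}")
```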

Keywords: CFD simulations, multiphase flows, OpenFOAM, rise of bubble, volume of fluid method, VOF

Procedia PDF Downloads 97
427 Protective Effect of Levetiracetam on Aggravation of Memory Impairment in Temporal Lobe Epilepsy by Phenytoin

Authors: Asher John Mohan, Krishna K. L.

Abstract:

Objectives: (1) To assess the extent of memory impairment induced by Phenytoin (PHT) at normal and reduced doses in temporal lobe epileptic mice. (2) To evaluate the protective effect of Levetiracetam (LEV) against the aggravation of memory impairment in temporal lobe epileptic mice by PHT. Materials and Methods: Albino mice of either sex (n=36) were used for the study for a period of 64 days. Convulsions were induced by intraperitoneal administration of pilocarpine 280 mg/kg on every 6th day. A radial arm maze (RAM) was employed to evaluate memory impairment on every 7th day. The anticonvulsant and memory impairment activities were assessed for PHT at normal and reduced doses, both alone and in combination with LEV. RAM error scores and convulsive scores were the parameters considered for this study. Brain acetylcholinesterase and glutamate were determined, along with histopathological studies of the frontal cortex. Results: Administration of PHT for 64 days aggravated memory impairment in temporal lobe epileptic mice. Although reducing the PHT dose decreased the degree of memory impairment, it also decreased anticonvulsant potency. Combination with LEV not only corrected the impaired memory but also compensated for the loss of potency caused by reducing the dose of the antiepileptic drug. These findings were confirmed by enzyme and neurotransmitter levels in addition to histopathological studies. Conclusion: This study thus lays a foundation for combining a nootropic anticonvulsant with an antiepileptic drug to curb the adverse effect of memory impairment associated with temporal lobe epilepsy. However, further extensive research is required before this approach can be incorporated into disease therapy.

Keywords: anti-epileptic drug, Phenytoin, memory impairment, Pilocarpine

Procedia PDF Downloads 291
426 Autonomous Flight Control for Multirotor by Alternative Input Output State Linearization with Nested Saturations

Authors: Yong Eun Yoon, Eric N. Johnson, Liling Ren

Abstract:

The multirotor is one of the most popular types of small unmanned aircraft systems and has already been used in many areas, including transport, military, surveillance, and leisure. Together with its popularity, the need for proper flight control is growing, because in most applications the vehicle is required to conduct its missions autonomously, which rests in many respects on autonomous flight control. There have been many studies of multirotor flight control, but there is still room for enhancement in terms of performance and efficiency. This paper presents an autonomous flight control method for the multirotor based on alternative input-output linearization coupled with nested saturations. With an alternative choice of the output of the multirotor flight control system, the computational cost of the Lie-algebraic calculations can be reduced, and the linearized system can be stabilized by introducing nested saturations with real poles of our own design. Stabilization of the internal dynamics is also based on the nested saturations and accompanies the determination of part of the desired states. In particular, outer control loops involving state variables that are not originally included in the output of the flight control system are naturally rendered through this internal dynamics stabilization. We also observe that the desired tilting angles are determined by the error dynamics of the outer loops. Simulation results show that in all tracking situations the multirotor stabilizes itself with small time constants, preceded by a tuning process for the control parameters with a relatively low degree of complexity. Future work includes control of the piecewise linear behavior of the multirotor under actuator saturations and the optimal determination of desired states while tracking multiple waypoints.
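
A hedged sketch of the nested-saturation idea on an assumed toy system is shown below: a double integrator x'' = u stands in for one translational axis after input-output linearization, and the gains and limits are purely illustrative, not the paper's design.

```python
import numpy as np

def sat(v, limit):
    return np.clip(v, -limit, limit)

x, v, dt = 5.0, 0.0, 0.01          # position, velocity, time step
for _ in range(3000):
    # inner saturation bounds the position term, outer bounds the command
    u = -sat(v + sat(0.5 * x, 1.0), 2.0)
    v += u * dt
    x += v * dt

print(f"final state: x = {x:.4f}, v = {v:.4f}")   # both approach 0
```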

Keywords: automatic flight control, input output linearization, multirotor, nested saturations

Procedia PDF Downloads 201
425 6 DOF Cable-Driven Haptic Robot for Rendering High Axial Force with Low Off-Axis Impedance

Authors: Naghmeh Zamani, Ashkan Pourkand, David Grow

Abstract:

This paper presents the design and mechanical model of a hybrid impedance/admittance haptic device optimized for applications like bone drilling, spinal awl probe use, and other surgical techniques where high force is required in the tool-axial direction and low impedance is needed in all other directions. The required performance levels cannot be satisfied by existing off-the-shelf haptic devices. This design may allow critical improvements in simulator fidelity for surgery training. The device consists primarily of two low-mass (carbon fiber) plates with a rod passing through them. Collectively, the device provides 6 DOF. The rod slides through a bushing in the top plate and is connected to the bottom plate with a universal joint constrained to move in only 2 DOF, allowing axial torque display to the user's hand. The two parallel plates are actuated and located by means of four cables pulled by motors. The forward kinematic equations are derived to ensure that the plates' orientation remains constant, and the corresponding equations are solved using the Newton-Raphson method. The static force/torque equations are also presented. Finally, we present the predicted distribution of location error, cable velocity, cable tension, force, and torque for the device. These results and preliminary hardware fabrication indicate that this design may provide a revolutionary approach to haptic display of many surgical procedures by means of an architecture that allows arbitrary workspace scaling: the height and width of the workspace can be scaled arbitrarily.
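
Below is a hedged sketch of the Newton-Raphson step used to solve kinematic equations of this kind; the two-equation residual is a generic planar-linkage example standing in for the paper's cable geometry, which is not reproduced here.

```python
import numpy as np

def f(q):
    # assumed 2-equation kinematic residual in unknowns q = (a, b)
    a, b = q
    return np.array([np.cos(a) + np.cos(a + b) - 1.2,
                     np.sin(a) + np.sin(a + b) - 0.8])

def jacobian(q, h=1e-6):
    # central finite-difference Jacobian, adequate for a small residual
    J = np.empty((2, 2))
    for j in range(2):
        dq = np.zeros(2); dq[j] = h
        J[:, j] = (f(q + dq) - f(q - dq)) / (2 * h)
    return J

q = np.array([0.5, 0.5])                       # initial guess
for _ in range(20):
    step = np.linalg.solve(jacobian(q), f(q))  # Newton step: J dq = f(q)
    q -= step
    if np.linalg.norm(step) < 1e-10:
        break

print("solution:", q, "residual:", f(q))
```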

Keywords: cable direct driven robot, haptics, parallel plates, bone drilling

Procedia PDF Downloads 233
424 Seepage Analysis through Earth Dam Embankment: Case Study of Batu Dam

Authors: Larifah Mohd Sidik, Anuar Kasa

Abstract:

In recent years, the demand for raw water has been increasing along with the growth of the economy and population; hence, the construction and operation of dams is one of the solutions to water resources management problems. The stability of the embankment should be taken into consideration when evaluating the safety of retained water. Dam safety is mostly assessed from numerous measurable components, for instance, seepage flow rate, pore water pressure, and deformation of the embankment. Seepage and slope stability are the primary and most important indicators of the overall safety behavior of dams. This research study was conducted to evaluate the static-condition seepage and slope stability performance of Batu dam, located in the capital city of Kuala Lumpur. The numerical software GeoStudio 2012 was employed to analyse seepage using the finite element method (SEEP/W) and slope stability using the limit equilibrium method (SLOPE/W) for three cases of reservoir level operation, including normal and flooded conditions. The results of the SEEP/W seepage analysis were used as the parent input for the SLOPE/W analysis. A sensitivity analysis on the hydraulic conductivity of the materials was performed, and the SEEP/W simulation was calibrated to minimize the relative error against observed field data, with comparisons between observed and predicted values also carried out. In the seepage analysis, the leakage flow rate, pore water pressure distribution, and location of the phreatic line are determined using SEEP/W. The seepage results show that the clay core effectively lowers the phreatic surface and that no piping failure occurs. The total seepage flux was acceptable and within the permissible limit.
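
A hedged sketch of the calibration idea follows: tune hydraulic conductivity k so that simulated seepage matches an observed value. The "simulator" here is a toy Darcy relation q = k i A with an assumed gradient, area, and target flux, not SEEP/W.

```python
from scipy.optimize import minimize_scalar

i_grad, area = 0.35, 120.0       # assumed hydraulic gradient and flow area (m^2)
q_observed = 2.6e-4              # assumed observed seepage flux (m^3/s)

def simulate(k):
    return k * i_grad * area     # Darcy stand-in for a SEEP/W run

def rel_error(k):
    return abs(simulate(k) - q_observed) / q_observed

res = minimize_scalar(rel_error, bounds=(1e-9, 1e-4), method="bounded")
print(f"calibrated k = {res.x:.3e} m/s, relative error = {res.fun:.2%}")
```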

Keywords: earth dam, dam safety, seepage, slope stability, pore water pressure

Procedia PDF Downloads 191
423 Microstructure Evolution and Modelling of Shear Forming

Authors: Karla D. Vazquez-Valdez, Bradley P. Wynne

Abstract:

In recent decades, manufacturing needs have been changing, leading to the study of manufacturing methods that were previously underdeveloped, such as incremental forming processes like shear forming. These processes use rotating tools in constant local contact with the workpiece, which is often also rotating, to generate shape. This means much lower loads to forge large parts and no need for expensive special tooling. Their potential has already been established by demonstrating the manufacture of high-value products, e.g., turbine and satellite parts, with high dimensional accuracy from difficult-to-manufacture materials. Thus, huge opportunities exist for these processes to replace current methods of manufacture for a range of high-value components, e.g., eliminating lengthy machining and reducing material waste and process times, or manufacturing a complicated shape without developing expensive tooling. However, little is known about the exact deformation conditions during processing and why certain materials are better suited than others for shear forming, leading to a great deal of trial and error before production. Three alloys were used for this study: Ti-54M, Jethete M154, and IN718. General microscopy and electron backscatter diffraction (EBSD) were used to measure strains and orientation maps during shear forming. A design of experiments (DOE) analysis was also made in order to understand the impact of process parameters on the properties of the final workpieces. Such information was key to developing a reliable finite element method (FEM) model that closely resembles the deformation paths of this process. Finally, the potential of these three materials to be shear spun was studied using the FEM model and their forming limit diagrams (FLD), which led to the development of a rough methodology for testing the shear spinnability of various metals.
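
The sketch below illustrates how strain states are checked against a forming limit curve; the V-shaped curve and the strain pairs are assumed for illustration only, since a real FLD would be measured for each alloy.

```python
import numpy as np

def flc_major_limit(eps_minor):
    # assumed V-shaped limit curve, minimum at plane strain (eps_minor = 0)
    return 0.30 + 0.6 * np.abs(eps_minor)

# (minor, major) principal strain pairs, e.g. sampled from an FEM model
strains = [(-0.10, 0.25), (0.00, 0.33), (0.08, 0.31)]

for e2, e1 in strains:
    status = "FAIL" if e1 > flc_major_limit(e2) else "safe"
    print(f"minor={e2:+.2f}, major={e1:+.2f} -> {status}")
```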

Keywords: shear forming, damage, principal strains, forming limit diagram

Procedia PDF Downloads 137
422 Numerical Study of Elastic Performances of Sandwich Beam with Carbon-Fibre Reinforced Skins

Authors: Soukaina Ounss, Hamid Mounir, Abdellatif El Marjani

Abstract:

Sandwich materials with composite-reinforced skins are widely required in advanced construction applications to ensure resistant structures. Their light weight, high flexural stiffness, and good thermal insulation make them a suitable solution for efficient structures with high rigidity and optimal energy safety. In this paper, the mechanical behavior of a sandwich beam with composite skins reinforced by unidirectional carbon fibers is investigated numerically by analyzing the impact of the reinforcement specifications on the longitudinal elastic modulus, in order to select an adequate sandwich configuration that combines high rigidity with close convergence to the analytical approach proposed to verify the numerical simulations. The study therefore starts by testing the flexural performance of skins with various fiber orientations and volume fractions to determine those to use in the sandwich beam. Two combinations are considered: a reinforcement inclination of 30° with a volume fraction of 60%, and a fiber orientation of 60° with a volume fraction of 40%; the latter guarantees the chosen skins high rigidity at a moderate fiber concentration and greatly improves convergence to the analytical results in the sandwich model, owing to the crucial role of the core as a transverse shear absorber. A resistant sandwich beam is then elaborated from face sheets consisting of two layers of the previous skins with fibers oriented at 60° and an epoxy core; this beam has a longitudinal elastic modulus of 54 GPa, which matches the analytical value to within a negligible error of 2%.
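
A hedged sketch of the analytical side is given below: rule-of-mixtures ply properties and the classical off-axis modulus transformation. The constituent properties are assumed generic carbon/epoxy values, not the paper's data.

```python
import numpy as np

Ef, Em = 230e9, 3.5e9             # fiber / matrix moduli (Pa), assumed
Gf, Gm = 90e9, 1.3e9              # shear moduli, assumed
nuf, num = 0.20, 0.35             # Poisson ratios, assumed
Vf = 0.60                         # fiber volume fraction

E1 = Ef * Vf + Em * (1 - Vf)                       # rule of mixtures
E2 = 1 / (Vf / Ef + (1 - Vf) / Em)                 # inverse rule of mixtures
G12 = 1 / (Vf / Gf + (1 - Vf) / Gm)
nu12 = nuf * Vf + num * (1 - Vf)

theta = np.radians(30.0)                           # fiber orientation
c, s = np.cos(theta), np.sin(theta)
# classical off-axis modulus: 1/Ex = c^4/E1 + s^4/E2 + (1/G12 - 2*nu12/E1)*s^2*c^2
Ex = 1 / (c**4 / E1 + s**4 / E2 + (1 / G12 - 2 * nu12 / E1) * s**2 * c**2)
print(f"E1 = {E1/1e9:.1f} GPa, off-axis Ex(30 deg) = {Ex/1e9:.1f} GPa")
```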

Keywords: fibers orientation, fibers volume ratio, longitudinal elastic modulus, sandwich beam

Procedia PDF Downloads 126
421 The Cost of Beauty: Insecurity and Profit

Authors: D. Cole, S. Mahootian, P. Medlock

Abstract:

This research contributes to existing knowledge of the complexities surrounding women's relationship to beauty standards by examining their lived experiences. While there is much academic work on the effects of culturally imposed and largely unattainable beauty standards, the arguments tend to fall into two paradigms. On the one hand is the radical feminist perspective, which argues that women are subjected to absolute oppression within the patriarchal system in which beauty standards have been constructed. This position advocates for a complete restructuring of social institutions to liberate women from all types of oppression. On the other hand, there are liberal feminist arguments that focus on choice, arguing that women's agency in how to present themselves is empowerment. These arguments center on what women do within the patriarchal system in order to liberate themselves. However, there is very little research on the lived experiences of women negotiating these two realms: the complex negotiation between the pressure to adhere to cultural beauty standards and the agency of self-expression and empowerment. By exploring beauty standards through the intersection of societal messages (including macro-level processes such as social media and advertising as well as smaller-scale interactions such as families and peers) and lived experiences, this study seeks to provide a nuanced understanding of how women navigate and negotiate their own presentation and sense of self-identity. Current research sees a rise in the incidence of body dysmorphia, depression, and anxiety since the advent of social media. Approximately 91% of women are unhappy with their bodies and resort to dieting to achieve their ideal body shape, but only 5% of women naturally possess the body type often portrayed in American movies and media. It is, therefore, crucial that we begin talking about the processes that are affecting self-image and mental health. A question that arises is: given these negative effects, why do companies continue to advertise and target women with standards that very few could possibly attain? One obvious answer is that keeping beauty standards largely unattainable enables the beauty and fashion industries to make large profits by promising products and procedures that will bring one up to "standard". The creation of dissatisfaction for some is profit for others. This research utilizes qualitative methods (interviews, questionnaires, and focus groups) to investigate women's relationships to beauty standards and empowerment. To this end, we reached out to potential participants through a video campaign on social media: short clips on Instagram, Facebook, and TikTok and a longer clip on YouTube inviting users to take part in the study. Participants are asked to react to images, videos, and other beauty-related texts. The findings of this research have implications for policy development, advocacy, and interventions aimed at promoting healthy inclusivity and the empowerment of women.

Keywords: women, beauty, consumerism, social media

Procedia PDF Downloads 20
420 Factors Influencing Pharmacist Engagement and Turnover Intention in Thai Community Pharmacists: A Structural Equation Modelling Approach

Authors: T. Nakpun, T. Kanjanarach, T. Kittisopee

Abstract:

Turnover of community pharmacists can affect the continuity of patient care, most importantly the quality of care, and also the costs of a pharmacy. It was hypothesized that organizational resources, job characteristics, and social supports have a direct effect on pharmacist turnover intention, and an indirect effect on turnover intention via pharmacist engagement. This research aimed to study the factors influencing pharmacist engagement and turnover intention by testing a hypothesized structural model explaining the relationships among organizational resources, job characteristics, and social supports, and their effects on pharmacist turnover intention and engagement in Thai community pharmacists. A cross-sectional study with a self-administered questionnaire was conducted on 209 Thai community pharmacists. Data were analyzed using the structural equation modeling technique with the Analysis of Moment Structures (AMOS) program. The final model showed that only organizational resources had a significant negative direct effect on pharmacist turnover intention (β = -0.45). Job characteristics and social supports had significant positive relationships with pharmacist engagement (β = 0.44 and 0.55, respectively). Pharmacist engagement had a significant negative relationship with turnover intention (β = -0.24). Thus, job characteristics and social supports had significant negative indirect effects on turnover intention via pharmacist engagement (β = -0.11 and -0.13, respectively). The model fit the data well (χ²/degrees of freedom (DF) = 2.12, goodness of fit index (GFI) = 0.89, comparative fit index (CFI) = 0.94, and root mean square error of approximation (RMSEA) = 0.07). It can be concluded that organizational resources were the most important factor because they had a direct effect on pharmacist turnover intention, while job characteristics and social supports also helped decrease turnover intention via pharmacist engagement.
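
A quick arithmetic check of the reported indirect effects: in a path model, the indirect effect is the product of the path coefficients along the route (here, predictor to engagement to turnover intention), using the betas from the abstract.

```python
beta_engagement = {"job characteristics": 0.44, "social supports": 0.55}
beta_turnover = -0.24    # engagement -> turnover intention

for factor, b in beta_engagement.items():
    print(f"{factor}: indirect effect = {b * beta_turnover:+.2f}")
# -> -0.11 and -0.13, matching the values reported above
```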

Keywords: community pharmacist, influencing factor, turnover intention, work engagement

Procedia PDF Downloads 165
419 Real-Time Classification of Political Tendency of Spanish Twitter Users Based on Sentiment Analysis

Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic

Abstract:

What people say on social media has turned into a rich source of information for understanding social behavior. Specifically, the growing use of the Twitter social medium for political communication has created great opportunities to know the opinions of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country, and it has led to an increasing body of research on this topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users' political tendency. To this end, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative, or neutral) and classifying them according to the political party they support. From this individual political tendency, the global political prediction for each political party is calculated. For this purpose, two different sentiment analysis strategies are proposed: one based on Positive and Negative word Matching (PNM) and a second based on a Neural Network Strategy (NNS). The complete TAS strategy has been run in a Big Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy in analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy for obtaining the global trend in a political context made up of multiple parties, with an error lower than 23%.
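
A minimal sketch of the word-matching (PNM) idea follows: score a tweet's sentiment by counting positive and negative lexicon hits. The tiny English lexicon and tweets are assumed for illustration; the paper's Spanish lexicon would be far larger.

```python
POSITIVE = {"great", "good", "support", "win", "hope"}
NEGATIVE = {"bad", "corrupt", "lose", "fail", "crisis"}

def pnm_sentiment(tweet: str) -> str:
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(pnm_sentiment("great debate real hope for change"))   # -> positive
print(pnm_sentiment("another corrupt deal we all lose"))    # -> negative
```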

Keywords: political tendency, prediction, sentiment analysis, Twitter

Procedia PDF Downloads 210
418 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data

Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao

Abstract:

Leaf area index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, a scale effect arises in multi-scale LAI estimation. In this article, a typical farmland in the semi-arid region of Inner Mongolia, China, is taken as the study area. Based on the combination of the PROSPECT and SAIL models, a multi-dimensional look-up table (LUT) is generated for the LAI estimation of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on the Taylor expansion method and a computational geometry model, a scale transfer model considering both inter- and intra-class differences is constructed for analyzing the scale effect of LAI inversion over an inhomogeneous surface. The results indicate that: (1) the LUT method based on classification and parameter sensitivity analysis is useful for LAI retrieval of corn, potato, sunflower, and melon on the typical farmland, with a correlation coefficient R² of 0.82 and a root mean square error (RMSE) of 0.43 m²/m²; (2) the scale effect on LAI becomes more obvious as image resolution decreases, with a maximum scale bias of more than 45%; (3) the inter-class scale effect is larger than the intra-class effect, and it can be corrected efficiently by the scale transfer model established from the Taylor expansion and computational geometry; after correction, the maximum scale bias is reduced to 1.2%.
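
A hedged sketch of LUT-based inversion is shown below: precompute candidate (LAI, spectrum) pairs with a canopy model, then pick the LAI whose simulated spectrum is closest (in RMSE) to the observed one. The "canopy model" is a toy stand-in for PROSPECT+SAIL, and all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 900, 50)            # nm, assumed band set

def toy_canopy_model(lai):
    # stand-in for PROSPECT+SAIL: NIR reflectance saturating with LAI
    return 0.05 + 0.45 * (1 - np.exp(-0.5 * lai)) * (wavelengths > 700)

lut_lai = np.linspace(0.1, 6.0, 200)               # LUT over plausible LAI
lut_spectra = np.array([toy_canopy_model(l) for l in lut_lai])

observed = toy_canopy_model(2.7) + rng.normal(0, 0.005, wavelengths.size)
rmse = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
print(f"retrieved LAI = {lut_lai[rmse.argmin()]:.2f} (truth 2.70)")
```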

Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing

Procedia PDF Downloads 419
417 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video

Authors: Nidhal Azawi

Abstract:

The colonoscopy procedure is widely used around the world to detect abnormalities, and early diagnosis can help to heal many patients. Because of the unavoidable artifacts that exist in colon images, doctors cannot assess the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos to provide better information for physicians by removing some artifacts. This work complements a series of three previously published papers. Here, optic flow is used for motion compensation, and consecutive images are then aligned/registered so that their information can be integrated into a new image that reveals more than the original one. Colon images were classified into informative and noninformative images using a deep neural network. Two different strategies were then used to treat informative and noninformative images: informative images were treated using Lucas-Kanade (LK) with an adaptive temporal mean/median filter, whereas noninformative images were treated using Lucas-Kanade with a derivative of Gaussian (LKDOG) and an adaptive temporal median filter. A comparison showed that this work achieved better results than the state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The proposed algorithm reduced the alignment error by about a factor of 0.3 with a 100% successful image alignment ratio. In conclusion, this algorithm achieved better results than the state-of-the-art approaches in enhancing informative images, as shown in the results section; it also succeeded in converting noninformative images (images with few or no details because of blurriness or defocus, or because specular highlights dominate a significant part of the image) into informative images.
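
A hedged sketch of the registration-plus-temporal-filter idea using OpenCV follows; dense Farneback flow is used here as an accessible stand-in for the paper's Lucas-Kanade variants, and the toy frames are a shifted gradient pattern, not colonoscopy video.

```python
import numpy as np
import cv2

def align(prev_gray, cur_gray):
    # flow from current to previous frame, so sampling the previous frame at
    # (grid + flow) brings it into the current frame's coordinates
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_gray, map_x, map_y, cv2.INTER_LINEAR)

base = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
frames = [np.roll(base, s, axis=1) for s in (0, 2, 4)]   # toy "video"

aligned = [align(f, frames[-1]) for f in frames[:-1]] + [frames[-1]]
denoised = np.median(np.stack(aligned), axis=0).astype(np.uint8)
print("temporal-median frame:", denoised.shape)
```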

Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter

Procedia PDF Downloads 89
416 Positron Emission Tomography Parameters as Predictors of Pathologic Response and Nodal Clearance in Patients with Stage IIIA NSCLC Receiving Trimodality Therapy

Authors: Andrea L. Arnett, Ann T. Packard, Yolanda I. Garces, Kenneth W. Merrell

Abstract:

Objective: Pathologic response following neoadjuvant chemoradiation (CRT) has been associated with improved overall survival (OS). Conflicting results have been reported regarding the pathologic predictive value of positron emission tomography (PET) response in patients with stage III lung cancer. The aim of this study was to evaluate the correlation between post-treatment PET response and pathologic response utilizing novel FDG-PET parameters. Methods: This retrospective study included patients with non-metastatic, stage IIIA (N2) NSCLC cancer treated with CRT followed by resection. All patients underwent PET prior to and after neoadjuvant CRT. Univariate analysis was utilized to assess correlations between PET response, nodal clearance, pCR, and near-complete pathologic response (defined as microscopic residual disease or less). Maximal standard uptake value (SUV), standard uptake ratio (SUR) [normalized independently to the liver (SUR-L) and blood pool (SUR-BP)], metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were measured pre- and post-chemoradiation. Results: A total of 44 patients were included for review. Median age was 61.9 years, and median follow-up was 2.6 years. Histologic subtypes included adenocarcinoma (72.2%) and squamous cell carcinoma (22.7%), and the majority of patients had T2 disease (59.1%). The rates of pCR and near-complete pathologic response within the primary lesion were 28.9% and 44.4%, respectively. The average reduction in SUVmₐₓ was 9.2 units (range: −1.9 to 32.8), and the majority of patients demonstrated some degree of favorable treatment response. SUR-BP and SUR-L showed mean reductions of 4.7 units (range: −0.1 to 17.3) and 3.5 units (range: −1.7 to 12.6), respectively. Variation in PET response was not significantly associated with histologic subtype, concurrent chemotherapy type, stage, or radiation dose. No significant correlation was found between pathologic response and absolute change in MTV or TLG. Reductions in SUVmₐₓ and SUR were associated with an increased rate of pathologic response (p ≤ 0.02). This correlation was not impacted by normalization of SUR to liver versus mediastinal blood pool. A threshold of >75% decrease in SUR-L correlated with near-complete response, with a sensitivity of 57.9% and specificity of 85.7%, as well as positive and negative predictive values of 78.6% and 69.2%, respectively (diagnostic odds ratio [DOR]: 5.6, p=0.02). A threshold of >50% decrease in SUR was also significantly associated with pathologic response (DOR: 12.9, p=0.2), but specificity was substantially lower when utilizing this threshold value. No significant association was found between nodal PET parameters and pathologic nodal clearance. Conclusions: Our results suggest that treatment response to neoadjuvant therapy as assessed on PET imaging can be a predictor of pathologic response when evaluated via SUV and SUR. SUR parameters were associated with higher diagnostic odds ratios, suggesting improved predictive utility compared to SUVmₐₓ. MTV and TLG did not prove to be significant predictors of pathologic response but may warrant further investigation in a larger cohort of patients.
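
As a hedged illustration of the threshold statistics quoted above, the sketch below computes sensitivity, specificity, predictive values, and a 2x2 diagnostic odds ratio from an assumed confusion table (the counts are invented so as to reproduce the reported rates; the paper's DOR of 5.6 may come from a different estimation procedure than the raw 2x2 definition used here).

```python
tp, fn, fp, tn = 11, 8, 3, 18                     # invented counts

sens = tp / (tp + fn)                             # ~57.9%
spec = tn / (tn + fp)                             # ~85.7%
ppv, npv = tp / (tp + fp), tn / (tn + fn)         # ~78.6%, ~69.2%
dor = (tp * tn) / (fn * fp)                       # raw 2x2 diagnostic odds ratio

print(f"sens={sens:.1%} spec={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%} DOR={dor:.1f}")
```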

Keywords: lung cancer, positron emission tomography (PET), standard uptake ratio (SUR), standard uptake value (SUV)

Procedia PDF Downloads 207
415 Determination of Gold in Microelectronics Waste Pieces

Authors: S. I. Usenko, V. N. Golubeva, I. A. Konopkina, I. V. Astakhova, O. V. Vakhnina, A. A. Korableva, A. A. Kalinina, K. B. Zhogova

Abstract:

Gold can be determined in natural objects and manufactured articles of different origins. The current status of research and the problems of determining high gold levels in alloys and manufactured articles are described in detail in the literature. No less important is the determination of this metal in minerals, process products, and waste pieces. The latter, as objects of chemical analysis for gold content, are the hardest to study, for two reasons: the high accuracy requirements on analysis results, and the differences in chemical and phase composition. As a rule, such objects are characterized by a complex, variable, and very often unknown matrix composition, which leads to unpredictable and uncontrolled effects on the accuracy and other analytical characteristics of an analysis technique. In this paper, methods for the determination of gold are described using flame atomic absorption spectrophotometry and a gravimetric analysis technique. The techniques are aimed at gold determination in a solution for gold etching (KJ+J2), in the technological mixture formed after cleaning stainless steel members of a vacuum-deposition installation with concentrated nitric and hydrochloric acids, as well as in gold-containing powder resulting from the reprocessing of liquid wastes. Optimal conditions were chosen for the preparation and analysis of liquid and solid waste specimens of complex and variable matrix composition. The boundaries of the relative resultant error were determined for the methods within the range of gold mass concentrations from 0.1 to 30 g/dm³ in liquid waste specimens and mass fractions from 3 to 80% in solid waste specimens.
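
A hedged sketch of the flame AAS quantification step follows: fit a linear calibration curve of absorbance against gold concentration from standards, then invert it for an unknown. The standard concentrations and absorbances are invented for illustration.

```python
import numpy as np

conc = np.array([0.0, 2.0, 5.0, 10.0, 20.0])            # Au standards, mg/dm^3
absorb = np.array([0.002, 0.051, 0.124, 0.247, 0.492])  # measured absorbance

slope, intercept = np.polyfit(conc, absorb, 1)          # Beer-Lambert linear range
a_unknown = 0.180
c_unknown = (a_unknown - intercept) / slope
print(f"unknown sample: ~{c_unknown:.2f} mg/dm^3 Au")
```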

Keywords: microelectronics waste pieces, gold, sample preparation, atomic-absorption spectrophotometry, gravimetric analysis technique

Procedia PDF Downloads 173
414 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model

Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li

Abstract:

Compared with terrestrial networks, the traffic of a spatial information network has both self-similarity and short-correlation characteristics. By studying traffic prediction methods, the resource utilization of a spatial information network can be improved, and such methods provide an important basis for its traffic planning. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long correlation and detail components with short correlation, and a time series hybrid prediction model based on wavelet decomposition is proposed to predict the spatial network traffic. First, the original traffic data are decomposed into approximate and detail components using a wavelet decomposition algorithm. According to the tailing or cut-off behavior of the autocorrelation and partial autocorrelation functions of each component, a corresponding model (AR/MA/ARMA) can be established directly for each detail component, while the approximate component can be modeled by an ARIMA model after smoothing. Finally, the prediction results of the multiple models are combined to obtain the prediction for the original data. The method not only considers the self-similarity of a spatial information network but also takes into account the short correlation caused by bursty network traffic. It is verified using the measured data of a certain backbone network released by the MAWI working group in 2018: compared with a typical time series model, the predictions of the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, making it more suitable for a spatial information network.
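
A hedged sketch of the hybrid scheme on toy data (not the MAWI traces) is given below: split a series into wavelet components, model each component separately, and sum the one-step forecasts. The ARIMA/ARMA orders are illustrative, not identified via ACF/PACF as in the paper.

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(512)
traffic = 10 + 0.01 * t + np.sin(2 * np.pi * t / 64) + rng.normal(0, 0.3, 512)

# time-domain component series: reconstruct with all but one coefficient band zeroed
coeffs = pywt.wavedec(traffic, "db4", level=3)
components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, "db4")[: len(traffic)])

# ARIMA for the approximation, ARMA/AR for the detail components
forecast = 0.0
for comp, order in zip(components, [(1, 1, 1), (2, 0, 1), (1, 0, 1), (1, 0, 0)]):
    fit = ARIMA(comp, order=order).fit()
    forecast += fit.forecast(1)[0]

print(f"one-step-ahead traffic forecast: {forecast:.2f}")
```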

Keywords: spatial information network, traffic prediction, wavelet decomposition, time series model

Procedia PDF Downloads 110
413 Symmetry Properties of Linear Algebraic Systems with Non-Canonical Scalar Multiplication

Authors: Krish Jhurani

Abstract:

This research paper presents an in-depth analysis of symmetry properties in linear algebraic systems under non-canonical scalar multiplication structures, specifically semirings and near-rings. The objective is to unveil the profound alterations that occur in traditional linear algebraic structures when conventional field multiplication is replaced with these non-canonical operations. In the methodology, we first establish the theoretical foundations of non-canonical scalar multiplication, followed by a meticulous investigation of the resulting symmetry properties, focusing on eigenvectors, eigenspaces, and invariant subspaces. The methodology combines rigorous mathematical proofs and derivations with illustrative examples that exhibit the discovered symmetry properties in tangible mathematical scenarios. The core findings uncover unique symmetry attributes: for linear algebraic systems with semiring scalar multiplication, we characterize the resulting eigenvectors and eigenvalues, while systems operating under near-ring scalar multiplication disclose unique invariant subspaces. These discoveries drastically broaden the traditional landscape of symmetry properties in linear algebraic systems. The potential practical implications of these findings span fields such as physics, coding theory, and cryptography: they could enhance error detection and correction codes, help devise more secure cryptographic algorithms, and even influence theoretical physics. This breadth of applicability accentuates the significance of the presented research. The paper thus contributes to the mathematical community by bringing forth perspectives on linear algebraic systems and their symmetry properties through the lens of non-canonical scalar multiplication, coupled with an exploration of practical applications.
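
One concrete, standard instance of a non-canonical scalar structure (chosen here for illustration; the paper's own examples are not reproduced) is the max-plus (tropical) semiring, where "addition" is max and "multiplication" is +. An eigenvector then satisfies max_j(A[i,j] + x[j]) = lambda + x[i]:

```python
import numpy as np

def maxplus_matvec(A, x):
    # (A otimes x)_i = max_j (A_ij + x_j)
    return np.max(A + x[None, :], axis=1)

A = np.array([[0.0, 3.0],
              [1.0, 2.0]])          # assumed example matrix
x = np.array([1.0, 0.0])            # candidate max-plus eigenvector

y = maxplus_matvec(A, x)
print("A(x)x =", y, " y - x =", y - x)   # constant shift 2 => eigenvalue 2
```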

Keywords: eigenspaces, eigenvectors, invariant subspaces, near-rings, non-canonical scalar multiplication, semirings, symmetry properties

Procedia PDF Downloads 78
412 Reducing the Imbalance Penalty through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey

Authors: Hayriye Anıl, Görkem Kar

Abstract:

In addition to being rich in renewable energy resources, Turkey is one of the countries that show promise in geothermal energy production, with high installed capacity, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand when the production forecasts given in the day-ahead market are inadequate. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed geothermal capacity, was estimated for the first one and two weeks of March; the imbalance penalties were then calculated from these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset, and a dataset created by extracting new features from it with feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimates on the feature-engineered dataset showed lower error rates than those on the basic dataset. It was concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, yielding an optimal and profitable account.
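
A hedged sketch of the modeling step is shown below: fit Support Vector Regression to lagged generation data and score it with mean absolute error. The data are synthetic; the plant's actual features (and the engineered features) are not public here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
t = np.arange(400)
gen = 50 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.0, 400)  # MWh, toy

lags = 24                         # predict from the previous 24 hours
X = np.column_stack([gen[i : i + len(gen) - lags] for i in range(lags)])
y = gen[lags:]

split = 300
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test MAE: {mean_absolute_error(y[split:], pred):.2f} MWh")
```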

Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting

Procedia PDF Downloads 77
411 Improving the Performance of Supersonic Nozzles at High Temperature: Minimum Length Nozzle Type

Authors: W. Hamaidia, T. Zebbiche

Abstract:

This paper presents the design of axisymmetric supersonic nozzles that accelerate the flow to a desired exit Mach number while having small weight and, at the same time, giving high thrust. The nozzle considered gives a parallel and uniform flow at the exit section. The nozzle is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line, while the subsonic portion is used to produce sonic flow at the throat. A nozzle of this kind, giving uniform and parallel flow at the exit section, is called a minimum length nozzle. The study is done at high temperature, below the dissociation threshold of the molecules, in order to improve aerodynamic performance. Our aim is to improve performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle's mass. The variation of the specific heats with temperature is considered. The design is made by the method of characteristics, and the finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the resulting nonlinear algebraic equations. The application is for air. All the obtained results depend on three parameters: the exit Mach number, the stagnation temperature, and the mesh chosen in the characteristics. A numerical simulation of the nozzle in the CFD code CFD-FASTRAN was performed to determine and confirm the necessary design parameters.
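
One building block of the method of characteristics can be illustrated with the Prandtl-Meyer function for a calorically perfect gas, together with the classic 2D minimum-length-nozzle result that the initial wall angle equals nu(Me)/2 (the paper treats the harder axisymmetric, calorically imperfect case):

```python
import numpy as np

def prandtl_meyer(mach, gamma=1.4):
    # nu(M) = sqrt((g+1)/(g-1)) * atan(sqrt((g-1)/(g+1)*(M^2-1))) - atan(sqrt(M^2-1))
    g = (gamma + 1) / (gamma - 1)
    return (np.sqrt(g) * np.arctan(np.sqrt((mach**2 - 1) / g))
            - np.arctan(np.sqrt(mach**2 - 1)))

me = 3.0                                        # desired exit Mach number
nu_e = np.degrees(prandtl_meyer(me))
print(f"nu(Me={me}) = {nu_e:.2f} deg, initial wall angle = {nu_e / 2:.2f} deg")
```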

Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, nozzle mass, specific heat at constant pressure, air, error

Procedia PDF Downloads 129
410 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform

Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier

Abstract:

Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has faded amplitude due to multipath effects, which make it hard for an occupying user to be detected. This work mitigates the missed-detection problem in the context of cognitive radio over a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. After initially applying conventional energy detection, the missed-detection probability is evaluated, and if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution; a mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in the missed-detection probability compared to detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improvement in detection probability in low signal-to-noise scenarios over principal component analysis-based energy detection.
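
A minimal sketch of the energy-detection baseline (the stage applied before and after the proposed equalization) follows; the tone, SNR, and threshold rule are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, snr_db, noise_var = 1024, -5, 1.0
signal = np.sqrt(10 ** (snr_db / 10)) * np.exp(1j * 2 * np.pi * 0.1 * np.arange(n))
received = signal + (rng.normal(0, np.sqrt(noise_var / 2), n)
                     + 1j * rng.normal(0, np.sqrt(noise_var / 2), n))

energy = np.mean(np.abs(received) ** 2)
threshold = noise_var * (1 + 3 / np.sqrt(n))   # rough CFAR-style threshold, assumed
print(f"energy={energy:.3f}, threshold={threshold:.3f},",
      "occupied" if energy > threshold else "vacant")
```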

Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing

Procedia PDF Downloads 175
409 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images

Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek

Abstract:

Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging: it shares common characteristics with other ultrasound modalities and additionally provides three orthogonal planes (axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in some applications, for example, the evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, along with an automatic approach to detect the ribs; rib detection is in fact considered one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or a set of images. Second, the normalized images are smoothed using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). Qualitative and quantitative evaluation of a total of 22 cases is performed. Over all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on rib borders are 15.1188 mm and 14.7184 mm, respectively.
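
A hedged sketch of the Hessian-eigenvalue step follows: at a smoothing scale, sheet-like structures (such as rib surfaces) show one large-magnitude Hessian eigenvalue and two small ones. The volume is synthetic, and the simplified "sheetness" measure is assumed for illustration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
vol = rng.normal(0, 0.05, (40, 40, 40))
vol[:, 18:21, :] += 1.0                      # bright "sheet" standing in for a rib
vol = ndimage.gaussian_filter(vol, sigma=2)  # smooth to the analysis scale

grads = np.gradient(vol)
hessian = np.array([[np.gradient(g, axis=j) for j in range(3)] for g in grads])
H = hessian.transpose(2, 3, 4, 0, 1)         # per-voxel 3x3 matrices
eigvals = np.linalg.eigvalsh(H)              # ascending eigenvalues per voxel

# simplified plate measure: |smallest eigenvalue| large, next one small
sheetness = np.abs(eigvals[..., 0]) - np.abs(eigvals[..., 1])
peak = np.unravel_index(sheetness.argmax(), sheetness.shape)
print("strongest sheet response at voxel", peak)   # y index should be near 19
```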

Keywords: Automated 3D Breast Ultrasound, Eigenvalues of Hessian matrix, Nipple detection, Rib detection

Procedia PDF Downloads 301
408 An Investigation of Item Bias in Free Boarding and Scholarship Examination in Turkey

Authors: Yeşim Özer Özkan, Fatma Büşra Fincan

Abstract:

Item bias arises when the observation, the design process, or the test specifications lead to a tendency to favor one side or to a departure from objectivity. Test items are expected to function in the same way for students of the same ability who come from different social groups, and the importance of this expectation increases especially in student selection and placement examinations. For example, no test item should benefit only a male or only a female group. The aim of this research is to investigate whether the 2014 free boarding and scholarship examination contained item bias with respect to the gender variable. Data belonging to 5th-, 6th-, and 7th-grade secondary education students were obtained from the General Directorate of Measurement, Evaluation and Examination Services in Turkey. A random sample of 20% (38,418 of 192,090 students) was selected, and these students' exam papers were examined to determine item bias. The Winsteps 3.8.1 package was used to analyze bias in the data according to the Rasch model with respect to the gender variable. The mathematics items were examined for gender bias. First, confirmatory factor analysis was applied to twenty-five mathematics questions; NFI, TLI, CFI, IFI, RFI, GFI, RMSEA, and SRMR were then examined to establish validity and goodness of fit. Modification index values from the confirmatory factor analysis were inspected, and some items were omitted because they were problematic in terms of model fit and conceptually. The analysis shows that the 2014 free boarding and scholarship examination does not include bias; that is, the examination does not favor or disadvantage different groups of students on the basis of gender.
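
A hedged sketch of the Rasch-model logic behind such item-bias (DIF) checks follows: the probability of a correct response is P = 1 / (1 + exp(-(theta - b))), and an item is suspect when its difficulty b, estimated separately per group, differs markedly. The numbers and the screening threshold are illustrative, not the study's.

```python
import numpy as np

def rasch_p(theta, b):
    # Rasch model: P(correct | ability theta, item difficulty b)
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# item difficulty estimated separately for two groups (assumed values)
b_group1, b_group2 = 0.42, 0.48
contrast = b_group1 - b_group2   # common screening rules flag |contrast| above ~0.5 logits

theta = 0.0                      # an average-ability student
print(f"P(correct) group1 = {rasch_p(theta, b_group1):.3f}, "
      f"group2 = {rasch_p(theta, b_group2):.3f}, DIF contrast = {contrast:+.2f}")
```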

Keywords: gender, item bias, placement test, Rasch model

Procedia PDF Downloads 208
407 Inverse Prediction of Thermal Parameters of an Annular Hyperbolic Fin Subjected to Thermal Stresses

Authors: Ashis Mallick, Rajeev Ranjan

Abstract:

A closed-form solution for the thermal stresses in an annular fin with a hyperbolic profile is derived using the Adomian decomposition method (ADM). A conductive-convective fin with variable thermal conductivity is considered in the analysis. The nonlinear heat transfer equation is efficiently solved by ADM with an insulated convective boundary condition at the fin tip. The constant of integration in the solution is estimated using the minimum decomposition error method. The temperature field solution is represented in polynomial form for convenient use in the thermo-elasticity equation. The non-dimensional thermal stress fields are obtained using the ADM solution of the temperature field coupled with the thermo-elasticity solution. The influence of the various thermal parameters on the temperature and stress fields is presented. To show the accuracy of the ADM solution, the present results are compared with results available in the literature, and the stress fields in the fin with a hyperbolic profile are compared with those of a uniform-thickness profile. The results show that a hyperbolic fin profile is the better choice for enhancing heat transfer; moreover, lower thermal stresses develop in the hyperbolic profile than in the rectangular profile. Next, a Nelder-Mead simplex search method is employed for the inverse estimation of unknown non-dimensional thermal parameters from a given stress field. Owing to the correlated nature of the unknowns, the best combinations of the model parameters satisfying the predefined stress field are estimated. The stress fields calculated using the inverse parameters agree very well with the stress fields obtained from the forward solution, and the estimated parameters are suitable for efficient and cost-effective fin design.
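
A hedged sketch of the inverse step follows: recover model parameters from a "measured" field by Nelder-Mead search on the misfit. The forward model below is a toy polynomial profile standing in for the ADM thermo-elastic solution.

```python
import numpy as np
from scipy.optimize import minimize

xi = np.linspace(0, 1, 50)                      # non-dimensional radial coordinate

def forward(params):
    a, b = params                               # stand-in thermal parameters
    return a * (1 - xi**2) + b * (1 - xi) ** 2  # toy "stress field"

measured = forward((0.8, 0.3))                  # synthetic target field

def misfit(params):
    return np.sum((forward(params) - measured) ** 2)

res = minimize(misfit, x0=[0.1, 0.1], method="Nelder-Mead")
print("recovered parameters:", np.round(res.x, 4))   # ~ (0.8, 0.3)
```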

Keywords: Adomian decomposition, inverse analysis, hyperbolic fin, variable thermal conductivity

Procedia PDF Downloads 299
406 Emerging Issues for Global Impact of Foreign Institutional Investors (FII) on Indian Economy

Authors: Kamlesh Shashikant Dave

Abstract:

The global financial crisis is rooted in the sub-prime crisis in the U.S.A. During the boom years, mortgage brokers, attracted by big commissions, encouraged buyers with poor credit to accept housing mortgages with little or no down payment and without credit checks. A combination of low interest rates and a large inflow of foreign funds during the boom years helped banks create easy credit conditions for many years. Banks lent money on the assumption that housing prices would continue to rise, and the real estate bubble encouraged the demand for houses as financial assets. Banks and financial institutions later repackaged these debts with other high-risk debts and sold them to worldwide investors, creating financial instruments called collateralized debt obligations (CDOs). With the rise in interest rates, mortgage payments rose, and defaults among the subprime category of borrowers increased accordingly. Through the securitization of mortgage payments, a recession developed in the housing sector and was consequently transmitted to the entire US economy and the rest of the world. The financial credit crisis moved the US and the global economy into recession. The Indian economy has also been affected by the spillover effects of the global financial crisis. A strong saving habit among the people, strong fundamentals, and a conservative regulatory regime have saved the Indian economy from going out of gear, though significant parts of the economy have slowed down. Industrial activity, particularly in the manufacturing and infrastructure sectors, decelerated, and the service sector slowed in the construction, transport, trade, communication, and hotels and restaurants sub-sectors. The financial crisis has had some adverse impact on the IT sector. Exports declined in absolute terms in October. Higher input costs and dampened demand have dented corporate margins, while the uncertainty surrounding the crisis has affected business confidence. To summarize, reckless subprime lending, the loose monetary policy of the US, the expansion of financial derivatives beyond acceptable norms, and the greed of Wall Street have led to this exceptional global financial and economic crisis. The global credit crisis of 2008 thus highlights the need to redesign both the global and domestic financial regulatory systems, not only to properly address systemic risk but also to support their proper functioning (i.e., financial stability). Such a design requires: 1) well-managed financial institutions with effective corporate governance and risk management systems; 2) disclosure requirements sufficient to support market discipline; 3) proper mechanisms for resolving problem institutions; and 4) mechanisms to protect financial services consumers in the event of financial institution failure.

Keywords: FIIs, BSE, sensex, global impact

Procedia PDF Downloads 421
405 Application of Remote Sensing and In-Situ Measurements for Discharge Monitoring in Large Rivers: Case of Pool Malebo in the Congo River Basin

Authors: Kechnit Djamel, Ammarri Abdelhadi, Raphael Tshimang, Mark Trrig

Abstract:

One of the most important aspects of monitoring rivers is navigation. Variation in river discharge generally produces a change in the draft available to a vessel, particularly in the low-flow season, and can affect the navigable waterway, especially when the water depth is less than the normal depth that allows safe navigation for boats. The water depth is related to the bathymetry of the channel as well as to the discharge. For a seasonal update of the navigation maps, a daily discharge value is required. Many novel approaches based on earth observation and remote sensing have been investigated for large rivers; however, most of these approaches are not currently able to estimate river discharge directly. This paper discusses the application of remote sensing tools, analyzing the reflectance values of MODIS imagery combined with field measurements, to estimate discharge. The approach is applied in the lower reach of the Congo River (Pool Malebo) for the period between 2019 and 2021. The correlation obtained between the discharge observed at the gauging station and the reflectance ratio time series is 0.81. In this context, a Discharge Reflectance Model (DRM) was developed to express discharge as a function of reflectance; the model introduces a non-contact method that allows discharge monitoring using earth observation. The DRM was validated against field measurements made with an ADCP in different sections of the Pool Malebo over two different periods (dry and wet seasons), as well as against the discharge observed at the gauging station. The error between estimated and measured discharge values ranges from 1% to 8% for the ADCP and from 1% to 11% for the gauging station. The study of the uncertainties will make it possible to judge the robustness of the DRM.
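
A hedged sketch of a reflectance-based rating model follows: regress gauged discharge on a MODIS band-reflectance ratio. A power law is assumed here for illustration (the paper does not publish its DRM functional form), and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
ratio = rng.uniform(0.8, 2.0, 60)                              # reflectance ratios
discharge = 30000 * ratio ** 1.5 * rng.lognormal(0, 0.05, 60)  # m^3/s, synthetic

# fit log Q = log a + b log R by least squares
b, log_a = np.polyfit(np.log(ratio), np.log(discharge), 1)
predicted = np.exp(log_a) * ratio ** b

rel_err = np.abs(predicted - discharge) / discharge
print(f"b = {b:.2f}, median relative error = {np.median(rel_err):.1%}")
```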

Keywords: discharge monitoring, navigation, MODIS, empiric, ADCP, Congo River

Procedia PDF Downloads 57
404 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System

Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu

Abstract:

In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, cascading different impairment-compensation algorithms increases transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models optimize the constellation-mapped signals at the transmitter and compensate multiple impairments of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for 16-QAM coherent optical OFDM signals and demonstrate and analyze transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a suitable loss function and network structure can optimize the transmitted signal, learn the channel features, and effectively compensate for multiple impairments in fiber transmission.
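
The abstract does not disclose the network architecture or loss function. A minimal sketch of the receiver-side idea, assuming a small fully connected network trained with MSE loss to map distorted 16-QAM symbols back toward the ideal constellation; the synthetic phase-rotation-plus-noise channel below is a stand-in for the real fiber impairments, and all sizes and hyperparameters are illustrative assumptions:

```python
# Minimal sketch of a DNN symbol equalizer for a 16-QAM CO-OFDM receiver.
# Assumptions (not from the abstract): network width/depth, MSE loss, and
# the synthetic channel used here are illustrative placeholders.
import torch
import torch.nn as nn

# 16-QAM constellation: each symbol is an (I, Q) pair.
levels = torch.tensor([-3., -1., 1., 3.])
const = torch.cartesian_prod(levels, levels)  # shape (16, 2)

# Synthetic "channel": phase rotation + AWGN standing in for the real
# linear/non-linear fiber impairments the paper compensates.
def channel(tx, phase=0.2, sigma=0.15):
    c, s = torch.cos(torch.tensor(phase)), torch.sin(torch.tensor(phase))
    rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    return tx @ rot.T + sigma * torch.randn_like(tx)

idx = torch.randint(0, 16, (4096,))
tx = const[idx]
rx = channel(tx)

# Small MLP mapping distorted (I, Q) back toward the ideal constellation.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(rx), tx)
    loss.backward()
    opt.step()

# Hard-decision symbol error rate before and after compensation.
def ser(points):
    dec = torch.cdist(points, const).argmin(dim=1)
    return (dec != idx).float().mean().item()

print(f"SER raw: {ser(rx):.3f}  SER equalized: {ser(net(rx).detach()):.3f}")
```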

Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission

Procedia PDF Downloads 107
403 Monitoring Air Pollution Effects on Children for Supporting Public Health Policy: Preliminary Results of MAPEC_LIFE Project

Authors: Elisabetta Ceretti, Silvia Bonizzoni, Alberto Bonetti, Milena Villarini, Marco Verani, Maria Antonella De Donno, Sara Bonetta, Umberto Gelatti

Abstract:

Introduction: Air pollution is a global problem. In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and particulate matter as carcinogenic to humans. Studying the health effects of air pollution in children is very important because they are a high-risk group, and early exposure during childhood can increase the risk of developing chronic diseases in adulthood. MAPEC_LIFE (Monitoring Air Pollution Effects on Children for supporting public health policy) is a project funded by the EU Life+ Programme that intends to evaluate the associations between air pollution and early biological effects in children and to propose a model for estimating the global risk of early biological effects due to air pollutants and other factors in children. Methods: The study was carried out on 6-8-year-old children living in five Italian towns in two different seasons. Two biomarkers of early biological effects, primary DNA damage detected with the comet assay and the frequency of micronuclei, were investigated in the buccal cells of the children. Details of children's diseases, socio-economic status, exposure to other pollutants, and lifestyle were collected using a questionnaire administered to the children's parents. Child exposure to urban air pollution was assessed by analysing PM0.5 samples collected in the school areas for PAH and nitro-PAH concentrations, lung toxicity, and in vitro genotoxicity on bacterial and human cells. Data on the chemical features of the urban air during the study period were obtained from the Regional Agency for Environmental Protection. The project also created the opportunity to approach the issue of air pollution with the children, trying to raise their awareness of air quality, its health effects, and some healthy behaviours by means of an educational intervention in the schools. Results: 1315 children were recruited for the study and participated in the first sampling campaign in the five towns. The second campaign, on the same children, is still ongoing. The preliminary results of the tests on the buccal mucosa cells of the children will be presented during the conference, as well as the preliminary data on the chemical composition and the toxicity and genotoxicity features of the PM0.5 samples. The educational package was tested on 250 primary school children and proved very useful, improving the children's knowledge about air pollution and its effects and stimulating their interest. Conclusions: The associations between levels of air pollutants, air mutagenicity, and biomarkers of early effects will be investigated. A tentative model to calculate the global absolute risk of early biological effects from air pollution and other variables combined will be proposed and may be useful to support policy-making and community interventions to protect children from the possible health effects of air pollutants.

Keywords: air pollution exposure, biomarkers of early effects, children, public health policy

Procedia PDF Downloads 303
402 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane

Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo

Abstract:

Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an H₂/CO ratio of 1:1 at the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the outlier-free dataset to predict the DRM results. DNN models inherently cannot obtain accurate predictions without a large dataset. To cope with this limitation, we reuse pre-trained layers through approaches such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
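
A minimal sketch of the pipeline the abstract describes: DBSCAN outlier removal followed by greedy layer-wise pretraining, in which each hidden layer is first trained as part of a small autoencoder and the pretrained encoders are then stacked under a regression head and fine-tuned. Layer sizes, epochs, DBSCAN parameters, and the random placeholder data are illustrative assumptions, not values from the study:

```python
# Minimal sketch of DBSCAN cleaning + greedy layer-wise pretraining for a
# regression DNN. Assumptions (not from the abstract): layer sizes, epochs,
# DBSCAN parameters, and the random placeholder data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6)).astype("float32")    # e.g. operating conditions
y = rng.normal(size=(500, 1)).astype("float32")    # e.g. H2/CO ratio

# Step 1: drop DBSCAN noise points (label -1) as outliers.
mask = DBSCAN(eps=1.5, min_samples=5).fit_predict(X) != -1
X, y = X[mask], y[mask]

# Step 2: greedily pretrain each hidden layer as an autoencoder encoder.
hidden_sizes = [32, 16]
pretrained = []
rep = X
for units in hidden_sizes:
    enc = layers.Dense(units, activation="relu")
    inp = layers.Input(shape=(rep.shape[1],))
    dec = layers.Dense(rep.shape[1])(enc(inp))     # reconstruct the input
    ae = models.Model(inp, dec)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(rep, rep, epochs=20, verbose=0)
    pretrained.append(enc)                         # keep the trained encoder
    rep = enc(rep).numpy()                         # feed output to next layer

# Step 3: stack the pretrained encoders, add a head, and fine-tune end-to-end.
inp = layers.Input(shape=(X.shape[1],))
h = inp
for enc in pretrained:
    h = enc(h)
out = layers.Dense(1)(h)
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)
print("fine-tuned MSE:", model.evaluate(X, y, verbose=0))
```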

Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining

Procedia PDF Downloads 58
401 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of the acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan, was taken as a test case in order to assess its quality on a statistical basis by using the normalized root mean square error (NRMSE), Cronbach's alpha test (α), and null-hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be flat, tectonically little affected, and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed a high degree of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of the hypothesis tests likewise indicated that the acquired database was biased and erratic. A low estimated value of alpha (α) in Cronbach's alpha test confirmed the poor reliability of the acquired database. A database of such low quality needs extensive static correction or, in some cases, reacquisition of the data, which is usually not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models to support much more informed decisions in hydrocarbon exploration.
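
A minimal sketch of the statistical quality checks the abstract names, computed with standard formulas; the placeholder "observed" and "predicted" amplitudes and the choice of normalizing the RMSE by the observed range are illustrative assumptions:

```python
# Minimal sketch of the quality metrics named in the abstract: NRMSE,
# Cronbach's alpha, and the two null-hypothesis tests (t-test and F-test).
# Assumptions (not from the abstract): the random placeholder amplitudes
# and normalization of the RMSE by the observed range.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
observed = rng.normal(0.0, 1.0, size=200)               # e.g. picked values
predicted = observed + rng.normal(0.0, 0.3, size=200)   # e.g. modeled values

# NRMSE: RMSE normalized by the range of the observations.
rmse = np.sqrt(np.mean((observed - predicted) ** 2))
nrmse = rmse / (observed.max() - observed.min())

# Cronbach's alpha over k repeated "items" (e.g. traces of one reflector):
# alpha = k/(k-1) * (1 - sum of item variances / variance of the item sum).
items = np.stack([observed + rng.normal(0, 0.3, 200) for _ in range(5)])
k = items.shape[0]
alpha = k / (k - 1) * (1 - items.var(axis=1, ddof=1).sum()
                       / items.sum(axis=0).var(ddof=1))

# Two-sample t-test (equal means) and F-test (equal variances).
t_stat, t_p = stats.ttest_ind(observed, predicted)
f_stat = observed.var(ddof=1) / predicted.var(ddof=1)
f_p = 2 * min(stats.f.cdf(f_stat, 199, 199),
              stats.f.sf(f_stat, 199, 199))

print(f"NRMSE={nrmse:.3f}, alpha={alpha:.3f}, "
      f"t p-value={t_p:.3f}, F p-value={f_p:.3f}")
```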

Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey

Procedia PDF Downloads 114