Search results for: precision feed

750 Prediction of Oil Recovery Factor Using Artificial Neural Network

Authors: O. P. Oladipo, O. A. Falode

Abstract:

The determination of the recovery factor is of great importance to the reservoir engineer, since it relates reserves to the initial oil in place. Reserves are the producible portion of reservoirs and give an indication of the profitability of a field development. The core objective of this project is to develop an artificial neural network model using selected reservoir data to predict the recovery factors (RF) of hydrocarbon reservoirs and to compare the model with a couple of the existing correlations. The artificial neural network model developed was a single-layer feed-forward network. MATLAB was used as the network simulator, and the network was trained using the supervised learning method. Afterwards, the network was tested with input data never seen by the network. The recovery factors predicted by the artificial neural network model, the API correlation for water-drive reservoirs (sands and sandstones) and the Guthrie and Greenberger correlation equation were obtained and compared. The coefficient of correlation of the artificial neural network model was higher than those of the other two correlation equations, making it a more accurate prediction tool. Because of its accurate prediction ability, the artificial neural network is helpful in the correct prediction of hydrocarbon reservoir factors. Artificial neural networks could also be applied to the prediction of other petroleum engineering parameters, because they are able to recognise complex patterns in data sets and establish relationships between them.
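
As a rough illustration of the workflow described above, the following is a minimal Python sketch of a single-layer feed-forward regressor; scikit-learn stands in for the authors' MATLAB simulator, and the reservoir features and data are invented for illustration only:

```python
# Minimal sketch of a single-layer feed-forward regressor (scikit-learn
# in place of the paper's MATLAB simulator; features and data are invented)
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical reservoir inputs, e.g. porosity, permeability, water saturation, API gravity
X = rng.random((200, 4))
rf = 0.10 + 0.50 * X[:, 0] + 0.20 * X[:, 1] + rng.normal(0, 0.02, 200)  # synthetic recovery factor

X_tr, X_te, y_tr, y_te = train_test_split(X, rf, random_state=0)
scaler = StandardScaler().fit(X_tr)

# One hidden layer, trained by supervised learning as described above
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(scaler.transform(X_tr), y_tr)

# Test with input data never seen by the network
print("R^2 on unseen data:", net.score(scaler.transform(X_te), y_te))
```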

Keywords: recovery factor, reservoir, reserves, artificial neural network, hydrocarbon, MATLAB, API, Guthrie, Greenberger

Procedia PDF Downloads 420
749 Thermoluminescent Response of Nanocrystalline BaSO4:Eu to 85 MeV Carbon Beams

Authors: Shaila Bahl, S. P. Lochab, Pratik Kumar

Abstract:

Nanotechnology and nanomaterials have attracted researchers from different fields, especially the field of luminescence. Recent studies on various luminescent nanomaterials have shown their relevance to the dosimetry of ionizing radiation for the measurement of high doses using the thermoluminescence (TL) technique, where conventional microcrystalline phosphors saturate. Ion beams have been used for diagnostic and therapeutic purposes due to their favorable dose-deposition profile at the end of the range, known as the Bragg peak. When dealing with human beings, doses from these beams need to be measured with great precision and accuracy. Hence, detailed investigations of suitable thermoluminescent dosimeters (TLDs) for dose verification in ion-beam irradiation are required. This paper investigates the TL response of nanocrystalline BaSO4 doped with Eu to an 85 MeV carbon beam. The synthesis was done using the co-precipitation technique by mixing barium chloride and ammonium sulphate solutions. To investigate the crystallinity and particle size, analytical techniques such as X-ray diffraction (XRD) and transmission electron microscopy (TEM) were used, which revealed an average particle size of 45 nm and an orthorhombic structure. Samples in pellet form were irradiated by the 85 MeV carbon beam in the fluence range of 1×10¹⁰ to 5×10¹³. TL glow curves of the irradiated samples show two prominent glow peaks at around 460 K and 495 K. The TL response is linear up to a fluence of 1×10¹³, after which saturation was observed. The wide linear TL response of nanocrystalline BaSO4:Eu and its low fading make it a superior candidate dosimeter for measuring carbon-beam doses.

Keywords: radiation, dosimetry, carbon ions, thermoluminescence

Procedia PDF Downloads 276
748 Light-Weight Network for Real-Time Pose Estimation

Authors: Jianghao Hu, Hongyu Wang

Abstract:

An effective and efficient human pose estimation algorithm is important for real-time human pose estimation on mobile devices. This paper proposes a light-weight human key-point detection algorithm, the Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency. It uses a feature pyramid network (FPN) to fuse the high-resolution, semantically weak features with the low-resolution, semantically strong features. Meanwhile, with multi-scale prediction, the result predicted from each lower-resolution feature map is stacked onto the adjacent higher-resolution feature map, providing intermediate supervision of the network and continuously refining the results. In the last step, the key-point coordinates predicted at the highest resolution are used as the final output of the network. For key points that are hard to predict, LWPE adopts an online hard key-point mining strategy to focus training on them. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. The algorithm maintains high-precision performance even though the model contains only 3.9M parameters, and it can run at 225 frames per second (FPS) on a generic graphics processing unit (GPU).
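
The parameter savings from depthwise separable convolutions can be seen in a short sketch (PyTorch; the channel sizes are illustrative, not the actual LWPE configuration):

```python
# Sketch of a depthwise separable convolution block as used in light-weight
# backbones (layer sizes are illustrative, not the LWPE configuration)
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes the channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 conv from 64 to 128 channels needs 64*128*9 = 73,728 weights;
# the separable version needs 64*9 + 64*128 = 8,768 conv weights (plus batch-norm
# parameters), roughly 8x fewer.
block = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in block.parameters() if p.requires_grad))
```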

Keywords: depthwise separable convolutions, feature pyramid network, human pose estimation, light-weight backbone

Procedia PDF Downloads 140
747 Determination of MDA by HPLC in Blood of Levofloxacin Treated Rats

Authors: D. S. Mohale, A. P. Dewani, A. S. Tripathi, A. V. Chandewar

Abstract:

The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV-Vis detection for the quantification of malondialdehyde, as the malondialdehyde-thiobarbituric acid complex (MDA-TBA), in vivo in rats. The HPLC separation of MDA-TBA was achieved in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min⁻¹, followed by detection at 532 nm. The chromatographic conditions were optimized by varying the concentration and pH of the aqueous phase, followed by changes in the percentage of the organic phase. The optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile in the ratio 80:20 v/v. The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, calculated from the standard deviation of the response and the slope of the calibration curve, were 110 ng/ml and 363 ng/ml, respectively. Calibration studies were done by spiking MDA into rat plasma at concentrations ranging from 500 to 1000 ng/ml. The precision of the developed method, measured in terms of relative standard deviation for intra-day and inter-day studies, was 1.6–5.0% and 1.9–3.6%, respectively. The HPLC method was applied to monitoring MDA levels in rats subjected to chronic treatment with levofloxacin (LEV) (5 mg/kg/day) for 21 days, and the results were compared with findings in control-group rats. The mean peak areas of both study groups were subjected to an unpaired Student's t-test. The p-value was <0.001, indicating significant results and suggesting increased MDA levels in rats subjected to 21 days of chronic LEV treatment.
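
The quoted LOD and LOQ are consistent with the standard ICH formulas based on the standard deviation of the response and the slope of the calibration curve:

```latex
% ICH formulas: \sigma = standard deviation of the response,
% S = slope of the calibration curve
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S}
```

The ratio of the two formulas, 10/3.3 ≈ 3, matches the reported 363 ng/ml to 110 ng/ml.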

Keywords: malondialdehyde-thiobarbituric acid complex, levofloxacin, HPLC, oxidative stress

Procedia PDF Downloads 321
746 Using Predictive Analytics to Identify First-Year Engineering Students at Risk of Failing

Authors: Beng Yew Low, Cher Liang Cha, Cheng Yong Teoh

Abstract:

Due to a lack of continual assessment or grade-related data, identifying first-year engineering students in a polytechnic education at risk of failing is challenging. Our experience over the years tells us that there is no strong correlation between having good entry grades in Mathematics and the Sciences and excelling in hardcore engineering subjects. Hence, identifying students at risk of failure cannot be done on the basis of entry grades in Mathematics and the Sciences alone. These factors compound the difficulty of early identification and intervention. This paper describes the development of a predictive analytics model for the early detection of students at risk of failing and evaluates its effectiveness. Data from continual assessments conducted in term one, supplemented by data on student psychological profiles such as interests and study habits, were used. Three classification techniques, namely Logistic Regression, K Nearest Neighbour, and Random Forest, were used in our predictive model. Based on our findings, Random Forest was the strongest predictor, with an Area Under the Curve (AUC) value of 0.994. Correspondingly, its Accuracy, Precision, Recall, and F-Score were also the highest among the three classifiers. Using the Random Forest classification technique, students at risk of failure could be identified at the end of term one and assigned to a Learning Support Programme at the beginning of term two. This paper presents the results of our findings and proposes further improvements that can be made to the model.
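
A minimal sketch of the three-classifier comparison (Python/scikit-learn on synthetic stand-in data; the real features were term-one continual assessments and psychological-profile scores):

```python
# Sketch comparing the paper's three classifiers by cross-validated AUC
# (synthetic stand-in data; the real inputs were term-one assessments
# plus psychological-profile features)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for ~600 students with 10 features; ~15% at risk of failing
X, y = make_classification(n_samples=600, n_features=10, weights=[0.85], random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K Nearest Neighbour": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```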

Keywords: continual assessment, predictive analytics, random forest, student psychological profile

Procedia PDF Downloads 117
745 Soil Macronutrients Sensing for Precision Agriculture Purpose Using Fourier Transform Infrared Spectroscopy

Authors: Hossein Navid, Maryam Adeli Khadem, Shahin Oustan, Mahmoud Zareie

Abstract:

Among the nutrients needed by plants, three, namely nitrate, phosphorus and potassium, are the most important. The objective of this research was to measure the amounts of these nutrients in soil using Fourier transform infrared spectroscopy in the range of 400-4000 cm⁻¹. Soil samples of different soil types (sandy, clay and loam) were collected from different areas of East Azerbaijan. Three fertilizers common in conventional farming (urea, triple superphosphate, potassium sulphate) were used for soil treatment. Each specimen was divided into two categories. The first group was measured directly in the laboratory, where nitrate, phosphorus and potassium were extracted and determined by the colorimetric, Olsen and ammonium acetate methods. The second group was used for the spectrometric measurements. For spectrometry, small amounts of the soil samples were mixed with KBr and pressed into pellet form; the pellets were placed in the center of the infrared spectrometer and spectra were obtained. Analysis of data was done using MINITAB and partial least squares regression (PLSR). The data obtained from the spectrometric method were compared with the amounts of soil nutrients obtained from the direct laboratory measurements using EXCEL software, and there was a good fit between the two data series: for nitrate, phosphorus and potassium, R² was 79.5%, 92.0% and 81.9%, respectively. The results also showed that the MIR (mid-infrared) range is appropriate for determining the amounts of soil nitrate and potassium and can be used in future research to obtain detailed maps of land in agricultural use.
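
A sketch of the PLSR calibration step (Python/scikit-learn; the spectra and laboratory values below are synthetic stand-ins):

```python
# Sketch of the PLSR calibration between spectra and lab nutrient values
# (synthetic stand-ins; real spectra would span 400-4000 cm^-1)
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
spectra = rng.random((90, 1800))              # 90 soil pellets x wavenumber channels
lab_k = spectra[:, 400:410].mean(axis=1) * 50 + rng.normal(0, 0.5, 90)  # synthetic potassium values

X_tr, X_te, y_tr, y_te = train_test_split(spectra, lab_k, random_state=1)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
print("R2:", r2_score(y_te, pls.predict(X_te).ravel()))  # paper reports 79.5-92.0% per nutrient
```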

Keywords: nitrate, phosphorus, potassium, soil nutrients, spectroscopy

Procedia PDF Downloads 383
744 Requirements Management in Agile

Authors: Ravneet Kaur

Abstract:

The concept of Agile requirements engineering and management is not new. However, figuring out how a traditional requirements management process fits within an Agile framework remains complex. This paper describes a process that merges an organization's traditional requirements management process neatly into the Agile software development process. The process provides traceability of the Product Backlog to external documents on one hand and to user stories on the other. It also gives sufficient evidence, in the form of various statistics and reports, that the system will deliver the right functionality with good quality. In a nutshell, by overlaying a process on top of Agile without disturbing the agility, we are able to get synergistic benefits in terms of productivity, profitability, reporting, and end-to-end visibility for all stakeholders. The framework can be used for just-in-time requirements definition or to build a repository of requirements for future use. The goal is to make sure that the business (specifically, the product owner) can clearly articulate what needs to be built and define what is of high quality. To accomplish this, the requirements cycle follows a Scrum-like process that mirrors the development cycle but stays two to three steps ahead. The aim is to create a process by which requirements can be thoroughly vetted, organized, and communicated in a manner that is iterative, timely, and quality-focused. Agile is quickly becoming the most popular way of developing software because it fosters continuous improvement, time-boxed development cycles, and more quickly delivering value to the end users. That value will be driven to a large extent by the quality and clarity of the requirements that feed the software development process. An agile, lean, and timely approach to requirements as the starting point will help to ensure that the process is optimized.

Keywords: requirements management, Agile

Procedia PDF Downloads 354
743 Development of a Research Platform to Revitalize People-Forest Relationship Through a Cycle of Architectural Embodiments

Authors: Hande Ünlü, Yu Morishita

Abstract:

The total area of forest land in Japan accounts for 67% of the national land; however, despite this wealth and a hundred-year history of silviculture, Japanese forestry today faces socio-economic stagnation. The growing gap in the people-forest relationship has contributed to the depopulation of many forest villages. This paper introduces a methodology for developing a place-specific approach to revitalizing this relationship. It focuses on a case study from Taiki town in the Hokkaido region, analyzing the place's specific socio-economic requirements through interviews and workshops with local experts, researchers, and stakeholders. Based on the analyzed facts, a master outline of design requirements is developed to produce locally sourced architectural embodiments intended to act as a unifying element between the forests and the people of Taiki town. In parallel, the proposed methodology aims to generate a cycle of research feedback through researcher retreats (the term given by Memu Earth Lab to a researcher's stay at Memu in Taiki town for a defined period to analyze local resources) for the continuous improvement of the introduced methodology to revitalize the interaction between people and forest through architecture.

Keywords: architecture, Japanese forestry, local timber, people-forest relationship, research platform

Procedia PDF Downloads 163
742 Selective Laser Melting (SLM) Process and Its Influence on the Machinability of TA6V Alloy

Authors: Rafał Kamiński, Joel Rech, Philippe Bertrand, Christophe Desrayaud

Abstract:

Titanium alloys are among the most important materials in the aircraft industry, due to their low density, high strength, and corrosion resistance. However, these alloys are considered difficult to machine because they have poor thermal properties and high reactivity with cutting tools. The Selective Laser Melting (SLM) process has become increasingly popular in industry since it enables the design of new complex components that cannot be manufactured by standard processes. However, the high temperature reached during the melting phase, as well as the several rapid heating and cooling phases due to the movement of the laser, induce complex microstructures. These microstructures differ from the conventional equiaxed ones obtained by casting and forging. Parts obtained by SLM have to be machined in order to calibrate the dimensions and the surface roughness of functional surfaces. The ball milling technique is widely applied to finish complex shapes. However, the machinability of titanium is strongly influenced by the microstructure, so the objective of this work is to investigate the influence of the SLM process, i.e. of its microstructure, on the machinability of titanium compared to conventional forming processes. Machinability is analyzed by measuring surface roughness, cutting forces, and cutting tool wear over a range of cutting conditions (depth of cut ap, feed per tooth fz, spindle speed N) in accordance with industrial practice.

Keywords: ball milling, microstructure, surface roughness, titanium

Procedia PDF Downloads 279
741 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm

Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell

Abstract:

The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification for agricultural problems has historically been based on classical machine learning strategies that use hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization when the imaged elements have significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks; these networks have great learning capacity and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method based on deep convolutional neural networks for the task of disease-level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five disease-level categories were defined, in collaboration with agronomists, for the algorithm to classify. Disease-level assessments performed by experts provided ground-truth scores for the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% in discriminating the disease-level classes.
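
A minimal residual block of the kind such networks are built from (PyTorch sketch; the five output classes match the paper's disease-level categories, all other sizes are illustrative):

```python
# Minimal residual block and classifier head (illustrative sizes; only the
# five output classes correspond to the paper's disease-level categories)
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        # The skip connection lets gradients bypass the convolutions,
        # which is what makes very deep networks trainable
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)

head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 5))
net = nn.Sequential(nn.Conv2d(3, 64, 7, stride=2, padding=3),
                    ResidualBlock(64), ResidualBlock(64), head)
logits = net(torch.randn(2, 3, 224, 224))   # two RGB wheat-plot images -> 5 class scores
```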

Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks

Procedia PDF Downloads 308
740 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to improve the surface roughness of a part produced by a CNC milling machine. It presents a case study where the surface roughness of milled aluminum had to be reduced to eliminate defects and to improve the process capability indices Cp and Cpk of the CNC milling process. The Six Sigma DMAIC (define, measure, analyze, improve, and control) approach was applied to improve the process, reduce defects, and ultimately reduce costs. A Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that achieve the surface roughness targeted by the customer. An L9 orthogonal array was used in the Taguchi experimental design. The controllable factors identified were feed rate, depth of cut, and spindle speed, with surface roughness as the response; the non-controllable/noise factor was the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings were correct, and the new settings also improved the process capability index. The study shows that the Taguchi-based Six Sigma approach can be used efficiently to phase out defects and improve the process capability index of the CNC milling process.
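
The Taguchi analysis for a smaller-the-better response such as surface roughness uses the signal-to-noise ratio SN = -10·log10(mean(y²)); a short sketch with invented Ra values:

```python
# Sketch of the Taguchi signal-to-noise computation for a smaller-the-better
# response such as surface roughness (the Ra values below are invented)
import numpy as np

# L9 array: 9 runs, each replicated under the old-tool / new-tool noise conditions
ra = np.array([
    [0.82, 0.91], [0.75, 0.80], [0.66, 0.71],
    [0.58, 0.64], [0.49, 0.55], [0.61, 0.66],
    [0.44, 0.50], [0.52, 0.58], [0.47, 0.51],
])

# Smaller-the-better: SN = -10 * log10(mean(y^2)); the run with the highest
# SN ratio gives the most robust (lowest, least variable) roughness
sn = -10 * np.log10((ra ** 2).mean(axis=1))
print("best run:", sn.argmax() + 1, "SN ratios:", np.round(sn, 2))
```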

Keywords: CNC machining, six sigma, surface roughness, Taguchi methodology

Procedia PDF Downloads 231
739 Analysis of Veterinary Drug Residues and Pesticide Residues in Beehive Products

Authors: Alba Luna Jimenez, Maria Dolores Hernando

Abstract:

The administration of veterinary treatments at doses higher than those recommended for Varroa mite control has the potential to generate residues in the honeybee colony and in the derived products for consumption. Honeybee colonies can also be exposed indirectly to residues of plant protection products when foraging in crops, in wildflowers near the crops, or in urban gardens just after spraying. This study evaluates the presence of both types of residues, from veterinary treatments and from pesticides, in beeswax, bee bread, and honey. The study was carried out in apiaries located in agricultural zones and forest areas of Andalusia, Spain. Up to nineteen residues were identified above the LOQ using gas chromatography-triple quadrupole-mass spectrometry (GC-MS/MS). Samples were extracted by a modified QuEChERS method. Chlorfenvinphos was detected in beeswax and bee bread even though its use is not authorized for Varroa mite control. Residues of fluvalinate-tau, which is authorized as a veterinary treatment, were detected in most of the beeswax and bee bread samples, presumably due to overdosing or to its potential for accumulation associated with its marked liposolubility. Residues of plant protection products were also detected in beeswax and bee bread samples. Pesticide residues were detected above the LOQ, which was established at 5 µg.kg⁻¹, the minimum concentration that can be quantified with acceptable accuracy and precision, as described in the European guidelines for pesticide residue analysis SANTE/11945/2015. No residues of phytosanitary treatments used in agriculture were detected in honey.

Keywords: honeybee colony, mass spectrometry analysis, pesticide residues, Varroa destructor, veterinary treatment

Procedia PDF Downloads 143
738 An Investigation of Surface Texturing by Ultrasonic Impingement of Micro-Particles

Authors: Nagalingam Arun Prasanth, Ahmed Syed Adnan, S. H. Yeo

Abstract:

Surface topography plays a significant role in the functional performance of engineered parts. It is important to have control over the surface geometry and an understanding of the surface details to achieve the desired performance. Hence, in the current research contribution, a non-contact micro-texturing technique has been explored and developed. The technique involves ultrasonic excitation of a tool as the prime source of surface texturing for aluminum alloy workpieces. The specimen surface is polished first and then immersed in a liquid bath containing a 10% weight concentration of Ti6Al4V grade 5 spherical powder. A submerged slurry jet is used to recirculate the spherical powder under the ultrasonic horn, which is excited at an ultrasonic frequency of 40 kHz and an amplitude of 70 µm. The distance between the horn and the workpiece surface was kept fixed at 200 µm using a precision control stage. Texturing effects were investigated for processing times of 1, 3 and 5 s. Thereafter, the specimens were cleaned in an ultrasonic bath for 5 min to remove loose debris from the surface. The developed surfaces were characterized by optical microscopy and a contact surface profiler. The optical microscope images show a texture of circular spots indented on the workpiece surface by the titanium spherical balls, and the waviness patterns obtained from the contact surface profiler support the texturing effect produced by the proposed technique. Furthermore, water droplet tests were performed to show the efficacy of the proposed technique in developing hydrophilic surfaces and to quantify the texturing effect produced.

Keywords: surface texturing, surface modification, topography, ultrasonic

Procedia PDF Downloads 209
737 Caecotrophy Behaviour of the Rabbits (Oryctolagus cuniculus)

Authors: Awadhesh Kishore

Abstract:

One of the most distinctive characteristics of rabbit feeding behaviour is caecotrophy, which involves the excretion and immediate consumption of specific faeces known as soft faeces. Caecotrophy is instinctual in rabbits; reduced caecotrophy decreases rabbit growth and lipid synthesis in the liver. Caecotroph ingestion is highest when rabbits are fed a diet high in indigestible fibre. The colon produces two types of pellets: hard and soft. The hard pellets are expelled, but the soft pellets are re-ingested directly upon being expelled from the anus: the rabbit twists itself around and sucks in the pellets as they emerge. The type of alfalfa hay in the rabbits' feed does not affect volatile fatty acid concentration, the pattern of fermentation, or pH in the faeces. The caecal content and the soft faeces contain significant amounts of retinoids and carotenoids, while in the tissues (blood, liver, and kidney) these pigments do not occur in substantial amounts. Preventing caecotrophy reduced growth and altered lipid metabolism, motivating the development of new approaches for rabbit feeding and production; the relative abundance of genes related to metabolic pathways such as vitamin C and sugar metabolism, vitamin B2 metabolism, and bile secretion is depressed. The key microorganisms that regulate the rapid growth performance of rabbits may provide useful references for future research and the development of microecological preparations.

Keywords: caecocolonic microorganisms, caecotrophy, fasting caecotrophy, rabbits, soft pellets

Procedia PDF Downloads 35
736 Research on Autonomous Controllability of BeiDou Navigation Satellite System Based on Knowledge Transformation

Authors: Hang Ju, Changmin Zhu

Abstract:

As an important spatial information infrastructure, the development level of the BeiDou Navigation Satellite System (BDS) strongly reflects national defense strength. BDS can be used not only for military purposes, such as intelligence gathering, nuclear explosion monitoring, and emergency communications, but also for location services, transportation, mapping, and precision agriculture. In order to ensure national defense security and the wide application of BDS in civil and military areas, BDS must be autonomous and controllable. As a complex, knowledge-intensive system, BDS involves knowledge transformation throughout research and development, production, operation, and maintenance. From the perspective of knowledge transformation, and based on an analysis of the status quo and problems of the autonomy and control of BDS, this paper expounds the meaning of the socialization, externalization, combination, and internalization of knowledge transformation, and the coupling relationship between autonomy and control. An autonomous and controllable framework for BDS based on knowledge transformation is constructed from six dimensions: management capability, R&D capability, technical capability, manufacturing capability, service support capability, and application capability. It can support the smooth implementation of information security policy and provide references for the autonomy and control of the upstream and downstream BeiDou industrial chains, as well as for autonomous and controllable research on aerospace components, military measurement and test equipment, and other related industries.

Keywords: knowledge transformation, BeiDou Navigation Satellite System, autonomy and control, framework

Procedia PDF Downloads 163
735 Dynamic Fault Diagnosis for Semi-Batch Reactor Under Closed-Loop Control via Independent RBFNN

Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm

Abstract:

In this paper, a new robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, under cascade PI control. The scheme employs a radial basis function neural network (RBFNN), run in an independent mode, to model the process dynamics, and uses the weighted sum-squared prediction error as the residual. The recursive orthogonal least squares (ROLS) algorithm is employed to train the model, overcoming the training difficulty of the network's independent mode. A second RBFNN is then used as a fault classifier to isolate faults from the different features contained in the residual vector. Several actuator and sensor faults are simulated in a nonlinear Simulink model of the reactor, and the scheme is used to detect and isolate the faults on-line. The simulation results illustrate the effectiveness and robustness of the proposed method, even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise.
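
A schematic of the residual-generation step (Python sketch under stated assumptions; the model sizes, weights, and threshold are invented, not the Chylla-Haase configuration):

```python
# Sketch of residual generation with an independently-run RBF model: the
# model predicts from past inputs and its own past outputs, and the
# weighted squared prediction error is thresholded as the fault indicator.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def rbf_predict(x, centers, widths, w):
    # Gaussian hidden layer followed by a linear output layer
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * widths ** 2))
    return phi @ w

# Hypothetical trained RBF model: 20 hidden units over 4 regressors
centers, widths, w = rng.random((20, 4)), np.full(20, 0.5), rng.random(20)

x_k = np.array([0.3, 0.7, 0.2, 0.9])   # past inputs and past *model* outputs (independent mode)
y_meas = 0.41                          # measured reactor output at step k

e = y_meas - rbf_predict(x_k, centers, widths, w)
residual = 1.0 * e ** 2                # weighted sum-squared prediction error (one output here)
print("fault flagged:", residual > 0.05)   # threshold tuned on fault-free data
```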

Keywords: robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control

Procedia PDF Downloads 485
734 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil

Authors: P. S. Jain, K. D. Bobade, S. J. Surana

Abstract:

A simple, selective, precise, stability-indicating high-performance thin-layer chromatographic method for the analysis of naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase. The solvent system consisted of hexane: ethyl acetate: glacial acetic acid (4:4:2 v/v). This system was found to give a compact spot for naftopidil (Rf value of 0.43±0.02). Densitometric analysis of naftopidil was carried out in absorbance mode at 253 nm. The linear regression analysis data for the calibration plots showed a good linear relationship, with r²=0.999±0.0001 with respect to peak area, in the concentration range of 200-1200 ng per spot. The method was validated for precision, recovery, and robustness. The limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation, and thermal degradation. The drug degrades under acidic, basic, oxidative, and thermal conditions, indicating susceptibility to all four stress conditions. The degradation product was well resolved from the pure drug, with a significantly different Rf value. Statistical analysis proves that the method is repeatable, selective, and accurate for the estimation of the investigated drug. The proposed HPTLC method can be applied to the identification and quantitative determination of naftopidil in bulk drug and in pharmaceutical formulations.

Keywords: naftopidil, HPTLC, validation, stability, degradation

Procedia PDF Downloads 386
733 Technologic Information about Photovoltaic Applied in Urban Residences

Authors: Stephanie Fabris Russo, Daiane Costa Guimarães, Jonas Pedro Fabris, Maria Emilia Camargo, Suzana Leitão Russo, José Augusto Andrade Filho

Abstract:

Among renewable energy sources, solar energy is the one that has stood out. Solar radiation can be used as a thermal energy source and can also be converted into electricity through its effect on certain materials, as in thermoelectric and photovoltaic panels. These panels are often used to generate energy in homes, buildings, arenas, etc., and have low pollution emissions. Thus, a technological prospecting study was performed to find patents related to the use of photovoltaic panels in urban residences. The patent search was carried out in ESPACENET, associating the keywords photovoltaic and home in the title and abstract fields, and returned 136 patent documents in the period 1994-2015. The years 2009, 2010, 2011, 2012, 2013 and 2014 had the highest numbers of applications, with 11, 13, 23, 29, 15 and 21, respectively. Regarding the depositing country, China clearly leads with 67 patent deposits, followed by Japan with 38 patent applications. It is important to note that most depositors are companies (50%), while 44% are individual inventors and only 6% are universities. Regarding International Patent Classification (IPC) codes, the most frequent classification in the results was H02J3/38, which covers arrangements for the parallel feeding of a single network by two or more generators, converters or transformers. Among all categories, section H (Electricity) stands out, accounting for 70% of the patents.

Keywords: photovoltaic, urban residences, technology forecasting, prospecting

Procedia PDF Downloads 279
732 Fault Analysis of Induction Machine Using Finite Element Method (FEM)

Authors: Wiem Zaabi, Yemna Bensalem, Hafedh Trabelsi

Abstract:

The paper presents a finite element (FE) based efficient analysis procedure for an induction machine (IM). Two FE formulation approaches are proposed to achieve this goal: the magnetostatic formulation and the non-linear, transient, time-stepped formulation. Studies based on finite element models offer much more information on the phenomena characterizing the operation of electrical machines than classical analytical models, which explains the growing interest in finite element investigations of electrical machines. Based on finite element models, this paper studies the influence of stator and rotor faults on the behavior of the IM. A simple dynamic model of an IM with an inter-turn winding fault and a broken-bar fault is presented, and this fault model is used to study the IM under various fault conditions and severities. Simulations are conducted to validate the fault model for different levels of fault severity, and the comparison of the simulation results verifies the precision of the proposed FEM model. The paper then presents a technique based on Fast Fourier Transform (FFT) analysis of the stator current and the electromagnetic torque to detect broken rotor bar faults. The technique used and the results obtained clearly show the possibility of extracting signatures to detect and locate faults.
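
In FFT-based diagnosis, a broken rotor bar appears as sidebands at (1 ± 2s)f around the supply frequency f in the stator current spectrum, where s is the slip; a synthetic check of that signature:

```python
# Synthetic check of the broken-bar FFT signature: sidebands at (1 +/- 2s)*f
# around the supply frequency f in the stator current (values are invented)
import numpy as np

fs, f, s = 10_000.0, 50.0, 0.04                  # sample rate, supply frequency, slip
t = np.arange(0, 10, 1 / fs)                     # 10 s record -> 0.1 Hz resolution
current = np.sin(2 * np.pi * f * t)              # fundamental component
current += 0.02 * np.sin(2 * np.pi * (1 - 2 * s) * f * t)   # lower fault sideband
current += 0.02 * np.sin(2 * np.pi * (1 + 2 * s) * f * t)   # upper fault sideband

spec = 2 * np.abs(np.fft.rfft(current)) / len(t)  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for target in ((1 - 2 * s) * f, (1 + 2 * s) * f):
    k = np.argmin(np.abs(freqs - target))
    print(f"{freqs[k]:.1f} Hz sideband amplitude: {spec[k]:.4f}")
```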

Keywords: finite element method (FEM), induction motor (IM), short-circuit fault, broken rotor bar, fast Fourier transform (FFT) analysis

Procedia PDF Downloads 282
731 The Network Relative Model Accuracy (NeRMA) Score: A Method to Quantify the Accuracy of Prediction Models in a Concurrent External Validation

Authors: Carl van Walraven, Meltem Tuna

Abstract:

Background: Network meta-analysis (NMA) quantifies the relative efficacy of 3 or more interventions from studies containing a subgroup of interventions. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct inclusion criteria that are evaluated on a common population (a 'concurrent external validation'). Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that accounted for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which ranges from -∞ through 0 (accuracy of random guessing) to 1 (accuracy of the most accurate model in the concurrent external validation). Results: The unbiased prediction model had the highest SBS. The NeRMA score correctly ranked all simulated prediction models by the extent of their bias from the known risk function. A SAS macro and an R function were created to implement the NeRMA Score. Conclusions: The NeRMA Score makes it possible to quantify the accuracy of binomial prediction models having distinct inclusion criteria in a concurrent external validation.
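
A minimal sketch of the Scaled Brier Score used as the per-model accuracy measure (the simulation numbers follow the setup above; the SBS is assumed to take its usual 1 - BS/BS_ref form):

```python
# Sketch of the Scaled Brier Score: SBS = 1 - BS/BS_ref, where BS_ref is the
# Brier score of assigning every patient the overall event rate, so 0 means
# no better than the average rate and 1 means perfect prediction
import numpy as np

def scaled_brier_score(y, p):
    bs = np.mean((p - y) ** 2)          # Brier score of the model
    bs_ref = np.mean((y.mean() - y) ** 2)  # reference: predict the prevalence
    return 1 - bs / bs_ref

rng = np.random.default_rng(0)
true_risk = rng.uniform(0.05, 0.6, 5000)   # known risk function, 5000 patients
events = rng.binomial(1, true_risk)

print(scaled_brier_score(events, true_risk))                        # unbiased model
print(scaled_brier_score(events, np.clip(true_risk + 0.2, 0, 1)))   # biased model scores lower
```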

Keywords: prediction model accuracy, scaled brier score, fixed effects methods, concurrent external validation

Procedia PDF Downloads 214
730 Clinical Pharmacology Throughout the World: A View from Global Health

Authors: Ragy Raafat Gaber Attaalla

Abstract:

Despite having the highest rates of mortality and morbidity in the world, low- and middle-income (LMIC) nations trail high-income nations in the number of clinical trials, the number of qualified researchers, and the amount of research information specific to their populations. A lack of local genomic data, of clinical pharmacology and pharmacometrics competence, and of training opportunities can aggravate health inequities and hamper the use of precision medicine. These issues can be addressed by developing health care infrastructure, including data gathering and well-designed clinical pharmacology training, in LMICs. International cooperation aimed at enhancing education and infrastructure and promoting locally motivated clinical trials and research will also be advantageous. This paper outlines various instances where clinical pharmacology knowledge could be put to use, including pharmacogenomic opportunities that could lead to better clinical guideline recommendations. Examples of how clinical pharmacology training can be successfully implemented in LMICs are also provided, including clinical pharmacology and pharmacometrics training programmes in Africa and a Tanzanian researcher's personal experience during a training sabbatical in the United States. These training initiatives will profit from advocacy for clinical pharmacologists' employment prospects and career development pathways, which are gradually becoming acknowledged and established in LMICs. Advancing training and research infrastructure to increase clinical pharmacology expertise in LMICs would be extremely beneficial, because clinical pharmacologists have a significant role to play in global health.

Keywords: low- and middle-income, clinical pharmacology, pharmacometrics, career development pathways

Procedia PDF Downloads 59
729 Optimization of End Milling Process Parameters for Minimization of Surface Roughness of AISI D2 Steel

Authors: Pankaj Chandna, Dinesh Kumar

Abstract:

The present work analyses different end milling parameters to minimize the surface roughness of AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices determining the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality by optimizing machining parameters is a challenging job. For mating components, surface roughness becomes even more essential, since such quality characteristics are highly correlated and are expected to be influenced directly or indirectly by the process parameters and their interactive effects on the process environment. In this work, the effects of the selected process parameters on surface roughness, and the subsequent setting of the parameter levels, have been accomplished using Taguchi's parameter design approach. The experiments were performed with the combinations of process parameter levels suggested by an L9 orthogonal array. The end milling of AISI D2 steel with a carbide tool was investigated experimentally by varying feed, speed and depth of cut, and the surface roughness was measured using a surface roughness tester. Analyses of variance were performed on the mean and the signal-to-noise ratio to estimate the contribution of the different process parameters to the process.

Keywords: D2 steel, orthogonal array, optimization, surface roughness, Taguchi methodology

Procedia PDF Downloads 528
728 Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques

Authors: S. Visetpotjanakit, C. Khrautongkieo

Abstract:

Radiation monitoring in the environment and in foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP), the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and environment from any radiological incident. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods: direct gamma-ray counting for determining ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and developed radiochemical techniques for analysing ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria for accuracy, precision and trueness and obtained 'Accepted' status, confirming the quality of the data from the OAP environmental radiation laboratory used to monitor radiation in the environment.

Keywords: international atomic energy agency, proficiency test, radiation monitoring, seawater

Procedia PDF Downloads 162
727 Optimal Dynamic Regime for CO Oxidation Reaction Discovered by Policy-Gradient Reinforcement Learning Algorithm

Authors: Lifar M. S., Tereshchenko A. A., Bulgakov A. N., Guda S. A., Guda A. A., Soldatov A. V.

Abstract:

Metal nanoparticles are widely used as heterogeneous catalysts to activate adsorbed molecules and reduce the energy barrier of a reaction. Reaction product yield depends on the interplay between the elementary processes: adsorption, activation, reaction, and desorption. These processes, in turn, depend on the inlet feed concentrations, temperature, and pressure. At stationary conditions, the active surface sites may be poisoned by reaction byproducts or blocked by thermodynamically adsorbed gaseous reagents, so the yield of reaction products can drop significantly. Dynamic control, by contrast, accounts for changes in the surface properties and adjusts the reaction parameters accordingly, and may therefore be more efficient than stationary control. In this work, a reinforcement learning algorithm is applied to control a simulation of CO oxidation on a catalyst. A policy-gradient algorithm is trained to maximize the CO₂ production rate by setting the CO and O₂ flows at each time step. Nonstationary solutions were found for the regime with surface deactivation. The maximal product yield was achieved with periodic variations of the gas flows, ensuring a balance between available adsorption sites and the concentration of activated intermediates. This methodology opens a perspective for the optimization of catalytic reactions under nonstationary conditions.
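
A toy REINFORCE sketch of this control loop (the two-line "reactor" and the state-independent policy are invented stand-ins for the authors' kinetic simulation, shown only to illustrate the policy-gradient update):

```python
# Toy REINFORCE sketch: a softmax policy over two CO-flow settings is
# updated to maximize CO2 production; the "reactor" is an invented
# stand-in for the authors' kinetic simulation
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                          # policy logits over {low CO flow, high CO flow}

def reactor_step(action, coverage):
    # Stand-in kinetics: high CO flow boosts the rate but poisons the surface
    coverage = 0.9 * coverage + (0.3 if action == 1 else 0.05)
    rate = (1.0 if action == 1 else 0.5) * max(0.0, 1.0 - coverage)
    return rate, coverage

for episode in range(2000):
    probs = np.exp(theta) / np.exp(theta).sum()
    coverage, grads, rewards = 0.0, [], []
    for _ in range(50):
        a = rng.choice(2, p=probs)
        reward, coverage = reactor_step(a, coverage)
        grads.append(np.eye(2)[a] - probs)   # grad of log pi(a) for a softmax policy
        rewards.append(reward)
    returns = np.cumsum(rewards[::-1])[::-1] # return-to-go from each step
    baseline = returns.mean()
    for g, G in zip(grads, returns):
        theta += 0.001 * g * (G - baseline)  # REINFORCE update with a baseline

print("learned action probabilities:", np.exp(theta) / np.exp(theta).sum())
```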

Keywords: artificial intelligence, catalyst, CO oxidation, reinforcement learning, dynamic control

Procedia PDF Downloads 105
726 Exploitation of Endophytes for the Management of Plant Pathogens

Authors: N. P. Eswara Reddy, S. Thahir Basha

Abstract:

Here, we report success stories of potential leaf, seed and root endophytes against soil-borne as well as foliar plant pathogens, supporting produce that remains nutritionally adequate and safe for consumption. Endophytes, the microorganisms that reside asymptomatically in the tissues of higher plants, are a robust source of potential biocontrol agents, and their survival ability is presumed to be better than that of phylloplane microflora. Of all 68 putative leaf endophytes, the two that were superior in controlling Colletotrichum gloeosporioides, the cause of mango anthracnose, each achieving 100% control, were identified as Brevundimonas bullata (EB9) and Bacillus thuringiensis (EB35); they further delayed the ripening of mango fruits by up to 21 days. Likewise, the seed endophyte GSE-4, effective against Sclerotium rolfsii causing stem rot of groundnut, was identified as an Achromobacter sp., and the root endophyte REB-8, effective against Rhizoctonia bataticola causing dry root rot of chickpea, was identified as Bacillus subtilis; both recorded the lowest percent disease incidence (PDI) and increased plant growth promotion, respectively. Further, the novel Bacillus subtilis (SEB-2), effective against Macrophomina phaseolina causing charcoal rot of sunflower, provides ample scope for exploiting endophytes on a large scale. The talc-based formulations of these endophytes that have been developed can be commercialized after toxicological studies. The bottom line is that these unexplored endophytes are the need of the hour against aggressive plant pathogens, both to maintain the quality and abundance of food and feed and to bring marginal economic gains to farmers; these aspects will be discussed.

Keywords: endophytes, plant pathogens, commercialization, abundance of food

Procedia PDF Downloads 406
725 Prediction of Music Track Popularity: A Machine Learning Approach

Authors: Syed Atif Hassan, Luv Mehta, Syed Asif Hassan

Abstract:

Hit song science is a field of investigation wherein machine learning techniques are applied to music tracks in order to extract features from audio signals that capture information which could explain the popularity of the respective tracks. Record companies invest huge amounts of money into recruiting fresh talent and churning out new music each year; gaining insight into what makes a song popular would bring tremendous benefits to the music industry. This paper aims to extract basic musical and more advanced acoustic features from songs while also taking into account external factors that play a role in making a particular song popular. We use a dataset derived from popular Spotify playlists divided by genre. We use ten genres (blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, rock), chosen on the basis of the clear-to-ambiguous delineation of their typical sounds. We feed these features into three different classifiers, namely an SVM with RBF kernel, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model at the end. Predicting song popularity is particularly important for the music industry, as it would allow record companies to produce better content for the masses, resulting in a more competitive market.
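
A sketch of the first of the three models, an SVM with RBF kernel (scikit-learn; the feature matrix is a synthetic stand-in for the extracted audio and external features):

```python
# Sketch of the first of the three models (an SVM with RBF kernel) on a
# stand-in feature matrix; the real inputs are the extracted audio and
# external features described above
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((1000, 12))                   # 1000 tracks x 12 extracted features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.2, 1000) > 0.9).astype(int)  # 1 = popular

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# Scaling matters for RBF kernels: feature distances drive the kernel values
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm.fit(X_tr, y_tr)
print("held-out accuracy:", svm.score(X_te, y_te))
```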

Keywords: classifier, machine learning, music tracks, popularity, prediction

Procedia PDF Downloads 637
724 Optimizing Microwave Assisted Extraction of Anti-Diabetic Plant Tinospora cordifolia Used in Ayush System for Estimation of Berberine Using Taguchi L-9 Orthogonal Design

Authors: Saurabh Satija, Munish Garg

Abstract:

The present work reports an efficient extraction method, using a microwave-based solvent-sample dual-heating mechanism, for the extraction of the important anti-diabetic plant Tinospora cordifolia from the AYUSH system, for the estimation of berberine content. The process is based on the simultaneous heating of the sample matrix and the extracting solvent under microwave energy. Methanol was used as the extracting solvent, as it has excellent berberine-solubilizing power and warms up quickly under microwaves, attributable to its high dissipation factor. Extraction conditions such as irradiation time, microwave power, solute-solvent ratio and temperature were optimized using a Taguchi design, and berberine was quantified using high-performance thin-layer chromatography. The optimized parameters, in rank order, were microwave power (rank 1), irradiation time (rank 2) and temperature (rank 3). This extraction mechanism under dual heating provided a choice of extraction parameters offering better precision and higher yield, with a significant reduction in extraction time under optimum conditions. The developed protocol will allow higher amounts of berberine, the major anti-diabetic moiety in Tinospora cordifolia, to be extracted, which can lead to the development of cheaper formulations of the plant and can help in the wider prevention of diabetes.

Keywords: berberine, microwave, optimization, Taguchi

Procedia PDF Downloads 324
723 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials

Authors: Rajesh Kumar G

Abstract:

A challenging aspect of any clinical trial is to plan the study design carefully so as to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, there is not much historical data available for pediatrics, and such data are required to validate assumptions when planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so a pediatric study can be well planned using the historical information from the adult study together with available pediatric pilot-study data or simulated pediatric data. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and exploiting the advantages of a hybrid study design, which helps achieve the study objective smoothly even in the presence of many constraints. This paper explains how a hybrid study design can be planned, along with an integrated technique (SEV), for the pediatric study. In brief, the SEV technique (Simulation, Estimation, Validation) simulates the planned study data, obtains the desired estimates using borrowed adult data and Bayesian methods, and validates the design assumptions. This method of validation can be used to improve the accuracy of the data analysis, ensuring that results are as valid and reliable as possible, and allows informed decisions to be made well ahead of study initiation. This technique, based on the collected data, gives insight into best practices when using data from a historical study and simulated data alike.
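
One common way to borrow adult data with Bayesian methods, consistent with the estimation step described above, is a power prior that down-weights the adult likelihood; a beta-binomial sketch under stated assumptions (the counts and the weight a0 are invented, and this is not necessarily the authors' exact model):

```python
# Power-prior sketch for a binary endpoint: the adult counts are
# discounted by a weight a0 in [0, 1] before being combined with the
# pediatric data under a Beta(1, 1) prior (all numbers are invented)
from scipy import stats

# Historical adult study and (small) pediatric study: responders / n
adult_x, adult_n = 120, 300
ped_x, ped_n = 8, 25
a0 = 0.4                       # borrowing weight: 0 = ignore adults, 1 = pool fully

# Beta-binomial conjugacy: the power prior adds a0-discounted adult counts
post = stats.beta(1 + a0 * adult_x + ped_x,
                  1 + a0 * (adult_n - adult_x) + (ped_n - ped_x))
lo, hi = post.ppf([0.025, 0.975])
print(f"posterior mean {post.mean():.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")
```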

Keywords: adaptive design, simulation, borrowing data, bayesian model

Procedia PDF Downloads 57
722 A Sensor Placement Methodology for Chemical Plants

Authors: Omid Ataei Nia, Karim Salahshoor

Abstract:

In this paper, a new precise and reliable sensor network design methodology is introduced for unit processes and operations, using the constriction coefficient particle swarm optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design. Furthermore, a square root unscented Kalman filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability and reliability, together with importance-of-variables (IVs) as a novel measure in the instrumentation criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take the importance of variables into account in the sensor network design procedure. In this paper, a specific weight is assigned to each sensor measuring a process variable in the network, to indicate the importance of that variable over the others for the ultimate sensor network application requirements. A set of distinct scenarios has been conducted to evaluate the performance of the proposed methodology on a simulated continuous stirred tank reactor (CSTR), a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to a significant improvement in accuracy with respect to alternative sensor network design approaches and, as a novel achievement, securing the allocation of sensors to the most important process variables.
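
The constriction-coefficient velocity update that defines CPSO (due to Clerc and Kennedy) in a short sketch; the sphere function stands in for the actual sensor-network design criterion:

```python
# Sketch of the constriction-coefficient PSO update: with phi = c1 + c2 > 4,
# chi < 1 damps the velocities and ensures convergence without an explicit
# velocity clamp (the sphere objective is a placeholder for the design criterion)
import numpy as np

c1 = c2 = 2.05
phi = c1 + c2
chi = 2 / abs(2 - phi - np.sqrt(phi ** 2 - 4 * phi))   # ~0.7298

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=1)                   # placeholder objective

x = rng.uniform(-5, 5, (30, 10))                       # 30 particles, 10 dimensions
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), f(x)
for _ in range(200):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    x = x + v
    fx = f(x)
    better = fx < pbest_f
    pbest[better], pbest_f[better] = x[better], fx[better]

print("best objective:", pbest_f.min())
```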

Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter

Procedia PDF Downloads 146
721 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a condition of irreversible blindness; early diagnosis and appropriate intervention can help patients retain vision for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when pressure builds around the eyes, causing damage to the optic nerves and deterioration of vision. The disease progresses through different levels, up to blindness. Diagnosis at an early stage offers a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photo (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. With its better accuracy and faster imaging, OCT has become the most common method used by experts. Despite the precision and quickness of OCT and HRT imaging, difficulties remain and mistakes still occur in the diagnosis of glaucoma, especially in the early stages, and it is difficult to obtain objective results from the doctor's diagnostic and staging process. It therefore seems very important to develop an objective decision support system for diagnosing and grading glaucoma for patients. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for doctors' use. Pattern-recognition-based computer software would help doctors make an objective evaluation of their patients. After the development and evaluation of the software, the system is planned to serve doctors in different hospitals.

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 277