Search results for: energy performance gap
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18793

10363 A Validated High-Performance Liquid Chromatography-UV Method for Determination of Malondialdehyde-Application to Study in Chronic Ciprofloxacin Treated Rats

Authors: Anil P. Dewani, Ravindra L. Bakal, Anil V. Chandewar

Abstract:

The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV detection for the determination of malondialdehyde as the malondialdehyde-thiobarbituric acid complex (MDA-TBA) in vivo in rats. The HPLC-UV method for MDA-TBA was run in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min⁻¹, followed by UV detection at 278 nm. The chromatographic conditions were optimized by varying the concentration and pH, followed by changes in the percentage of organic phase. The optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile in the ratio 80:20 % v/v. The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, estimated from the standard deviation and the slope of the calibration curve, were 110 ng/ml and 363 ng/ml respectively. The method was linear for MDA spiked into plasma and subjected to derivatization at concentrations ranging from 100 to 1000 ng/ml. The precision of the developed method, measured in terms of relative standard deviations for intra-day and inter-day studies, was 1.6–5.0% and 1.9–3.6% respectively. The HPLC method was applied to monitor MDA levels in rats subjected to chronic treatment with ciprofloxacin (CFL) (5 mg/kg/day) for 21 days. Results were compared with findings in control-group rats. Mean peak areas of the two study groups were compared using an unpaired Student's t-test. The p-value was < 0.001, indicating a significant increase in MDA levels in rats subjected to chronic CFL treatment for 21 days.
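
The LOD and LOQ quoted above follow the standard calibration-curve approach (LOD = 3.3·σ/S, LOQ = 10·σ/S, where σ is the standard deviation of the response and S the calibration slope). A minimal sketch of that calculation is given below; the calibration points are illustrative placeholders, not the study's data.

```python
# Illustrative sketch: calibration-curve LOD/LOQ estimation (LOD = 3.3*sd/slope, LOQ = 10*sd/slope).
# The concentration/response pairs below are placeholders, not the study's raw calibration data.
import numpy as np

conc = np.array([100, 250, 500, 750, 1000], dtype=float)   # ng/mL (spiked MDA)
area = np.array([1.02, 2.55, 5.10, 7.70, 10.20])            # detector response (arbitrary units)

slope, intercept = np.polyfit(conc, area, 1)                 # linear calibration fit
residual_sd = np.std(area - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope    # limit of detection
loq = 10.0 * residual_sd / slope   # limit of quantification
print(f"LOD ≈ {lod:.0f} ng/mL, LOQ ≈ {loq:.0f} ng/mL")
```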

Keywords: MDA, TBA, ciprofloxacin, HPLC-UV

Procedia PDF Downloads 313
10362 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments

Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles

Abstract:

This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called the sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-collision (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.

Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal

Procedia PDF Downloads 32
10361 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, there is a set of hyperparameters available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, namely pre-trained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image size under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of the convolutional filters and the image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (Nvidia DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly related to the datasets. For the batch size evaluation, it has been shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting the value 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 decreases the accuracy rate at the 11th epoch to 86.5%, and to 63% when using one epoch only. On the other hand, the choice of kernel size is loosely related to the dataset. From a practical point of view, the filter size of 20 produces an accuracy of 70.4286%. The final image-size experiment shows that accuracy improves with image size; however, this gain comes at a considerable computational expense. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
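
The interaction between filter size and input image size noted above follows from the convolution output-size arithmetic, output = ⌊(W − K + 2P)/S⌋ + 1. The short sketch below, assuming stride 1 and no padding (which may differ from the exact AlexNet configuration used), tabulates the resulting feature-map sizes for the tested combinations.

```python
# Sketch of the kernel-size vs. image-size arithmetic behind the experiments above.
# Assumes stride 1 and no padding; the actual AlexNet configuration may differ.
def conv_output_size(image, kernel, stride=1, padding=0):
    return (image - kernel + 2 * padding) // stride + 1

image_sizes = [64, 96, 128, 180, 224]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]

for img in image_sizes:
    for k in kernel_sizes:
        out = conv_output_size(img, k)
        print(f"image {img:>3} x {img:<3} kernel {k:>2}: feature map {out} x {out}")
```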

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 158
10360 Producing Sustained Renewable Energy and Removing Organic Pollutants from Distillery Wastewater using Consortium of Sludge Microbes

Authors: Anubha Kaushik, Raman Preet

Abstract:

Distillery wastewater in the form of spent wash is a complex and strong industrial effluent with a high load of organic pollutants that may deplete dissolved oxygen on being discharged into aquatic systems and contaminate groundwater by leaching of pollutants, while untreated spent wash disposed on land acidifies the soil. Stringent legislative measures have therefore been framed in different countries for discharge standards of distillery effluent. Utilising the organic pollutants present in various types of wastes as food for mixed microbial populations has emerged as an eco-friendly approach in recent years, in which complex organic matter is converted into simpler forms while useful gases are simultaneously produced as renewable and clean energy sources. In the present study, wastewater from a rice-bran-based distillery has been used as the substrate in a dark fermenter, and a native microbial consortium from the digester sludge has been used as the inoculum to treat the wastewater and produce hydrogen. After optimising the operational conditions in batch reactors, sequential batch mode and continuous-flow stirred tank reactors were used to study the best operational conditions for enhanced and sustained hydrogen production and removal of pollutants. Since the rate of hydrogen production by the microbial consortium during dark fermentation is influenced by the concentration of organic matter, pH and temperature, these operational conditions were optimised in batch-mode studies. The maximum hydrogen production rate (347.87 ml/L/d) was attained in 32 h of dark fermentation, while a good proportion of the COD was also removed from the wastewater. A slightly acidic initial pH seemed to favour biohydrogen production. In the continuous stirred tank reactor, high H2 production from distillery wastewater was obtained at a relatively short substrate retention time (SRT) of 48 h and a moderate organic loading rate (OLR) of 172 g/l/d COD.

Keywords: distillery wastewater, hydrogen, microbial consortium, organic pollution, sludge

Procedia PDF Downloads 271
10359 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients

Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming

Abstract:

Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at the computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
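
At their core, such constraint-based models are analysed by flux balance analysis: a linear programme that maximises an objective flux subject to steady-state stoichiometry (S·v = 0) and flux bounds. The toy sketch below illustrates that formulation on a made-up three-reaction network; it is not the authors' mitochondrial reconstruction, which would normally be handled with a dedicated toolbox such as the COBRA suite.

```python
# Toy flux balance analysis (FBA) sketch: maximise an objective flux subject to
# S · v = 0 and flux bounds. The network below is illustrative, not the paper's model.
import numpy as np
from scipy.optimize import linprog

# Columns: v1 (uptake), v2 (conversion), v3 (objective/secretion); rows: metabolites A, B.
S = np.array([[1, -1,  0],
              [0,  1, -1]], dtype=float)

bounds = [(0, 10), (0, 10), (0, 10)]   # lower/upper flux bounds
c = np.array([0, 0, -1.0])             # linprog minimises, so negate to maximise v3

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)        # expected: [10, 10, 10]
```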

Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry

Procedia PDF Downloads 281
10358 Cupric Oxide Thin Films for Optoelectronic Application

Authors: Sanjay Kumar, Dinesh Pathak, Sudhir Saralch

Abstract:

Copper oxide is a semiconductor that has been studied for several reasons, such as the natural abundance of the starting material copper (Cu), the ease of production by Cu oxidation, its non-toxic nature and its reasonably good electrical and optical properties. Copper(II) oxide (CuO), well known as cupric oxide, is a p-type semiconductor with a band gap energy of 1.21 to 1.51 eV. As a p-type semiconductor, conduction arises from the presence of holes in the valence band (VB) due to doping/annealing. CuO is attractive as a selective solar absorber since it has high solar absorbency and low thermal emittance. CuO is a very promising candidate for solar cell applications, as it is a suitable material for photovoltaic energy conversion. It has been demonstrated that the dip technique can be used to deposit CuO films in a simple manner using a metallic chloride (CuCl₂.2H₂O) as the starting material. Copper oxide films are prepared using a methanolic solution of cupric chloride (CuCl₂.2H₂O) at three baking temperatures. Three samples were made, which turned black after heating. XRD data confirm that the films are of the CuO phase at a particular temperature. The optical band gap of the CuO films calculated from optical absorption measurements is 1.90 eV, which is quite comparable to the reported value. The dip technique is a very simple and low-cost method, which requires no sophisticated specialized setup. Coating of a substrate with a large surface area can be obtained more easily by this technique than by physical evaporation techniques and spray pyrolysis. Another advantage of the dip technique is that it is very easy to coat both sides of the substrate instead of only one, and to deposit onto otherwise inaccessible surfaces. This method is well suited for applying coatings on the inner and outer surfaces of tubes of various diameters and shapes. The main advantage of the dip coating method lies in the fact that it is possible to deposit a variety of layers having good homogeneity and mechanical and chemical stability with a very simple setup. In this paper, the preparation of CuO thin films by the dip coating method and their characterization will be presented.
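
The band gap quoted above (1.90 eV) is typically extracted from absorption spectra via a Tauc plot, extrapolating the linear part of (αhν)² versus hν to the energy axis for a direct allowed transition. A hedged sketch of that extrapolation is shown below on synthetic data, since the measured CuO spectra are not reproduced here.

```python
# Tauc-plot band-gap estimation sketch (direct allowed transition, (alpha*hv)^2 vs hv).
# The absorption data below are synthetic placeholders, not the measured CuO spectra.
import numpy as np

hv = np.linspace(1.5, 3.0, 200)                  # photon energy, eV
Eg_true = 1.90
diff = np.clip(hv - Eg_true, 0.0, None)
alpha = 1.0e4 * np.sqrt(diff) / hv               # synthetic absorption coefficient, cm^-1

y = (alpha * hv) ** 2                            # Tauc quantity for a direct gap
mask = (y > 0.2 * y.max()) & (y < 0.8 * y.max()) # heuristic linear-region window
slope, intercept = np.polyfit(hv[mask], y[mask], 1)

Eg_est = -intercept / slope                      # x-intercept of the linear fit
print(f"estimated band gap ≈ {Eg_est:.2f} eV")
```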

Keywords: absorber material, cupric oxide, dip coating, thin film

Procedia PDF Downloads 300
10357 Efficiency and Scale Elasticity in Network Data Envelopment Analysis: An Application to International Tourist Hotels in Taiwan

Authors: Li-Hsueh Chen

Abstract:

Efficient operation is increasingly important for hotel managers. Unlike the manufacturing industry, hotels cannot store their products. In addition, many hotels provide room service and food and beverage service simultaneously. When the efficiencies of hotels are evaluated, the internal structure should be considered. Hence, based on the operational characteristics of hotels, this study proposes a DEA model to simultaneously assess the efficiencies of the room production division, food and beverage production division, room service division and food and beverage service division. However, not only the enhancement of efficiency but also the adjustment of scale can improve performance. In terms of the adjustment of scale, scale elasticity or returns to scale can help managers make decisions concerning expansion or contraction. In order to construct a reasonable approach to measure the efficiencies and scale elasticities of hotels, this study builds an alternative variable-returns-to-scale-based two-stage network DEA model combining parallel and series structures to explore the scale elasticities of the whole system, the room production division, the food and beverage production division, the room service division and the food and beverage service division, based on data from the international tourist hotel industry in Taiwan. The results may provide valuable information on operational performance and scale for managers and decision makers.
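
For orientation, the sketch below sets up the much simpler single-stage, input-oriented, variable-returns-to-scale (BCC) envelopment programme with illustrative inputs and outputs; the paper's actual model is a two-stage network DEA with parallel and series structures and is considerably more involved.

```python
# Minimal input-oriented, variable-returns-to-scale (BCC) DEA sketch with scipy.
# Placeholder data; the study's two-stage network DEA model is more elaborate.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 30, 40, 25]], dtype=float)   # inputs (rows) x DMUs (columns)
Y = np.array([[10, 15, 16, 14]], dtype=float)   # outputs (rows) x DMUs (columns)
n = X.shape[1]

def bcc_efficiency(o):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(X.shape[0]):                  # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[i, o], X[i, :]]); b_ub.append(0.0)
    for r in range(Y.shape[0]):                  # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[r, :]]); b_ub.append(-Y[r, o])
    A_eq = [np.r_[0.0, np.ones(n)]]              # VRS convexity: sum lambda_j = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {bcc_efficiency(o):.3f}")
```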

Keywords: efficiency, scale elasticity, network data envelopment analysis, international tourist hotel

Procedia PDF Downloads 215
10356 The Effect of Perceived Environmental Uncertainty on Corporate Entrepreneurship Performance: A Field Study in a Large Industrial Zone in Turkey

Authors: Adem Öğüt, M. Tahir Demirsel

Abstract:

Rapid changes and developments today, besides the opportunities and facilities they offer to the organization, may also be a source of danger and difficulties due to the uncertainty. In order to take advantage of opportunities and to take the necessary measures against possible uncertainties, organizations must always follow the changes and developments that occur in the business environment and develop flexible structures and strategies for the alternative cases. Perceived environmental uncertainty is an outcome of managers’ perceptions of the combined complexity, instability and unpredictability in the organizational environment. An environment that is perceived to be complex, changing rapidly, and difficult to predict creates high levels of uncertainty about the appropriate organizational responses to external circumstances. In an uncertain and complex environment, organizations experiencing cutthroat competition may be successful by developing their corporate entrepreneurial ability. Corporate entrepreneurship is a process that includes many elements such as innovation, creating new business, renewal, risk-taking and being predictive. Successful corporate entrepreneurship is a critical factor which has a significant contribution to gain a sustainable competitive advantage, to renew the organization and to adapt the environment. In this context, the objective of this study is to investigate the effect of perceived environmental uncertainty of managers on corporate entrepreneurship performance. The research was conducted on 222 business executives in one of the major industrial zones of Turkey, Konya Organized Industrial Zone (KOS). According to the results, it has been observed that there is a positive statistically significant relationship between perceived environmental uncertainty and corporate entrepreneurial activities.

Keywords: corporate entrepreneurship, entrepreneurship, industrial zone, perceived environmental uncertainty, uncertainty

Procedia PDF Downloads 305
10355 The Effect of Different Exercise Intensities on Plasma Endostatin in Healthy Volunteers

Authors: Inayat Shah, Muhammad Omar Malik, Ghareeb Alshuwaier, Ronald H. Baxendale

Abstract:

Background: The balance between angiogenesis and angiostasis is important in growth and developmental processes in the body. Angiogenic and angiostatic mediators control this balance, and endostatin is one of the prominent angiostatic mediators. The marked angiostatic effects of endostatin include inhibition of endothelial cell migration and proliferation and induction of apoptosis. Physical activity decreases the risk and development of many angiogenesis-related health problems, including atherosclerosis and numerous cancers. The physiological influences of different physical activities on plasma endostatin concentration are controversial and not completely clear. Moreover, the relationship of physical characteristics and metabolic predictors during physical activity to circulating endostatin is indistinct and poorly understood. The study aimed to determine the effects of mild, moderate and vigorous exercise on the concentration of endostatin in plasma. Methodology: 22 participants, 16 males (age = 30.6 ± 7.8 years) and 6 females (age = 26.5 ± 5 years), were recruited. Weekly sessions of different exercise intensities, based on the predicted maximum heart rate of the participants [60% (low), 70% (moderate) and 80% (vigorous)], were carried out. The duration and work rate for each participant were determined through sub-maximal exercise. Sessions were standardized on the total energy expenditure of the participants per session. One pre-exercise and two post-exercise samples were taken at intervals of 10 and 60 minutes. Results: Pre-exercise mean endostatin was 101 ± 20 ng/dl. Low-intensity exercise insignificantly decreased the plasma endostatin concentration at 10 and 60 minutes (97 ± 20 ng/dl, p = 0.5; 98 ± 23 ng/dl, p = 0.8). However, moderate (p = 0.022, 0.004) and vigorous intensities (p ≤ 0.001, 0.02) increased the endostatin concentrations significantly at both the 10- and 60-minute intervals respectively. The effects were not significantly influenced by gender, exercise mode (walking vs. running), components of exercise (HR, speed, gradient, distance, duration) or metabolism during exercise (VO₂ max, VCO₂, RER, energy expenditure, rate of carbohydrate or fat oxidation). Conclusion: Low-intensity exercise did not influence endostatin concentration. However, moderate- to high-intensity exercise significantly increased endostatin concentration and may have potential benefits.

Keywords: angiogenesis, exercise, endostatin, physical activity

Procedia PDF Downloads 215
10354 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods

Authors: Matthew D. Baffa

Abstract:

Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value between ±2% to ±33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
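
The methodologies referenced above all rest on an energy balance of the form U·(T_in − T_out) = h_c·(T_s − T_out) + ε·σ·(T_s⁴ − T_refl⁴), evaluated per unit wall area from the IR-measured surface and reflected temperatures. The sketch below shows one common form of that calculation; the input values and the convection coefficient are illustrative assumptions, not the study's field measurements, and the three literature methodologies differ mainly in how they estimate the convective term.

```python
# Sketch of one common IR-thermography U-value estimate: conductive heat loss equated to
# convective plus radiative losses at the exterior wall surface. The numbers and the
# convection coefficient are illustrative assumptions, not the study's field data.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_in, t_out, t_surface, t_reflected, emissivity, h_conv):
    """All temperatures in kelvin; returns U in W/(m^2 K)."""
    q_conv = h_conv * (t_surface - t_out)                          # convective loss
    q_rad = emissivity * SIGMA * (t_surface**4 - t_reflected**4)   # radiative loss
    return (q_conv + q_rad) / (t_in - t_out)

# Illustrative winter-night readings (not measured data):
print(u_value(t_in=294.0, t_out=263.0, t_surface=264.0,
              t_reflected=261.0, emissivity=0.90, h_conv=10.0))
```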

Keywords: emissivity, heat loss, infrared thermography, thermal conductance

Procedia PDF Downloads 300
10353 Rising Levels of Greenhouse Gases: Implication for Global Warming in Anambra State South Eastern Nigeria

Authors: Chikwelu Edward Emenike, Ogbuagu Uchenna Fredrick

Abstract:

About 34% of the incoming solar radiant energy reaching the earth is immediately reflected back to space by clouds, chemicals and dust in the atmosphere and by the earth's surface; most of the remaining 66% warms the atmosphere and land. Most of the incoming solar radiation that is not reflected away is degraded into low-quality heat and flows back into space. The rate at which this energy returns to space as low-quality heat is affected by the presence of molecules of greenhouse gases. Gaseous emissions were measured with the aid of a Growen gas analyzer with a digital readout. Measurements of eight parameters were made at twelve selected sample locations in two different seasons within two months. The ambient air quality investigation in Anambra State yielded the overall mean concentrations of gaseous emissions at the twelve (12) locations: NO2 = 0.66 ppm, SO2 = 0.30 ppm, CO = 43.93 ppm, H2S = 2.17 ppm, CH4 = 1.27 ppm, CFC = 1.59 ppb, CO2 = 316.33 ppm, N2O = 302.67 ppb and O3 = 0.37 ppm. These values do not conform to the National Ambient Air Quality Standards (NAAQS) and thus contribute significantly to global warming. Because some of these gaseous emissions (SO2, NO2) are oxidizing agents, they act as irritants that damage delicate tissues in the eyes and respiratory passages. These can impair lung function and trigger cardiovascular problems as the heart tries to compensate for the lack of oxygen by pumping faster and harder. The major sources of air pollution are transportation, industrial processes, stationary fuel combustion and solid waste disposal; thus much is yet to be done in a developing country like Nigeria. Air pollution control using pollution-control equipment to reduce the major conventional pollutants, relocating people who live very close to dumpsites, and the processing and treatment of gases to produce electricity, heat, fuel and various chemical components should be encouraged.

Keywords: ambient air, atmosphere, greenhouse gases, anambra state

Procedia PDF Downloads 412
10352 Bi-Liquid Free Surface Flow Simulation of Liquid Atomization for Bi-Propellant Thrusters

Authors: Junya Kouwa, Shinsuke Matsuno, Chihiro Inoue, Takehiro Himeno, Toshinori Watanabe

Abstract:

Bi-propellant thrusters use impinging jet atomization to atomize liquid fuel and oxidizer. The atomized propellants are mixed and combusted through auto-ignition. It is therefore important for predicting a thruster's performance to simulate the primary atomization phenomenon; in particular, the local mixture ratio can be used as an indicator of thrust performance, so it is useful to evaluate it from numerical simulations. In this research, we propose a numerical method that accounts for two liquids and their mixture, and implement it in CIP-LSM, a two-phase flow simulation solver that uses the level-set and MARS methods for interfacial tracking and can predict the local mixture ratio distribution downstream of the impingement point. A new parameter, beta, defined as the volume fraction of one liquid in the mixed liquid within a cell, is introduced, and the solver calculates the advection of beta and the inflow and outflow fluxes of beta for each cell. To validate the solver, we conducted a simple experiment and reproduced it with the solver. The results show that the solver predicts the penetration length of a liquid jet correctly, confirming that it can simulate the mixing of liquids. We then apply the solver to the numerical simulation of impinging jet atomization. The resulting inclination angle of the fan after impingement under the bi-liquid condition agrees reasonably with the theoretical value, and the mixing of the liquids is reproduced in this result. Furthermore, the simulation results clarify that the injection condition drastically affects the atomization process and the local mixture ratio distribution downstream.
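
To make the role of beta concrete, the sketch below advects a volume fraction through a one-dimensional row of cells with a first-order upwind flux update; it is a schematic illustration of a flux-based transport of beta only, not CIP-LSM's actual CIP/MARS discretisation.

```python
# Schematic 1-D advection of a liquid volume fraction beta with a first-order upwind
# scheme. Illustration of the flux-based update only; not CIP-LSM's actual method.
import numpy as np

nx, dx, dt, u = 200, 1.0e-3, 5.0e-4, 1.0        # cells, cell size [m], step [s], velocity [m/s]
beta = np.zeros(nx)
beta[20:60] = 1.0                               # slug of liquid 1 carried in liquid 2

cfl = u * dt / dx
assert cfl <= 1.0, "upwind scheme requires CFL <= 1"

for _ in range(100):
    flux = u * beta                             # upwind face flux (u > 0: take upstream cell)
    beta[1:] -= dt / dx * (flux[1:] - flux[:-1])
    beta[0] = 0.0                               # inflow of pure liquid 2

print("total liquid-1 volume fraction:", beta.sum() * dx)
```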

Keywords: bi-propellant thrusters, CIP-LSM, free-surface flow simulation, impinging jet atomization

Procedia PDF Downloads 272
10351 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria. The most important performance criteria are the accuracy of skew angle detection, the range of skew angles for detection, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angular step size is; a finer step consequently consumes more time and memory space for increased accuracy, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm presents a solution to the conflict between memory space, running time and accuracy. Our algorithm starts with a first angle estimate accurate to zero decimal places using the standard Hough Transform algorithm, achieving minimal running time and space but lacking accuracy. To increase accuracy, supposing the estimated angle found using the basic Hough algorithm is x degrees, we then rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The procedure of our skew estimation and correction algorithm for text images is implemented using MATLAB. The memory space estimation and processing time are also tabulated, assuming a skew angle within 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
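
A hedged sketch of the coarse-to-fine idea described above is given below (the paper's own implementation is in MATLAB; this version uses Python with scikit-image for illustration): estimate the dominant line angle with a 1° step, then re-run the Hough transform over a narrow window around that estimate with a 0.1° step, and so on. The synthetic test image and numerical choices are illustrative assumptions.

```python
# Coarse-to-fine Hough-transform skew estimation sketch (illustrative; the paper's
# implementation is in MATLAB). Requires scikit-image.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks, rotate

def estimate_skew(binary_image, max_skew=45.0, decimals=2):
    """Coarse-to-fine estimate of the dominant text-line angle, one decimal per pass."""
    centre, half_span, step = 90.0, max_skew, 1.0    # horizontal lines have a ~90 deg normal
    for _ in range(decimals + 1):
        thetas = np.deg2rad(np.arange(centre - half_span, centre + half_span + step, step))
        accumulator, angles, dists = hough_line(binary_image, theta=thetas)
        _, best_angles, _ = hough_line_peaks(accumulator, angles, dists, num_peaks=1)
        centre = np.rad2deg(best_angles[0])
        half_span, step = step, step / 10.0          # narrow the window, refine the step
    return abs(90.0 - centre)                        # skew magnitude from horizontal, degrees

# Synthetic near-horizontal "text line", rotated by 3.4 degrees (illustrative only).
image = np.zeros((200, 200), dtype=float)
image[100, 20:180] = 1.0
image = rotate(image, angle=3.4, order=0) > 0.5
print("estimated skew (degrees):", estimate_skew(image))
```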

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 146
10350 Count of Trees in East Africa with Deep Learning

Authors: Nubwimana Rachel, Mugabowindekwe Maurice

Abstract:

Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques; deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and UNET, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques, including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, UNET demonstrated the best performance with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.

Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization

Procedia PDF Downloads 49
10349 Analysis of Flow Dynamics of Heated and Cooled Pylon Upstream to the Cavity past Supersonic Flow with Wall Heating and Cooling

Authors: Vishnu Asokan, Zaid M. Paloba

Abstract:

Flow over cavities is an important area of research due to the significant change in flow physics caused by the cavity aspect ratio, the free stream Mach number and the nature of the upstream boundary layer approaching the cavity leading edge. Cavity flow finds application in aircraft wheel wells, weapons bays, the combustion chambers of scramjet engines, etc. These flows are highly unsteady, compressible and turbulent, and they involve mass entrainment coupled with acoustic phenomena. The variation of flow dynamics in an angled cavity with a heated and cooled pylon upstream of the cavity, with spatial combinations of heat flux addition to and removal from the wall, is studied numerically. The goal of the study is to investigate the effect of energy addition and removal at the cavity walls on pylon-cavity flow dynamics. Preliminary steady-state numerical simulations of inclined cavities with heat addition have shown that wall pressure profiles, as well as the recirculation, are influenced by heat transfer to the compressible fluid medium. Such hybrid control of cavity flow dynamics, in the form of heat transfer and pylon geometry, can open up greater opportunities for enhancing the mixing and flame-holding requirements of supersonic combustors. The addition of a pylon upstream of the cavity reduces the acoustic oscillations emanating from the geometry. A numerical unsteady analysis of supersonic flow past cavities exposed to cavity wall heating and cooling, with a heated and cooled pylon, helps to give a clear idea of the oscillation suppression in the cavity. A cavity of L/D = 4 and aft wall angle of 22 degrees, with an upstream pylon of h/D = 1.5 mm and a wall angle of 29 degrees, exposed to a supersonic flow of Mach number 2 and heat fluxes of 40 W/cm² and -40 W/cm², is modelled for the above study. In the preliminary study, the domain is modelled and validated numerically with the SST k-ω turbulence model using an HLLC implicit scheme. Both qualitative and quantitative flow data are extracted and analyzed using advanced CFD tools. Flow visualization is done using the numerical Schlieren method, as the fluid medium gives the density variation. The heat flux addition to the wall increases the secondary vortex size of the cavity, and removal of energy leads to a reduction in vortex size. The flow field turbulence seems to increase at higher heat flux. The shear layer thickness increases as the heat flux increases. The steady-state analysis of wall pressure shows that there is variation in wall pressure as the heat flux increases. A shift in the frequency of the unsteady wall pressure is an interesting observation in this study. The time-averaged skin friction seems to reduce at higher heat flux due to the variation in the viscosity of the fluid inside the cavity.

Keywords: energy addition, frequency shift, Numerical Schlieren, shear layer, vortex evolution

Procedia PDF Downloads 134
10348 Evidence on the Nature and Extent of Fall in Oil Prices on the Financial Performance of Listed Companies: A Ratio Analysis Case Study of the Insurance Sector in the UAE

Authors: Pallavi Kishore, Mariam Aslam

Abstract:

The sharp decline in oil prices that started in 2014 affected most economies in the world either positively or negatively. In some economies, particularly the oil-exporting countries, the effects were felt immediately. The Gulf Cooperation Council (GCC henceforth) countries are oil- and gas-dependent, with the largest oil reserves in the world. The UAE (United Arab Emirates) has been striving to diversify away from oil and expects higher non-oil growth in 2018. These two factors, falling oil prices and the economy strategizing away from oil dependence, make a compelling case to study the financial performance of various sectors in the economy. Among other sectors, the insurance sector is widely recognized as an important indicator of the health of the economy. An expanding population, a surge in construction and infrastructure, increased life expectancy, and greater expenditure on automobiles and other luxury goods translate into a booming insurance sector. A slow-down of the insurance sector, on the other hand, may indicate a general slow-down in the economy. Therefore, a study of the insurance sector will help understand the general state of the current economy. This study involves calculations and comparisons of ratios pre and post the fall in oil prices in the insurance sector in the UAE. A sample of 33 companies listed on the official stock exchanges of the UAE, the Dubai Financial Market and the Abu Dhabi Stock Exchange, was collected, and empirical analysis was employed to study the financial performance pre and post the fall in oil prices. Ratios were calculated in 5 categories: profitability, liquidity, leverage, efficiency, and investment. The means pre- and post-fall are compared to conclude that the profitability ratios, including ROSF (Return on Shareholder Funds), ROCE (Return on Capital Employed) and NPM (Net Profit Margin), have all taken a hit. Parametric tests, including a paired t-test, conclude that while the fall in profitability ratios is statistically significant, the other ratios have been quite stable in the period. The efficiency, liquidity, gearing and investment ratios have not been severely affected by the fall in oil prices. This may be due to the implementation of stronger regulatory policies and is a testimony to the diversification into the non-oil economy. The regulatory authorities can use the findings of this study to ensure transparency in revealing financial information to the public and to employ policies that will help further the health of the economy. The study will also help identify which areas within the sector could benefit from more regulation.

Keywords: UAE, insurance sector, ratio analysis, oil price, profitability, liquidity, gearing, investment, efficiency

Procedia PDF Downloads 234
10347 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning that ultimately improve the performance of the controlled actuators. There is a vast range of options of compensation algorithms that could be used, although in the industry, most controllers used are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated on its two most common architectures: PID position form (1 DOF), and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: discrete state observer for non-measurable variables tracking, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The compared control systems’ performance is evaluated through simulations in the Simulink platform, in which it is attempted to model accurately each of the system’s hardware components. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, it is considered that the accurate tracking of the reference signal for a position control system is particularly important because of the frequency and the suddenness in which the control signal could change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected, ensuring reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, due to the nature and the advantage which state space provides for modelling MIMO, it is expected that such controllers evince ease of tuning for disturbance rejection, assuming that the designer of such controllers is experienced. An in-depth multi-dimensional analysis of preliminary research results indicate that state feedback control method is more satisfactory, but PID control method exhibits easier implementation in most control applications.
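
The PID position (absolute) form evaluated above computes the full control signal from the current, accumulated and differenced error at every sample. A minimal discrete sketch is given below; the gains, sampling time and first-order test plant are illustrative assumptions, not the tuned values or the Simulink model used in the study.

```python
# Minimal discrete PID, position (absolute) form:
# u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k]-e[k-1])/Ts.
# Gains, sampling time and the first-order test plant are illustrative assumptions only.
class DiscretePID:
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts
        derivative = (error - self.prev_error) / self.ts
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Crude closed-loop test against a discretised first-order plant y[k+1] = a*y[k] + b*u[k].
pid, y, a, b = DiscretePID(kp=2.0, ki=1.0, kd=0.05, ts=0.01), 0.0, 0.98, 0.02
for k in range(1000):
    u = pid.update(setpoint=1.0, measurement=y)
    y = a * y + b * u
print("output after 10 s (should approach the setpoint of 1.0):", round(y, 3))
```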

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 256
10346 Exploring the Contribution of Dynamic Capabilities to a Firm's Value Creation: The Role of Competitive Strategy

Authors: Mona Rashidirad, Hamid Salimian

Abstract:

Dynamic capabilities, as the most considerable capabilities of firms in the current fast-moving economy may not be sufficient for performance improvement, but their contribution to performance is undeniable. While much of the extant literature investigates the impact of dynamic capabilities on organisational performance, little attention has been devoted to understand whether and how dynamic capabilities create value. Dynamic capabilities as the mirror of competitive strategies should enable firms to search and seize new ideas, integrate and coordinate the firm’s resources and capabilities in order to create value. A careful investigation to the existing knowledge base remains us puzzled regarding the relationship among competitive strategies, dynamic capabilities and value creation. This study thus attempts to fill in this gap by empirically investigating the impact of dynamic capabilities on value creation and the mediating impact of competitive strategy on this relationship. We aim to contribute to dynamic capability view (DCV), in both theoretical and empirical senses, by exploring the impact of dynamic capabilities on firms’ value creation and whether competitive strategy can play any role in strengthening/weakening this relationship. Using a sample of 491 firms in the UK telecommunications market, the results demonstrate that dynamic sensing, learning, integrating and coordinating capabilities play a significant role in firm’s value creation, and competitive strategy mediates the impact of dynamic capabilities on value creation. Adopting DCV, this study investigates whether the value generating from dynamic capabilities depends on firms’ competitive strategy. This study argues a firm’s competitive strategy can mediate its ability to derive value from its dynamic capabilities and it explains the extent a firm’s competitive strategy may influence its value generation. The results of the dynamic capabilities-value relationships support our expectations and justify the non-financial value added of the four dynamic capability processes in a highly turbulent market, such as UK telecommunications. Our analytical findings of the relationship among dynamic capabilities, competitive strategy and value creation provide further evidence of the undeniable role of competitive strategy in deriving value from dynamic capabilities. The results reinforce the argument for the need to consider the mediating impact of organisational contextual factors, such as firm’s competitive strategy to examine how they interact with dynamic capabilities to deliver value. The findings of this study provide significant contributions to theory. Unlike some previous studies which conceptualise dynamic capabilities as a unidimensional construct, this study demonstrates the benefits of understanding the details of the link among the four types of dynamic capabilities, competitive strategy and value creation. In terms of contributions to managerial practices, this research draws attention to the importance of competitive strategy in conjunction with development and deployment of dynamic capabilities to create value. Managers are now equipped with solid empirical evidence which explains why DCV has become essential to firms in today’s business world.

Keywords: dynamic capabilities, resource based theory, value creation, competitive strategy

Procedia PDF Downloads 234
10345 Study of Clutch Cable Architecture and Its Influence in Efficiency of Mechanical Cable Release System

Authors: M. Devamanalan, K. Pothiraj, M. Sudhan

Abstract:

In a competitive market like India, there is a high demand for equal contributions to the performance and durability of any system. In general, a vehicle has multiple sub-systems such as the powertrain, BIW, brakes, actuations, suspension and seats. To withstand the market challenges, the contribution of each sub-system is vital. The malfunction of any one sub-system will directly impact the performance of the overall system, which leads to dissatisfaction for the end user. The powertrain system consists of several sub-systems, of which the clutch is one of the prime sub-systems in MT vehicles, assisting smoother gear shifts through proper clutch disengagement and engagement. In general, most vehicles have a mechanical, semi-hydraulic or fully hydraulic clutch release system, whereas in small commercial vehicles (SCV) the most widely used clutch release system is the mechanical cable release system, due to its lower cost and functional requirements. The major bottlenecks in the cable-type clutch release system are an increase in pedal effort due to increasing hysteresis, and hard gear shifting due to efficiency loss and cable slackness over the mileage accumulation of the vehicle. This study mainly focuses on how the change in efficiency and hysteresis over the mileage of the vehicle occurs because of the design architecture of the outer and inner cable. The study involves several cable design validation results at the vehicle level and rig level, through defined cable routing and test procedures. Results are compared to evaluate the suitable cable design architecture based on better efficiency and lower hysteresis parameters at the start and end of the validation.

Keywords: clutch, clutch cable, efficiency, architecture, cable routing

Procedia PDF Downloads 104
10344 Green Synthesis (Using Environment Friendly Bacteria) of Silver-Nanoparticles and Their Application as Drug Delivery Agents

Authors: Sutapa Mondal Roy, Suban K. Sahoo

Abstract:

The primary aim of this work is to synthesize silver nanoparticles (AgNPs) through environmentally benign routes to avoid any chemical-toxicity-related undesired side effects. The nanoparticles were stabilized with the drug ciprofloxacin (Cp) and were studied for their effectiveness as a drug delivery agent. Targeted drug delivery improves the therapeutic potential of drugs at the diseased site as well as lowering the overall dose and undesired side effects. The small size of nanoparticles greatly facilitates the transport of active agents (drugs) across biological membranes and allows them to pass through the smallest capillaries in the body, which are 5-6 μm in diameter, and can minimize possible undesired side effects. AgNPs are non-toxic, inert and stable, and have a high binding capacity, and thus can be considered biomaterials. AgNPs were synthesized from the nutrient broth supernatant after culture of the environment-friendly bacterium Bacillus subtilis. The AgNPs were found to show the surface plasmon resonance (SPR) band at 425 nm. Formation of the Cp-capped Ag nanoparticles was complete within 30 minutes, which was confirmed by absorbance spectroscopy. The physico-chemical nature of the AgNPs-Cp system was confirmed by Dynamic Light Scattering (DLS), Transmission Electron Microscopy (TEM), etc. The AgNPs-Cp system size was found to be in the range of 30-40 nm. To monitor the kinetics of drug release from the surface of the nanoparticles, the release of Cp was carried out by careful dialysis, keeping the AgNPs-Cp system inside the dialysis bag at pH 7.4 over time. The drug release was almost complete after 30 hrs. To understand the AgNPs-Cp system better during the drug delivery process, a thorough theoretical investigation has been performed employing Density Functional Theory. Electronic charge transfer, electron density, binding energy as well as thermodynamic properties like enthalpy, entropy, Gibbs free energy etc. have been predicted. The electronic and thermodynamic properties, governed by the AgNPs-Cp interactions, indicate that the formation of the AgNPs-Cp system is exothermic, i.e. a thermodynamically favorable process. The binding energy and charge transfer analysis implies optimum stability of the AgNPs-Cp system. Thus, the synthesized Cp-Ag nanoparticles can be effectively used for biological purposes due to the environmentally benign route of synthesis, which is clean, biocompatible, non-toxic, safe, cost-effective, sustainable and eco-friendly. Cp-AgNPs as biomaterials can be successfully used in drug delivery procedures due to the slow release of the drug from the nanoparticles over a considerable period of time. The kinetics of the drug release show that this drug-nanoparticle assembly can be effectively used as a potential tool for therapeutic applications. The ease of the synthetic procedure, the lack of possible chemical toxicity and the biological activity, along with excellent performance as a drug delivery agent, open up the vista of using nanoparticles as effective and successful drug delivery agents in modern times.

Keywords: silver nanoparticles, ciprofloxacin, density functional theory, drug delivery

Procedia PDF Downloads 375
10343 Automated Building Internal Layout Design Incorporating Post-Earthquake Evacuation Considerations

Authors: Sajjad Hassanpour, Vicente A. González, Yang Zou, Jiamou Liu

Abstract:

Earthquakes pose a significant threat to both structural and non-structural elements in buildings, putting human lives at risk. Effective post-earthquake evacuation is critical for ensuring the safety of building occupants. However, current design practices often neglect the integration of post-earthquake evacuation considerations into the early-stage architectural design process. To address this gap, this paper presents a novel automated internal architectural layout generation tool that optimizes post-earthquake evacuation performance. The tool takes an initial plain floor plan as input, along with specific requirements from the user/architect, such as minimum room dimensions, corridor width, and exit lengths. Based on these inputs, firstly, the tool randomly generates different architectural layouts. Secondly, the human post-earthquake evacuation behaviour will be thoroughly assessed for each generated layout using the advanced Agent-Based Building Earthquake Evacuation Simulation (AB2E2S) model. The AB2E2S prototype is a post-earthquake evacuation simulation tool that incorporates variables related to earthquake intensity, architectural layout, and human factors. It leverages a hierarchical agent-based simulation approach, incorporating reinforcement learning to mimic human behaviour during evacuation. The model evaluates different layout options and provides feedback on evacuation flow, time, and possible casualties due to earthquake non-structural damage. By integrating the AB2E2S model into the automated layout generation tool, architects and designers can obtain optimized architectural layouts that prioritize post-earthquake evacuation performance. Through the use of the tool, architects and designers can explore various design alternatives, considering different minimum room requirements, corridor widths, and exit lengths. This approach ensures that evacuation considerations are embedded in the early stages of the design process. In conclusion, this research presents an innovative automated internal architectural layout generation tool that integrates post-earthquake evacuation simulation. By incorporating evacuation considerations into the early-stage design process, architects and designers can optimize building layouts for improved post-earthquake evacuation performance. This tool empowers professionals to create resilient designs that prioritize the safety of building occupants in the face of seismic events.

Keywords: agent-based simulation, automation in design, architectural layout, post-earthquake evacuation behavior

Procedia PDF Downloads 87
10342 High-Performance Thin-layer Chromatography (HPTLC) Analysis of Multi-Ingredient Traditional Chinese Medicine Supplement

Authors: Martin Cai, Khadijah B. Hashim, Leng Leo, Edmund F. Tian

Abstract:

Analysis of traditional Chinese medicinal (TCM) supplements has always been a laborious task, particularly in the case of multi‐ingredient formulations. Traditionally, herbal extracts are analysed using one or few markers compounds. In the recent years, however, pharmaceutical companies are introducing health supplements of TCM active ingredients to cater to the needs of consumers in the fast-paced society in this age. As such, new problems arise in the aspects of composition identification as well as quality analysis. In most cases of products or supplements formulated with multiple TCM herbs, the chemical composition, and nature of each raw material differs greatly from the others in the formulation. This results in a requirement for individual analytical processes in order to identify the marker compounds in the various botanicals. Thin-layer Chromatography (TLC) is a simple, cost effective, yet well-regarded method for the analysis of natural products, both as a Pharmacopeia-approved method for identification and authentication of herbs, and a great analytical tool for the discovery of chemical compositions in herbal extracts. Recent technical advances introduced High-Performance TLC (HPTLC) where, with the help of automated equipment and improvements on the chromatographic materials, both the quality and reproducibility are greatly improved, allowing for highly standardised analysis with greater details. Here we report an industrial consultancy project with ONI Global Pte Ltd for the analysis of LAC Liver Protector, a TCM formulation aimed at improving liver health. The aim of this study was to identify 4 key components of the supplement using HPTLC, following protocols derived from Chinese Pharmacopeia standards. By comparing the TLC profiles of the supplement to the extracts of the herbs reported in the label, this project proposes a simple and cost-effective analysis of the presence of the 4 marker compounds in the multi‐ingredient formulation by using 4 different HPTLC methods. With the increasing trend of small and medium-sized enterprises (SMEs) bringing natural products and health supplements into the market, it is crucial that the qualities of both raw materials and end products be well-assured for the protection of consumers. With the technology of HPTLC, science can be incorporated to help SMEs with their quality control, thereby ensuring product quality.

Keywords: traditional Chinese medicine supplement, high performance thin layer chromatography, active ingredients, product quality

Procedia PDF Downloads 266
10341 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach

Authors: Eric Mxolisi Mkhondo

Abstract:

Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly due to imperfections of components, which are invisible to the naked eye. They also tend to cause unsatisfactory performance during operation due to deformation caused by environmental conditions. One of the effective tools to manage the deviations and interaction of parts in a system is tolerance analysis. This is a quantitative tool for predicting the tolerance variations which are defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that the deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts due to the effect of environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system's specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower limits) that vary the components' geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to the effects of operating temperatures. The method is used to evaluate the nominal condition and the worst-case conditions at the maximum and minimum dimensions of the assembled components. These three conditions are evaluated under specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system is used to illustrate the effectiveness of the methodology.
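
A hedged illustration of the worst-case portion of such an approach is sketched below: each dimension in a stack is evaluated at its nominal, maximum and minimum limits, with a linear thermal-expansion correction α·L·ΔT applied at each operating temperature. The dimensions, tolerances and materials are placeholders, not the zoom-lens case-study values.

```python
# Worst-case tolerance stack-up with linear thermal expansion (alpha * L * dT),
# evaluated at nominal, all-maximum and all-minimum dimensions, per the three
# conditions described above. Placeholder data, not the zoom-lens case study.
REFERENCE_T = 20.0                       # reference (assembly) temperature, deg C
OPERATING_T = [-40, -18, 4, 26, 48, 70]  # operating temperatures, deg C

# Each stack element: (nominal length mm, +/- tolerance mm, expansion coeff 1/K, direction)
stack = [
    (12.000, 0.020, 23.0e-6, +1),   # aluminium spacer
    ( 4.500, 0.010, 11.5e-6, +1),   # steel retainer
    (16.400, 0.030, 23.0e-6, -1),   # aluminium housing bore (subtracts from the gap)
]

def gap(temperature, which="nominal"):
    total = 0.0
    for nominal, tol, alpha, sign in stack:
        length = {"nominal": nominal, "max": nominal + tol, "min": nominal - tol}[which]
        length *= 1.0 + alpha * (temperature - REFERENCE_T)   # thermal expansion
        total += sign * length
    return total

for t in OPERATING_T:
    print(f"{t:>4} degC  gap: nominal={gap(t):+.4f}  max={gap(t, 'max'):+.4f}  min={gap(t, 'min'):+.4f} mm")
```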

Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism

Procedia PDF Downloads 156
10340 Evolving Urban Landscapes: Smart Cities and Sustainable Futures

Authors: Mehrzad Soltani, Pegah Rezaei

Abstract:

In response to the escalating challenges posed by resource scarcity, urban congestion, and the dearth of green spaces, contemporary urban areas have undergone a remarkable transformation into smart cities. This evolution necessitates a strategic and forward-thinking approach to urban development, with the primary objective of diminishing and eventually eradicating dependence on non-renewable energy sources. This steadfast commitment to sustainable development is geared toward the continual enhancement of our global urban milieu, ensuring a healthier and more prosperous environment for forthcoming generations. This transformative vision has been meticulously shaped by an extensive research framework, incorporating in-depth field studies and investigations conducted at both neighborhood and city levels. Our holistic strategy extends its purview to encompass major cities and states, advocating for the realization of exceptional development firmly rooted in the principles of sustainable intelligence. At its core, this approach places a paramount emphasis on stringent pollution control measures, concurrently safeguarding ecological equilibrium and regional cohesion. Central to the realization of this vision is the widespread adoption of environmentally friendly materials and components, championing the cultivation of plant life and harmonious green spaces, and the seamless integration of intelligent lighting and irrigation systems. These systems, including solar panels and solar energy utilization, are deployed wherever feasible, effectively meeting the essential lighting and irrigation needs of these dynamic urban ecosystems. Overall, the transformation of urban areas into smart cities necessitates a holistic and innovative approach to urban development. By actively embracing sustainable intelligence and adhering to strict environmental standards, these cities pave the way for a brighter and more sustainable future, one that is marked by resilient, thriving, and eco-conscious urban communities.

Keywords: smart city, green urban, sustainability, urban management

Procedia PDF Downloads 63
10339 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a partially adaptive GSC with fewer adaptive degrees of freedom and a faster adaptive response has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal-blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
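The GSC structure referred to above can be sketched in a few lines of NumPy: a quiescent weight vector built from the presumed steering vector, a signal-blocking matrix spanning its orthogonal complement, and an LMS-adapted lower branch. This is a minimal illustration assuming a narrowband uniform linear array; the array size, angles, and step size are arbitrary, and the paper's iterative re-estimation of the desired-signal direction and its subspace projection are not reproduced here.

```python
# Minimal sketch of a generalized sidelobe canceller (GSC) for a uniform
# linear array, with an LMS-adapted lower branch. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
M = 8                      # number of sensors
d = 0.5                    # element spacing in wavelengths
theta_presumed = 0.0       # presumed DOA of the desired signal (deg)
theta_interf = 40.0        # interferer DOA (deg)
N = 5000                   # snapshots

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

a0 = steering(theta_presumed)
w_q = a0 / (a0.conj() @ a0)          # quiescent weight: distortionless toward a0

# Signal-blocking matrix B: orthonormal basis of the subspace orthogonal to a0
B = np.linalg.svd(np.eye(M) - np.outer(a0, a0.conj()) / (a0.conj() @ a0))[0][:, :M - 1]

# Simulated snapshots: desired + interference + noise
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
i = 3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steering(0.0), s) + np.outer(steering(theta_interf), i) + n

# LMS adaptation of the adaptive weights in the lower (blocked) branch
w_a = np.zeros(M - 1, dtype=complex)
mu = 1e-3
for t in range(N):
    d_t = w_q.conj() @ X[:, t]           # upper-branch (quiescent) output
    z_t = B.conj().T @ X[:, t]           # blocked data (desired signal removed)
    y_t = d_t - w_a.conj() @ z_t         # GSC output
    w_a = w_a + mu * z_t * np.conj(y_t)  # LMS update minimizing output power

print("final adaptive weights:", np.round(w_a, 3))
```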

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 103
10338 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey revealed that reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly, and laborious. The present work aims to develop a simple, highly sensitive, precise, and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples which can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium, and high quality control concentrations of aminophylline in plasma were within acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast, high-throughput sample analysis of aminophylline in biological samples at low analysis cost. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision, and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
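For readers unfamiliar with the validation arithmetic (linearity, precision as %RSD, extraction recovery), the short sketch below shows the standard calculations on hypothetical peak-area data; only the 0.5-40.0 µg/mL range is taken from the abstract, and the numbers are not the study's data.

```python
# Sketch of standard bioanalytical validation calculations on hypothetical data.
import numpy as np

conc = np.array([0.5, 1, 5, 10, 20, 40])                   # ug/mL calibration levels
area = np.array([0.052, 0.101, 0.498, 1.02, 1.98, 4.05])   # hypothetical peak-area ratios

# Linearity: least-squares fit and coefficient of determination
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# Precision at one QC level: relative standard deviation (%RSD) of replicates
qc_replicates = np.array([9.8, 10.1, 9.9, 10.3, 10.0])      # hypothetical back-calculated conc.
rsd = 100 * qc_replicates.std(ddof=1) / qc_replicates.mean()

# Extraction recovery: extracted response relative to neat (unextracted) response
recovery = 100 * np.mean([0.93, 0.95, 0.92])                # hypothetical response ratios

print(f"slope={slope:.4f}, intercept={intercept:.4f}, r^2={r2:.4f}")
print(f"intra-day precision: {rsd:.1f} %RSD, recovery: {recovery:.1f} %")
```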

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 210
10337 Analyzing the Relationship between Physical Fitness and Academic Achievement in Chinese High School Students

Authors: Juan Li, Hui Tian, Min Wang

Abstract:

In China, under the considerable pressure of the 'Gaokao', the highly competitive college entrance examination, high school teachers and parents often worry that physical activity would take away the students' precious study time and may have a negative impact on academic grades. There has been a tendency to pursue high academic scores at the cost of physical exercise. Therefore, the purpose of this study was to examine the relationship between the physical fitness and academic achievement of Chinese high school students. The participants were 968 grade one (N=457) and grade two (N=511) students with an average age of 16 years from three high schools of different levels in Beijing, China; 479 were boys and 489 were girls. One of the schools is a top high school in China, another is a key high school in Beijing, and the other is an ordinary high school. All analyses were weighted using SAS 9.4 to ensure the representativeness of the sample. The weights were based on 12 strata of schools, sex, and grades. Physical fitness data were collected using the scores of the National Physical Fitness Test, an annual official test administered by the Ministry of Education in China. It includes the 50 m run, sit-and-reach test, standing long jump, 1000 m run (for boys), 800 m run (for girls), pull-ups for 1 minute (for boys), and bent-knee sit-ups for 1 minute (for girls). The test is an overall evaluation of the students' physical health on the major indexes of strength, endurance, flexibility, and cardiorespiratory function. Academic scores were obtained from the three schools with the students' consent. The statistical analysis was conducted with SPSS 24. The independent-samples t-test was used to examine gender group differences, and Spearman's rho bivariate correlation was adopted to test for associations between physical test results and academic performance. Statistical significance was set at p<.05. The study found that girls obtained higher fitness scores than boys (p=.000). The girls' physical fitness test scores were positively associated with the total academic grades (rs=.103, p=.029), English (rs=.096, p=.042), physics (rs=.202, p=.000), and chemistry scores (rs=.131, p=.009). No significant relationship was observed in boys. Cardiorespiratory fitness had a positive association with physics (rs=.196, p=.000) and biology scores (rs=.168, p=.023) in girls, and with the English score in boys (rs=.104, p=.029). A possible explanation for the stronger association between physical fitness and academic achievement in girls than in boys is that girls showed greater motivation to achieve high scores in both academic and fitness tests. Being more driven by test results, girls probably tended to invest more time and energy in training for the fitness test. Higher fitness levels were generally associated with an academic benefit among girls in Chinese high schools. Therefore, physical fitness needs to be given greater emphasis among Chinese adolescents, and gender differences need to be taken into consideration.
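The two statistical tests named in the abstract can be reproduced in outline with the sketch below, which runs an independent-samples t-test and Spearman's rho on synthetic data. It does not include the survey weighting, and the simulated values are not the study's data; the group sizes match those reported in the abstract.

```python
# Sketch of the statistical tests named in the abstract, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic fitness scores for boys and girls (hypothetical distributions)
fitness_boys = rng.normal(70, 10, size=479)
fitness_girls = rng.normal(75, 9, size=489)

# Gender difference in fitness: independent-samples t-test
t_stat, p_val = stats.ttest_ind(fitness_girls, fitness_boys, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Association between fitness and total academic grade (girls), Spearman's rho
grades_girls = 0.1 * fitness_girls + rng.normal(0, 8, size=489)   # synthetic grades
rho, p_rho = stats.spearmanr(fitness_girls, grades_girls)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.4f}")
```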

Keywords: physical fitness, adolescents, academic achievement, high school

Procedia PDF Downloads 118
10336 Modelling and Simulation of Aero-Elastic Vibrations Using System Dynamic Approach

Authors: Cosmas Pandit Pagwiwoko, Ammar Khaled Abdelaziz Abdelsamia

Abstract:

Flutter, as a phenomenon of flow-induced and self-excited vibration, has to be recognized considering its harmful effect on the structure, especially at the aircraft design stage. The phenomenon is also important for wind energy harvesters based on fluttering surfaces because of their effective operational velocity range. This multi-physics occurrence can be described by two governing equations, one for the fluid and one for the structure, solved simultaneously while respecting certain boundary conditions on the surface of the body. In this work, the equations are resolved separately by two distinct solvers, one time step at a time in each domain. The modelling and simulation of this flow-structure interaction in ANSYS show the effectiveness of this loosely coupled method in representing the flutter phenomenon; however, the process is time-consuming for design purposes. Therefore, another technique using the same weakly coupled aero-structure formulation is proposed, based on a system dynamics approach. In this technique, the aerodynamic forces are calculated using a singularity function for a range of frequencies and certain natural mode shapes, and are transformed into the time domain by employing a rational fraction approximation model in the Laplace variable. The representation of the structure as a multi-degree-of-freedom system coupled with a transfer function of the aerodynamic forces can then be simulated in the time domain on a block-diagram platform such as MATLAB Simulink. The dynamic response of flutter at a given velocity can be evaluated against another established flutter calculation, the frequency-domain k-method. In that method, an artificial structural damping parameter is inserted into the equation of motion to assure the energy balance between the flow and the vibrating structure. The time-domain simulation is of particular interest as it enables structural non-linear factors to be applied accurately. Experimental tests on a fluttering airfoil in the wind tunnel are also conducted to validate the method.
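As a simple stand-in for the block-diagram time-domain simulation described above, the sketch below integrates a two-degree-of-freedom typical-section aeroelastic model (plunge and pitch). It is illustrative only: the structural and aerodynamic parameters are hypothetical, and quasi-steady lift replaces the rational-fraction approximation of unsteady aerodynamic forces used in the paper.

```python
# Minimal time-domain aeroelastic sketch: 2-DOF typical section (plunge h, pitch alpha)
# with quasi-steady aerodynamics. All numerical values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

# Structural parameters (hypothetical)
m, S_a, I_a = 5.0, 0.25, 0.15        # mass, static unbalance, pitch inertia
k_h, k_a = 2000.0, 300.0             # plunge and pitch stiffness
Ms = np.array([[m, S_a], [S_a, I_a]])
Ks = np.diag([k_h, k_a])

# Aerodynamic parameters (hypothetical)
rho, c, e, CLa = 1.225, 0.3, 0.05, 2 * np.pi   # air density, chord, moment arm, lift slope

def rhs(t, x, U):
    h, a, hd, ad = x
    q_dyn = 0.5 * rho * U**2
    # Quasi-steady lift and pitching moment per unit span
    L = q_dyn * c * CLa * (a + hd / U)
    M_aero = e * L
    F = np.array([-L, M_aero])                       # generalized aerodynamic forces
    acc = np.linalg.solve(Ms, F - Ks @ np.array([h, a]))
    return [hd, ad, acc[0], acc[1]]

# Integrate from a small initial pitch disturbance at two airspeeds
for U in (20.0, 60.0):
    sol = solve_ivp(rhs, (0, 5), [0, 0.02, 0, 0], args=(U,), max_step=1e-3)
    growth = np.abs(sol.y[1, -500:]).max() / np.abs(sol.y[1, :500]).max()
    print(f"U = {U:4.0f} m/s: pitch amplitude ratio (end/start) = {growth:.2f}")
```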

Keywords: flutter, flow-induced vibration, flow-structure interaction, non-linear structure

Procedia PDF Downloads 300
10335 Avoiding Gas Hydrate Problems in Qatar Oil and Gas Industry: Environmentally Friendly Solvents for Gas Hydrate Inhibition

Authors: Nabila Mohamed, Santiago Aparicio, Bahman Tohidi, Mert Atilhan

Abstract:

One of the biggest problems Qatar faces in processing its key natural resource, natural gas, is the frequent blockage of pipelines caused by uncontrolled gas hydrate formation. Several million dollars are spent at the processing site to remove such blockages safely by using chemical inhibitors. We aim to establish a national database that addresses the physical conditions which promote Qatari natural gas to form gas hydrates in the pipelines. Moreover, we aim to design and test novel hydrate inhibitors that are suitable for Qatari natural gas and its processing facilities. From these perspectives, we aim to provide more effective and sustainable reservoir utilization and processing of Qatari natural gas. In this work, we present the initial findings of a QNRF-funded project, which deals with the natural gas hydrate formation characteristics of Qatari-type gas using both experimental (PVTx) and computational (molecular simulation) methods. We present data from two fully automated apparatuses: a gas hydrate autoclave and a rocking cell. Hydrate equilibrium curves, including growth/dissociation conditions, were measured for multi-component gas mixtures representing Qatari-type natural gas, with and without the presence of well-known kinetic and thermodynamic hydrate inhibitors. Ionic liquids were designed and used to test their inhibition performance, and their DFT and molecular modeling simulation results were also obtained and compared with the experimental results. Results showed significant performance of the ionic liquids: at concentrations up to 0.5% by volume, they provided 2 to 4 °C of inhibition at high pressures.

Keywords: gas hydrates, natural gas, ionic liquids, inhibition, thermodynamic inhibitors, kinetic inhibitors

Procedia PDF Downloads 1301
10334 Lipid Extraction from Microbial Cell by Electroporation Technique and Its Influence on Direct Transesterification for Biodiesel Synthesis

Authors: Abu Yousuf, Maksudur Rahman Khan, Ahasanul Karim, Amirul Islam, Minhaj Uddin Monir, Sharmin Sultana, Domenico Pirozzi

Abstract:

Traditional biodiesel feedstocks such as edible or plant oils, animal fats, and waste cooking oil have been replaced by microbial oil in recent biodiesel research. The well-known community of microbial oil producers includes microalgae, oleaginous yeasts, and seaweeds. Conventional transesterification of microbial oil to produce biodiesel is slow, energy-consuming, cost-ineffective, and environmentally unhealthy. The process involves several steps: microbial biomass drying, cell disruption, oil extraction, solvent recovery, oil separation, and transesterification. Therefore, direct transesterification for biodiesel synthesis has been studied in recent years. It combines all the steps in a single reactor and eliminates the biomass drying, oil extraction, and solvent separation steps. It appears to be a cost-effective and faster process, but a number of difficulties need to be solved to make it applicable at large scale. The main challenges are disrupting microbial cells in bulk volume and accelerating the esterification reaction, because the water content of the medium slows the reaction rate. Several methods have been proposed, but none of them is mature enough to implement at large scale. It is still a great challenge to extract the maximum lipid from microbial cells (yeast, fungi, algae) while investing minimum energy. The electroporation technique results in a significant increase in cell conductivity and permeability caused by the application of an external electric field. Electroporation is required to alter the size and structure of the cells to increase their porosity, as well as to disrupt the microbial cell walls within a few seconds so that the intracellular lipid leaks out into the solution. Therefore, incorporating electroporation techniques contributes to the direct transesterification of microbial lipids by increasing the efficiency of the biodiesel production rate.

Keywords: biodiesel, electroporation, microbial lipids, transesterification

Procedia PDF Downloads 269