Search results for: optimization/inverse mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4776

786 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment

Authors: R. Sharma, S. Kumar, C. Sharma

Abstract:

A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these chlorophenolic compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove, and are only partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds have been carried out in the primary and secondary clarified effluents from the paper mill by GC-MS. Twenty-six chlorophenolic compounds have been identified and estimated in paper mill waste waters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil waste water. Electrochemical treatment using a less expensive anode material, stainless steel electrodes, has been tried to study their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer and stainless steel (316 L) electrodes. The operating conditions were optimized and treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of chlorophenolic compounds are removed during 2 h of electrochemical treatment from primary and secondary clarified effluent, respectively. Further, there is a reduction of 65.1%, 60% and 92.6% in COD, AOX and color, respectively, for primary clarified effluent, and of 83.8%, 75.9% and 96.8%, respectively, for secondary clarified effluent. EC treatment has also been found to significantly increase the biodegradability index of the wastewater because of the conversion of the non-biodegradable fraction into a biodegradable one. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX and other recalcitrant organic matter present in paper mill waste water.

Keywords: chlorophenolics, effluent, electrochemical treatment, wastewater

Procedia PDF Downloads 387
785 Maximizing Profit Using Optimal Control by Exploiting the Flexibility in Thermal Power Plants

Authors: Daud Mustafa Minhas, Raja Rehan Khalid, Georg Frey

Abstract:

Next generation power systems are equipped with abundantly available renewable energy sources (RES). During periods of low-cost RES operation, the price of electricity drops significantly and sometimes becomes negative. It is therefore tempting simply not to operate traditional power plants (e.g. coal power plants) in order to avoid losses. In fact, this is not a cost-effective solution, because these power plants incur shutdown and startup costs. Moreover, they require a certain time to shut down and also need a sufficient pause before starting up again, increasing inefficiency in the whole power network. Hence, there is always a trade-off between avoiding negative electricity prices and the startup costs of power plants. To exploit this trade-off and increase the profit of a power plant, two main contributions are made: 1) introducing retrofit technology for a state-of-the-art coal power plant; 2) proposing an optimal control strategy for a power plant by exploiting different flexibility features. These flexibility features include improving the ramp rate of the power plant, reducing startup time and lowering the minimum load. The control strategy is solved as a mixed integer linear program (MILP), ensuring an optimal solution to the profit maximization problem. Extensive comparisons are made between pre- and post-retrofit coal power plants with the same efficiencies under different electricity price scenarios. The study concludes that if the power plant must remain in the market (providing services), more flexibility translates into a direct economic advantage for the plant operator.
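
To make the structure of such a profit-maximization problem concrete, here is a minimal MILP sketch in Python with PuLP, covering on/off status, startup costs, minimum load and ramp limits. It is not the authors' model; all prices, costs and plant parameters are invented placeholders.

```python
# Minimal profit-maximizing unit-commitment sketch; illustrative values only.
import pulp

T = list(range(6))                       # hours
price = [40, -5, 10, 60, 55, -2]         # electricity price (EUR/MWh), assumed
fuel_cost = 25                           # marginal cost (EUR/MWh), assumed
startup_cost = 500                       # EUR per start, assumed
p_min, p_max = 50, 200                   # MW: minimum load and capacity
ramp = 80                                # MW/h ramp limit (a flexibility lever)

m = pulp.LpProblem("profit_max", pulp.LpMaximize)
p = pulp.LpVariable.dicts("p", T, lowBound=0, upBound=p_max)
u = pulp.LpVariable.dicts("on", T, cat="Binary")      # unit on/off
s = pulp.LpVariable.dicts("start", T, cat="Binary")   # startup indicator

# objective: energy revenue minus fuel and startup costs
m += pulp.lpSum((price[t] - fuel_cost) * p[t] - startup_cost * s[t] for t in T)
for t in T:
    m += p[t] >= p_min * u[t]            # minimum load when on
    m += p[t] <= p_max * u[t]            # zero output when off
    if t > 0:
        m += s[t] >= u[t] - u[t - 1]     # startup detection
        m += p[t] - p[t - 1] <= ramp     # ramp-up limit
        m += p[t - 1] - p[t] <= ramp     # ramp-down limit

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, pulp.value(u[t]), pulp.value(p[t])) for t in T])
```

Tightening the ramp limit or raising the minimum load in this toy model shows the effect the abstract describes: a less flexible plant is forced to keep producing through negative-price hours or to pay repeated startup costs.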

Keywords: discrete optimization, power plant flexibility, profit maximization, unit commitment model

Procedia PDF Downloads 143
784 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies aim to revolutionize asset management relative to classical cap-weighted indices. These strategies allow better control of a portfolio's risk factors and an optimized asset allocation, by taking into account specific risks or by seeking to generate alpha by outperforming the index ('Beta'). Among the many strategies in independent use, this paper focuses on four: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been proven under constraints such as momentum or market phenomena, suggesting a reconsideration of cap-weighting. To further increase strategy return efficiency, it is proposed here to compare their strengths and weaknesses inside time intervals corresponding to specific identifiable market phases, in order to define adapted strategies for pre-specified situations. Results are presented as performance curves of different combinations compared to a benchmark. If a combination outperforms the applicable benchmark in well-defined actual market conditions, it is preferred. It is shown that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to available market data, provide very interesting optimal results, with higher return performance and lower risk. Such combinations have not been fully exploited yet, which justifies the present approach aimed at identifying the relevant elements characterizing them.
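
As a small illustration of one of the four strategies, the following sketch computes Minimum Variance Portfolio weights by quadratic optimization under long-only, fully-invested constraints; the covariance matrix is an invented placeholder.

```python
# Minimum Variance Portfolio sketch: minimize w' S w subject to
# sum(w) = 1 and w >= 0. Covariance matrix S is assumed, not real data.
import numpy as np
from scipy.optimize import minimize

S = np.array([[0.040, 0.006, 0.002],
              [0.006, 0.090, 0.010],
              [0.002, 0.010, 0.160]])   # assumed asset covariance

n = S.shape[0]
w0 = np.full(n, 1.0 / n)                # start from equal weights
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
bounds = [(0.0, 1.0)] * n               # long-only

res = minimize(lambda w: w @ S @ w, w0, bounds=bounds, constraints=cons)
print("min-variance weights:", np.round(res.x, 3))
```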

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 340
783 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography (MCG) signals to detect cardiac electrical function is a technology developed in recent years. The MCG signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the MCG signal, which is buried in noise, is difficult, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is used to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising MCG signals is proposed to improve the denoising precision. The improvement consists of three parts. First, high-order TV is applied to reduce the staircase effect, with the corresponding second-order derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal's peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
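
The following is a minimal sketch of the core idea, denoising by majorization-minimization with a second-order (high-order TV) difference operator. The adaptive, peak-aware constraint parameters of the paper are not reproduced, and the test signal and regularization weight are synthetic placeholders.

```python
# High-order TV denoising via majorization-minimization (MM):
# minimize 0.5||y - x||^2 + lam * ||D x||_1 with D a 2nd-order difference.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def hotv_mm(y, lam=2.0, n_iter=50, eps=1e-8):
    n = len(y)
    # second-order difference operator: (Dx)_i = x_i - 2 x_{i+1} + x_{i+2}
    D = sp.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n), format="csc")
    I = sp.identity(n, format="csc")
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(D @ x), eps)   # MM weights from |Dx_k|
        A = I + lam * (D.T @ sp.diags(w) @ D)
        x = spsolve(A.tocsc(), y)                  # x_{k+1} = A^{-1} y
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
clean = np.exp(-((t - 0.5) ** 2) / 0.002)          # a peak-like test shape
noisy = clean + 0.05 * rng.standard_normal(t.size)
print(np.linalg.norm(noisy - clean), np.linalg.norm(hotv_mm(noisy) - clean))
```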

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 153
782 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach

Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta

Abstract:

Santiago de Compostela became a stable object of literary representation during the period between 1840 and 1915, approximately. This study offers a partial cartographical look at this process, suggesting that a cultural space like Compostela’s becoming an object of literary representation paralleled the first stages of its becoming a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often, the novels constitute ways to present a city to the outside, marking it for the gaze of others, as guidebooks do. That leads us to examine the ways of constructing and rendering communicable the local in other contexts. For that matter, we should also acknowledge the fact that a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student or a professor. The guidebooks coincide in this with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms that these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used the program ArcGIS for the mapping, defining the databases starting from an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni’s team at the University of Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective using elements found in the texts of our corpus (novels and tourist guides). Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (sketching itineraries and the selection of zones and indexicalization), like a foreigner’s visit guided by someone who knows the city or the description of one’s first entrance into the city’s premises. Last, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín’s 1915 novel La casa de la Troya) and the first attempt to create package tourist tours with Galicia as a destination, in a joint venture of Galician and British business owners, in the years immediately preceding the Great War. Literary cartography becomes a crucial instrument for digging deeply into the methods of cultural production of places. Through maps, the interaction between discursive forms seemingly so far removed from each other as novels and tourist guides becomes obvious and suggests the need to go deeper into a complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.

Keywords: compostela, literary geography, literary cartography, tourism

Procedia PDF Downloads 392
781 Thermodynamic Modeling and Exergoeconomic Analysis of an Isobaric Adiabatic Compressed Air Energy Storage System

Authors: Youssef Mazloum, Haytham Sayah, Maroun Nemer

Abstract:

The penetration of renewable energy sources into the electric grid is increasing significantly. However, the intermittence of these sources breaks the balance between electricity supply and demand. Hence the importance of energy storage technologies: they restore this balance and reduce the drawbacks of the intermittence of renewable energies. This paper discusses the modeling and cost-effectiveness of an isobaric adiabatic compressed air energy storage (IA-CAES) system. The proposed system combines a compressed air energy storage (CAES) system with a pumped hydro storage system and a thermal energy storage system. The aim of this combination is to overcome the disadvantages of the conventional CAES system, such as the losses due to storage pressure variation, the loss of the compression heat and the use of fossil fuel sources. A steady-state model is developed to perform energy and exergy analyses of the IA-CAES system and to calculate the distribution of the exergy losses in the system. A sensitivity analysis is also carried out to estimate the effects of some key parameters on the system's efficiency, such as the pinch of the heat exchangers, the isentropic efficiency of the rotating machinery and the pressure losses. The conducted sensitivity analysis is a local analysis, since the sensitivity of each parameter changes with the variation of the other parameters. Therefore, an exergoeconomic study is carried out, as well as a cost optimization, in order to reduce the cost of the electricity produced during the production phase. The optimizer used is OmOptim, a genetic-algorithm-based optimizer.

Keywords: cost-effectiveness, exergoeconomic analysis, isobaric adiabatic compressed air energy storage (IA-CAES) system, thermodynamic modeling

Procedia PDF Downloads 246
780 Mining Scientific Literature to Discover Potential Research Data Sources: An Exploratory Study in the Field of Haemato-Oncology

Authors: A. Anastasiou, K. S. Tingay

Abstract:

Background: Discovering suitable datasets is an important part of health research, particularly for projects working with clinical data from patients organized in cohorts (cohort data), but with the proliferation of national and international initiatives, it is becoming increasingly difficult for research teams to locate the real-world datasets most relevant to their project objectives. We present a method for identifying healthcare institutes in the European Union (EU) which may hold haemato-oncology (HO) data. A key enabler of this research was the bibInsight platform, a scientometric data management and analysis system developed by the authors at Swansea University. Method: A PubMed search was conducted using HO clinical terms taken from previous work. The resulting XML file was processed using the bibInsight platform, linking affiliations to the Global Research Identifier Database (GRID). GRID is an international, standardized list of institutions, including the city and country in which each institution exists, as well as a category for its main business type, e.g., Academic, Healthcare, Government, Company. Countries were limited to the 28 current EU members, and institute type to 'Healthcare'. An article was considered valid if at least one author was affiliated with an EU-based healthcare institute. Results: The PubMed search produced 21,310 articles, comprising 9,885 distinct affiliations with correspondence in GRID. Of these affiliations, 760 were from EU countries, and 390 of these were healthcare institutes. One affiliation was excluded as being a veterinary hospital. Two EU countries did not have any publications in our analysis dataset. The results were analysed by country and by individual healthcare institute. Networks both within the EU and internationally show institutional collaborations, which may suggest a willingness to share data for research purposes. Geographical mapping can ensure that data has broad population coverage. Collaborations with industry or government may exclude healthcare institutes that have embargos or additional costs associated with data access. Conclusions: Data reuse is becoming increasingly important, both for ensuring the validity of results and for the economical use of available resources. The ability to identify potential, specific data sources from over twenty thousand articles in less than an hour could assist in improving knowledge of, and access to, data sources. As our method has not yet established whether these healthcare institutes actually hold data, or merely publish on the topic, future work will involve text mining of data-specific concordant terms to identify numbers of participants, demographics, study methodologies, and sub-topics of interest.
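
A minimal sketch of the affiliation-filtering step might look as follows in Python. The GRID lookup is mocked as a plain dictionary, and the element names follow the standard PubMed XML export; everything here is an illustrative assumption rather than the bibInsight implementation.

```python
# Keep articles with at least one affiliation matching an EU healthcare
# institute; GRID lookup and EU list are truncated stand-ins for the sketch.
import xml.etree.ElementTree as ET

grid_lookup = {  # hypothetical: affiliation substring -> (country, type)
    "Hospital Clinic de Barcelona": ("Spain", "Healthcare"),
    "Uppsala University": ("Sweden", "Academic"),
}
EU = {"Spain", "Sweden"}

def eu_healthcare_articles(xml_path):
    hits = []
    root = ET.parse(xml_path).getroot()
    for art in root.iter("PubmedArticle"):
        pmid = art.findtext(".//PMID")
        for aff in art.iter("Affiliation"):
            text = aff.text or ""
            for name, (country, kind) in grid_lookup.items():
                if name in text and country in EU and kind == "Healthcare":
                    hits.append(pmid)
                    break
    return sorted(set(hits))   # de-duplicate PMIDs

# usage (with a PubMed XML export on disk):
# print(eu_healthcare_articles("pubmed_export.xml"))
```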

Keywords: data reuse, data discovery, data linkage, journal articles, text mining

Procedia PDF Downloads 115
779 The Effect of Electrical Discharge Plasma on Inactivation of Escherichia Coli MG 1655 in Pure Culture

Authors: Zoran Herceg, Višnja Stulić, Anet Režek Jambrak, Tomislava Vukušić

Abstract:

Electrical discharge plasma is a new non-thermal processing technique used for the inactivation of contaminating and hazardous microbes in liquids. Plasma is a source of different antimicrobial species, including UV photons, charged particles, and reactive species such as superoxide, hydroxyl radicals, nitric oxide and ozone. Escherichia coli was studied as a foodborne pathogen. The aim of this work was to examine the inactivation effect of electrical discharge plasma treatment on Escherichia coli MG 1655 in pure culture. Two plasma configurations and polarities were used. The first configuration used a titanium wire as the high-voltage needle; the second added a medical stainless steel needle to form bubbles in the treated volume, with a titanium wire as the high-voltage needle. Model solution samples were inoculated with Escherichia coli MG 1655 and treated by electrical discharge plasma for treatment times of 5 and 10 min at frequencies of 60, 90 and 120 Hz. With the first configuration, after 5 minutes of treatment at a frequency of 120 Hz the inactivation was a 1.3 log₁₀ reduction, and after 10 minutes it was a 3.0 log₁₀ reduction. At a frequency of 90 Hz, the inactivation after 10 minutes was a 1.3 log₁₀ reduction. With the second configuration, after 5 minutes of treatment at a frequency of 120 Hz the inactivation was a 1.2 log₁₀ reduction, and after 10 minutes it was likewise a 3.0 log₁₀ reduction. This work also examined biofilm formation, nucleotide and protein leakage at 260/280 nm before and after treatment, and the recovery of treated samples. Further optimization of the method is needed to understand the mechanism of inactivation.

Keywords: electrical discharge plasma, escherichia coli MG 1655, inactivation, point-to-plate electrode configuration

Procedia PDF Downloads 432
778 Biomass and Lipid Enhancement by Response Surface Methodology in High Lipid Accumulating Indigenous Strain Rhodococcus opacus and Biodiesel Study

Authors: Kulvinder Bajwa, Narsi R. Bishnoi

Abstract:

Finding a sustainable alternative for today's petrochemical industry is a major challenge faced by researchers, scientists, chemical engineers, and society at the global level. Microorganisms are considered a sustainable feedstock for third-generation biofuel production. In this study, we investigated the potential of a native bacterial strain, isolated from a petrol-contaminated site, for the production of biodiesel. The bacterium was identified as Rhodococcus opacus by biochemical tests and 16S rRNA sequencing. Compositional analysis of the bacterial biomass was carried out by Fourier transform infrared spectroscopy (FTIR) in order to confirm the lipid profile. Lipid and biomass production were optimized using a Box-Behnken design (BBD) within response surface methodology (RSM). The factors selected for the optimization of growth conditions were the glucose, yeast extract, and ammonium nitrate concentrations. The experimental model developed through RSM was found suitable to describe lipid and biomass production, indicating higher lipid and biomass yields at minimum concentrations of ammonium nitrate and yeast extract and a rather high dose of glucose supplementation. The optimum experimental results were 2.88 gL⁻¹ biomass and 38.75% lipid content at 20 gL⁻¹ glucose, 0.5 gL⁻¹ ammonium nitrate and 1.25 gL⁻¹ yeast extract. Furthermore, a GC-MS study revealed that Rhodococcus opacus has a favorable fatty acid profile for biodiesel production.
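
For readers unfamiliar with RSM, the following sketch fits a second-order response surface to a coded three-factor Box-Behnken design by least squares; the responses below are invented placeholders, not the study's data.

```python
# Second-order RSM fit over a 3-factor Box-Behnken design (coded -1/0/+1):
# 12 edge points plus 3 center points; y values are assumed lipid contents.
import numpy as np

X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([25.1, 33.4, 22.0, 27.9, 24.5, 31.8, 26.2, 34.0,
              28.3, 23.7, 30.1, 25.5, 36.9, 37.2, 36.5])  # assumed lipid %

def quad_terms(x):
    g, a, e = x   # glucose, ammonium nitrate, yeast extract (coded)
    # intercept, linear, interaction and squared terms of the quadratic model
    return [1, g, a, e, g * a, g * e, a * e, g * g, a * a, e * e]

A = np.array([quad_terms(x) for x in X])
coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)   # least-squares RSM fit
pred = float(np.dot(quad_terms([1, -1, 0]), coef))
print("coefficients:", np.round(coef, 2))
print("predicted lipid content at coded point (+1, -1, 0):", round(pred, 2))
```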

Keywords: biofuel, oleaginous bacteria, Rhodococcus opacus, FTIR, BBD, free fatty acids

Procedia PDF Downloads 136
777 Opto-Electronic Properties and Structural Phase Transition of Filled-Tetrahedral NaZnAs

Authors: R. Khenata, T. Djied, R. Ahmed, H. Baltache, S. Bin-Omran, A. Bouhemadou

Abstract:

We predict the structural, phase-transition and optoelectronic properties of the filled-tetrahedral (Nowotny-Juza) compound NaZnAs in this study. Calculations are carried out by employing the full-potential (FP) linearized augmented plane wave (LAPW) plus local orbitals (lo) scheme developed within the framework of density functional theory (DFT). The exchange-correlation energy/potential (EXC/VXC) functional is treated using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA). In addition, the Tran-Blaha (TB) modified Becke-Johnson (mBJ) potential is incorporated to obtain better precision for the optoelectronic properties. Geometry optimization is carried out to obtain reliable results for the total energy as well as other structural parameters for each phase of the NaZnAs compound. The order of the structural transitions as a function of pressure is found to be: Cu2Sb type → β → α phase. Our calculated electronic energy band structures for all structural phases, at the level of both PBE-GGA and the mBJ potential, show that NaZnAs is a direct (Γ–Γ) band gap semiconductor. However, compared to PBE-GGA, the mBJ potential approximation reproduces higher values of the fundamental band gap. Regarding the optical properties, calculations of the real and imaginary parts of the dielectric function, refractive index, reflectivity coefficient, absorption coefficient and energy loss-function spectra are performed over photon energies ranging from 0.0 to 30.0 eV, with the incident radiation polarized parallel to both the [100] and [001] crystalline directions.

Keywords: NaZnAs, FP-LAPW+lo, structural properties, phase transition, electronic band-structure, optical properties

Procedia PDF Downloads 436
776 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River

Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang

Abstract:

The analysis and study of open channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e. their length-to-depth-to-width ratios, generally allow one to disregard processes occurring in the vertical and transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and the extrapolation of various scenarios. The Magdalena River in Colombia is a large river basin draining the country from south to north over 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuations and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken in order to correctly model the flow characteristics of the river in this region, so as to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended sediment and river bed samples were retrieved, as well as soil perforations over the banks. Based on the DEM ground digital mapping survey and field data, a 2DH flow model was prepared using the Iber freeware, which is based on the finite volume method in a non-structured mesh environment. The calibration was carried out by comparison with available historical data from a nearby hydrologic gauging station. Although the model was able to effectively predict the overall flow processes in the region, its spatial characteristics and limitations related to pressure conditions did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings. A significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross section, as a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.

Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander

Procedia PDF Downloads 319
775 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world's top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to inform the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired for predictive modelling is processed through edge computing and collectively stored within a data lake. Involvement in this digital transformation has necessitated standardizing the software architecture used to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 112
774 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

The problem of finding optimal schedules for each piece of equipment in a production process is considered, for a process which consists of a single manufacturing stage and which can handle different types of products, where a changeover from handling one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each product in one set-up. The changeover costs increase with the number of set-ups, and hence, to minimize the costs associated with product changeovers, the planning should be such that similar types of products are processed successively, so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of set-ups or, equivalently, maximizing the capacity utilization between every set-up, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and generally different customers' orders are assigned one of two priorities: 'normal' or 'priority'. The problem of production planning in such a situation can be formulated as a Multiple Arc Network (MAN) model and can be solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled, keeping in view the customer priorities. Algorithms are presented for solving the MAN formulation of production planning with customer priorities. The application of the model is demonstrated through numerical examples.
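
To illustrate the kind of computation involved, the sketch below solves a small max-flow instance with networkx, with orders routed through product set-ups to a capacity-limited sink. The graph and capacities are invented placeholders, and it does not reproduce the priority-arc algorithm itself.

```python
# Max-flow stand-in for one pass of the MAN computation: source -> orders
# -> compatible set-ups -> sink; all capacities are illustrative.
import networkx as nx

G = nx.DiGraph()
# source -> customer orders (demand as capacity)
G.add_edge("s", "order_priority", capacity=40)
G.add_edge("s", "order_normal", capacity=60)
# orders -> compatible set-ups (product types)
G.add_edge("order_priority", "setup_A", capacity=40)
G.add_edge("order_normal", "setup_A", capacity=30)
G.add_edge("order_normal", "setup_B", capacity=30)
# set-ups -> sink, capped by machine capacity per set-up
G.add_edge("setup_A", "t", capacity=50)
G.add_edge("setup_B", "t", capacity=35)

flow_value, flow = nx.maximum_flow(G, "s", "t")
print("total capacity utilized:", flow_value)
print(flow["s"])   # how much of each order is scheduled
```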

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 402
773 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medicine. This research proposes a method for incorporating lexical-semantic knowledge from abundant medical entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning model. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
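
A minimal sketch of the feature-concatenation idea, in PyTorch: word embeddings and per-term lexical semantic vectors are concatenated before a 1D convolution with max-over-time pooling. Dimensions and embedding tables are invented placeholders, and the SDG component is not reproduced.

```python
# Concatenate word2vec-style embeddings with per-term LSVs before a 1D CNN.
import torch
import torch.nn as nn

vocab, emb_dim, lsv_dim, n_classes, seq_len = 1000, 100, 20, 4, 32
word_emb = nn.Embedding(vocab, emb_dim)   # word2vec-initialized in the paper
lsv_table = nn.Embedding(vocab, lsv_dim)  # lexical semantic vectors (assumed)

class LsvCnn(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim + lsv_dim, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = torch.cat([word_emb(tokens), lsv_table(tokens)], dim=-1)
        x = self.conv(x.transpose(1, 2)).relu()   # (batch, 64, seq_len)
        x = x.max(dim=2).values                   # max-over-time pooling
        return self.fc(x)

logits = LsvCnn()(torch.randint(0, vocab, (8, seq_len)))
print(logits.shape)   # torch.Size([8, 4])
```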

Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision

Procedia PDF Downloads 126
772 Biodegradation of Chlorophenol Derivatives Using Macroporous Material

Authors: Dmitriy Berillo, Areej K. A. Al-Jwaid, Jonathan L. Caplin, Andrew Cundy, Irina Savina

Abstract:

Chlorophenols (CPs) are used as precursors in the production of higher CPs and dyestuffs, and as preservatives. Contamination of ground water by CPs lies in the range of 0.15-100 mg/L. The EU has set maximum concentration limits for pesticides and their degradation products of 0.1 μg/L and 0.5 μg/L, respectively. People working in industries which produce textiles, leather products, domestic preservatives, and petrochemicals are most heavily exposed to CPs. The International Agency for Research on Cancer categorizes CPs as potential human carcinogens. Existing multistep water purification processes for CPs, such as hydrogenation, ion exchange, liquid-liquid extraction, adsorption by activated carbon, forward and inverse osmosis, electrolysis, sonochemistry, UV irradiation, and chemical oxidation, are not always cost effective and can cause the formation of even more toxic or mutagenic derivatives. Bioremediation of CP derivatives utilizing microorganisms achieves 60 to 100% decontamination efficiency, and the process is more environmentally friendly compared with existing physico-chemical methods. Microorganisms immobilized onto a substrate show many advantages over free-bacteria systems, such as higher biomass density, higher metabolic activity, and resistance to toxic chemicals. They also enable continuous operation, avoiding the requirement for biomass-liquid separation. The immobilized bacteria can be reused several times, which opens the opportunity for developing cost-effective processes for wastewater treatment. In this study, we develop a bioremediation system for CPs based on macroporous materials, which can be efficiently used for wastewater treatment. Conditions for the preparation of the macroporous material from specific bacterial strains (Pseudomonas mendocina and Rhodococcus koreensis) were optimized. The concentration of bacterial cells was kept constant; the only difference was the type of cross-linking agent used, e.g. glutaraldehyde or novel polymers, which were utilized at concentrations of 0.5 to 1.5%. SEM images and rheology analysis of the material indicated a monolithic macroporous structure. Phenol was chosen as a model system to optimize the function of the cryogel material and to estimate its enzymatic activity, since it is relatively less toxic and harmful compared to CPs. Several types of macroporous systems comprising live bacteria were prepared. The viability of the cross-linked bacteria was checked using a Live/Dead BacLight kit and laser scanning confocal microscopy, which revealed the presence of viable bacteria with the novel cross-linkers, whereas the control material, cross-linked with glutaraldehyde (GA), contained mostly dead cells. The bacteria-based bioreactors were used for phenol degradation in batch mode at an initial concentration of 50 mg/L, pH 7.5 and a temperature of 30°C. Bacterial strains cross-linked with GA showed an insignificant ability to degrade phenol, and for one week only, while a combination of cross-linking agents gave higher stability and viability and the possibility of reuse for at least five weeks. Furthermore, conditions for CP degradation will be optimized, and the chlorophenol degradation rates will be compared to those for phenol. This is a cutting-edge bioremediation approach, which allows the purification of waste water from such compounds without a separation step to remove free planktonic bacteria. Acknowledgments: Dr. Berillo D. A. is very grateful to the Marie Curie Individual Fellowship Program for funding this research.

Keywords: bioremediation, cross-linking agents, cross-linked microbial cell, chlorophenol degradation

Procedia PDF Downloads 214
771 Creation of Ultrafast Ultra-Broadband High Energy Laser Pulses

Authors: Walid Tawfik

Abstract:

The interaction of high-intensity ultrashort laser pulses with plasma generates many significant applications, including soft X-ray lasers, time-resolved laser-induced plasma spectroscopy (LIPS), and laser-driven accelerators. The development of femtosecond down to few-femtosecond optical pulses has provided scientists with a vital tool for a variety of ultrashort phenomena, such as high-field physics, femtochemistry and high harmonic generation (HHG). In this research, we generate two-octave-wide ultrashort supercontinuum pulses with an optical spectrum extending from 3.5 eV (ultraviolet) to 1.3 eV (near-infrared) using a capillary fiber filled with neon gas. These pulses are formed through nonlinear self-phase modulation in the neon gas as a nonlinear medium. The created pulses were investigated using spectral phase interferometry for direct electric-field reconstruction (SPIDER). A complete description of the output pulses was obtained, including the beam profile, the pulse width, and the spectral bandwidth. After reaching optimized conditions, the reconstructed pulse autocorrelation function was applied to the shortest pulse durations to achieve transform-limited ultrashort pulses with durations below 6 fs and energies up to 600 μJ. Moreover, the effect of neon pressure variation on the pulse width was examined. The nonlinear self-phase modulation was found to increase with the pressure of the neon gas. The observed results may lead to an advanced method to control and monitor ultrashort transient interactions in femtochemistry.

Keywords: supercontinuum, ultrafast, SPIDER, ultra-broadband

Procedia PDF Downloads 224
770 Determining Factors for Successful Blended Learning in Higher Education: A Qualitative Study

Authors: Pia Wetzl

Abstract:

The learning process of students can be optimized by combining online teaching with face-to-face sessions. So-called blended learning offers extensive flexibility as well as contact opportunities with fellow students and teachers. Furthermore, learning can be individualized and self-regulated. The aim of this article is to investigate which factors are necessary for blended learning to be successful. Semi-structured interviews were conducted with students (N = 60) and lecturers (N = 21) from different disciplines at two German universities. The questions focused on the perception of online, face-to-face and blended learning courses, as well as on potential optimizations and obstacles to practical implementation. The results show that on-site presence is very important for blended learning to be successful. If students do not get to know each other on-site, there is a risk of loneliness during the self-learning phases. This has a negative impact on motivation. From the perspective of the lecturers, the willingness of students to participate in on-site sessions is low. Especially when there is no obligation to attend, group work is difficult to implement because the number of students attending is too low. Lecturers would like more means from the university and its administration to enforce attendance; in their view, this is the only way to ensure the success of blended learning. In addition, they regard the design of blended learning courses as requiring a great deal of time, which they are not always willing to invest. More incentives are necessary to keep lecturers motivated to develop engaging teaching material. The study identifies factors that can help teachers design blended learning courses. It also provides specific implementation advice and identifies potential impacts. This catalogue has great value for the future-oriented development of university courses. Future studies could test its practical use.

Keywords: blended learning, higher education, teachers, student learning, qualitative research

Procedia PDF Downloads 69
769 CybeRisk Management in Banks: An Italian Case Study

Authors: E. Cenderelli, E. Bruno, G. Iacoviello, A. Lazzini

Abstract:

The financial sector is exposed to the risk of cyber-attacks like any other industrial sector. Furthermore, the topic of CybeRisk (cyber risk) has become particularly relevant given that Information Technology (IT) attacks have increased drastically in recent years and cannot be stopped by single organizations, requiring a response at the international and national level. IT risk is never a matter purely for the IT manager, although he clearly plays a key role. A bank's risk management function requires a thorough understanding of the evolving risks as well as of the tools and practical techniques available to address them. Following European and national legislation regarding CybeRisk in the financial system, banks are called upon to strengthen their operational model for CybeRisk management. This will require an important change, with more intense collaboration with the structures that deal with information security, in order to develop an ad hoc system for the evaluation and control of this type of risk. The aim of this work is to propose a framework for the management and control of CybeRisk that bridges the gap in the literature regarding the understanding and consideration of CybeRisk as an integral part of business management. The IT function has a strong relevance in the management of CybeRisk, which is perceived mainly as operational risk, but with a positive tendency on the part of risk management towards CybeRisk assessment methods that are increasingly complete, quantitative and better able to describe the possible impacts on the business. The paper answers the following research questions: Is it possible to define a CybeRisk governance structure able to support the comparison between risk and security? How can the relationships between IT assets be integrated into a CybeRisk assessment framework to guarantee a system of protection and risk control? From a methodological point of view, this research uses a case study approach. The choice of Monte dei Paschi di Siena was determined by the specific features of one of Italy's biggest lenders. An intensive research strategy was chosen: an in-depth study of reality. The case study methodology is an empirical approach to exploring a complex and current phenomenon that develops over time. The use of cases also has the advantage of allowing a deepening of aspects concerning the 'how' and 'why' of contemporary events, over which the scholar has little control. The research is based on quantitative data and on qualitative information obtained through semi-structured, open-ended interviews and questionnaires with directors, members of the audit committee, risk, IT and compliance managers, and those responsible for the internal audit function and anti-money laundering. The added value of the paper lies in the development of a framework based on a mapping of IT assets, from which it is possible to identify their relationships for the purposes of more effective management and control of cyber risk.

Keywords: bank, CybeRisk, information technology, risk management

Procedia PDF Downloads 232
768 3D Geomechanical Model: The Best Solution of the 21st Century for Perforation Problems

Authors: Luis Guiliana, Andrea Osorio

Abstract:

A lack of comprehension of reservoir geomechanical conditions may cause operational problems that cost the industry billions of dollars per year. The drilling operations at the Ceuta Field, Area 2 South, Maracaibo Lake, have been very expensive due to problems associated with drilling. The principal objective of this investigation is to develop a 3D geomechanical model of this area in order to optimize future drilling in the field. For this purpose, a 1D geomechanical model was built first, following the MEM (Mechanical Earth Model) workflow, which consists of the following steps: 1) data auditing, 2) analysis of drilling events and the structural model, 3) mechanical stratigraphy, 4) overburden stress, 5) pore pressure, 6) rock mechanical properties, 7) horizontal stresses, 8) direction of the horizontal stresses, 9) wellbore stability. The 3D MEM was developed from the geostatistical model of the Eocene C-SUP VLG-3676 reservoir and the 1D MEM, and the geomechanical grid was embedded with these data. The analysis of the results showed that the problems occurring in the examined wells were mainly due to wellbore stability issues. It was determined that the stress field changes as the stratigraphic column deepens: the regime is normal to strike-slip in the Middle Miocene and Lower Miocene, and strike-slip to reverse in the Eocene. Accordingly, at the level of the Eocene, the most advantageous direction to drill is parallel to the maximum horizontal stress (157º). The 3D MEM provides a three-dimensional visualization of the variations in rock mechanical properties, stresses and operational windows (mud weights and pressures). This will facilitate the optimization of future drilling in the area, including in those zones without any geomechanical information.

Keywords: geomechanics, MEM, drilling, stress

Procedia PDF Downloads 273
767 Magnetic Cellulase/Halloysite Nanotubes as Biocatalytic System for Converting Agro-Waste into Value-Added Product

Authors: Devendra Sillu, Shekhar Agnihotri

Abstract:

A 'nano-biocatalyst' utilizes the ordered assembly of enzymes onto nanomaterial carriers to catalyze desirable biochemical kinetics and substrate selectivity. The current study describes an interdisciplinary approach for converting an agricultural waste, sugarcane bagasse, into D-glucose by exploiting cellulase-decorated halloysite nanotubes (HNTs) as a nano-biocatalytic system. Cellulase was successfully immobilized on HNTs employing polydopamine as an eco-friendly crosslinker, while iron oxide nanoparticles were attached to facilitate magnetic recovery of the material. The characterization studies (UV-Vis, TEM, SEM, and XRD) displayed the characteristic features of both cellulase and magnetic HNTs in the resulting nanocomposite. Various factors which may influence the activity of the biocatalytic system (working pH, temperature, crosslinker concentration, enzyme concentration) were investigated. The experimental design for process optimization was performed using Response Surface Methodology (RSM). The analyses demonstrated that the nano-biocatalyst retained 80.30% activity even at elevated temperature (55°C) and showed excellent storage stability after 10 days. Repeated use of the system revealed remarkably consistent relative activity over several cycles. The immobilized cellulase was employed to decompose the agro-waste, and a maximum decomposition rate of 67.2% was achieved. Conclusively, magnetic HNTs can serve as a potential support for enzyme immobilization with long-term usability, good efficacy, reusability and easy recovery from solution.

Keywords: halloysite nanotubes, enzyme immobilization, cellulase, response surface methodology, magnetic recovery

Procedia PDF Downloads 133
766 A Q-Methodology Approach for the Evaluation of Land Administration Mergers

Authors: Tsitsi Nyukurayi Muparari, Walter Timo De Vries, Jaap Zevenbergen

Abstract:

The nature of land administration accommodates diversity in terms of both the spatial data handling activities and the expertise involved, which supposedly aims to satisfy the unpredictable demands for land data and the diverse demands of the customers arising from the land. However, it is known that strategic decisions of restructuring are in most cases repelled in favour of complex structures that strive to accommodate professional diversity and diverse roles in the field of land administration. Yet despite this widely accepted knowledge, there is scant theoretical knowledge concerning the psychological methodologies that can extract the deeper perceptions from the diverse spatial expertise in order to explain the invisible control arm behind the polarised reception of ideas of change. This paper evaluates Q-methodology in the context of a cadastre and land registry merger (under one agency), using the Swedish cadastral system as a case study. Precisely, the aim of this paper is to evaluate the effectiveness of Q-methodology for modelling the diverse psychological perceptions of spatial professionals involved in the widely contested decision of merging the cadastre and land registry components of land administration. The empirical approach prescribed by Q-methodology starts with concourse development, followed by the design of the statements and the Q-sort instrument, selection of the participants, the Q-sorting exercise, factor extraction with PQMethod, and finally narrative development by the logic of abduction. The paper uses 36 statements developed from the dominant competing values theory, which stands out for its reliability and validity, purposively selects 19 participants for the Q-sorting exercise, proceeds with factor extraction using the varimax and judgemental rotations provided by PQMethod, and effects the narrative construction using the logic of abduction. The findings from the diverse perceptions of cadastral professionals on the merger of the land registry and cadastre components in Sweden's mapping agency (Lantmäteriet) show that the focus is inclined towards perfecting the relationship between the legal expertise and the technical spatial expertise. There is much emphasis on tradition, loyalty and communication attributes, which concern the organisation's internal environment, rather than on innovation and market attributes, which reveal customer behaviour and the needs arising from changing humankind-land relations. It can be concluded that Q-methodology offers effective tools that pursue a psychological approach for the evaluation and gradation of decisions of strategic change through extracting the local perceptions of spatial expertise.
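
As an illustration of the factor-extraction step (normally done in PQMethod), the sketch below applies factor analysis with varimax rotation to a synthetic Q-sort matrix, treating participants as variables; the data and factor count are invented placeholders.

```python
# Q-methodology-style factor extraction on a synthetic 36x19 Q-sort matrix
# (statements x participants); participants load onto extracted factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
q_sorts = rng.integers(-4, 5, size=(36, 19)).astype(float)  # stand-in data

fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(q_sorts)   # statement factor scores, shape (36, 3)
loadings = fa.components_.T          # participant loadings, shape (19, 3)
print(np.round(loadings[:5], 2))     # first five participants' loadings
```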

Keywords: cadastre, factor extraction, land administration merger, land registry, q-methodology, rotation

Procedia PDF Downloads 194
765 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for the exploration of log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, and so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries involving the expensive fields. For the second, we evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings the 95th latency percentile down from 30 to 4 seconds.
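
A minimal sketch of the second approach, accuracy annotation under sampling: answer a count query from a uniform row sample and flag the estimate as accurate only when an approximate 95% confidence interval stays within a relative-error budget. Data and thresholds are invented placeholders, not PowerDrill internals.

```python
# Sampled count estimation with a simple accurate/not-accurate annotation.
import math
import random

random.seed(0)
rows = [random.choice("abcde") for _ in range(100_000)]   # stand-in log field

rate = 0.01                                  # 1% uniform sample
sample = [r for r in rows if random.random() < rate]

def estimate_count(value, rel_err_budget=0.05):
    k, n = sum(r == value for r in sample), len(sample)
    p_hat = k / n
    est = p_hat * len(rows)                  # scale sample count up
    # normal-approximation 95% CI half-width on the scaled count
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n) * len(rows)
    accurate = est > 0 and half / est <= rel_err_budget
    return round(est), accurate

print(estimate_count("a"))   # (estimate, flagged-accurate?)
```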

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 259
764 Development and Optimization of Colon Targeted Drug Delivery System of Ayurvedic Churna Formulation Using Eudragit L100 and Ethyl Cellulose as Coating Material

Authors: Anil Bhandari, Imran Khan Pathan, Peeyush K. Sharma, Rakesh K. Patel, Suresh Purohit

Abstract:

The purpose of this study was to prepare time- and pH-dependent release tablets of an Ayurvedic Churna formulation and to evaluate their advantages as a colon targeted drug delivery system. Vidangadi Churna, which contains embelin and gallic acid, was selected for this study. Embelin is used as a therapeutic agent in helminthiasis. Embelin is insoluble in water and unstable in the gastric environment, so it was formulated into time- and pH-dependent tablets coated with a combination of two polymers, Eudragit L100 and ethyl cellulose. Core tablets of 150 mg, containing dried extract and lactose, were prepared by the wet granulation method. Compression coating was used, with a polymer amount of 150 mg for each of the upper and lower coating layers. The results showed no release in 0.1 N HCl or pH 6.8 phosphate buffer for the initial 5 hours, and about 98.97% of the drug was released in pH 7.4 phosphate buffer over a total of 17 hours. The in vitro release profiles of drug from the formulation were best described by first-order kinetics, which gave the highest linearity (r² = 0.9943). The results of the present study demonstrate that the time- and pH-dependent tablet system is a promising vehicle for preventing rapid hydrolysis in the gastric environment and improving the oral bioavailability of embelin and gallic acid for the treatment of helminthiasis.
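
As an illustration of the kinetic analysis, the sketch below fits a first-order release model with a lag time to dissolution data using SciPy and reports r²; the time points and release percentages are invented placeholders, not the study's measurements.

```python
# Fit Q(t) = Q_inf * (1 - exp(-k * (t - t0))) for t > t0 (lag before release).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 7, 9, 11, 13, 15, 17], dtype=float)       # hours, assumed
q = np.array([0.0, 22.0, 45.0, 63.0, 78.0, 90.0, 98.9])    # % released, assumed

def first_order(t, q_inf, k, t0):
    return q_inf * (1 - np.exp(-k * np.clip(t - t0, 0, None)))

popt, _ = curve_fit(first_order, t, q, p0=[100.0, 0.3, 5.0])
resid = q - first_order(t, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((q - q.mean())**2)
print(np.round(popt, 3), "r^2 =", round(r2, 4))
```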

Keywords: embelin, gallic acid, Vidangadi Churna, colon targeted drug delivery

Procedia PDF Downloads 360
763 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have placed a major focus on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely the existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted in order to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters for four forms of SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions were also compatible with samples ranging from low to high soil water content among the samples evaluated in this study.
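
As an illustration of the fitting step, the sketch below least-squares fits the Brooks and Corey SWCC to suction/saturation data with SciPy; the data points and parameter bounds are invented placeholders.

```python
# Brooks-Corey SWCC fit: Se = (psi_b / psi)^lambda for psi > psi_b, else 1.
import numpy as np
from scipy.optimize import curve_fit

psi = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)      # suction (kPa)
se = np.array([1.0, 0.98, 0.80, 0.55, 0.36, 0.24, 0.16, 0.10])  # eff. saturation

def brooks_corey(psi, psi_b, lam):
    return np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)

popt, _ = curve_fit(brooks_corey, psi, se, p0=[3.0, 0.6],
                    bounds=([0.1, 0.01], [50.0, 5.0]))
print("air-entry value psi_b = %.2f kPa, pore-size index lambda = %.2f"
      % tuple(popt))
```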

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 338
762 The Importance of Visual Communication in Artificial Intelligence

Authors: Manjitsingh Rajput

Abstract:

Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI, emphasizing applications such as computer vision, object recognition, image classification and autonomous systems, as well as the deep learning techniques and neural networks that underpin visual understanding. It also discusses challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability, and explores the integration of visual communication with other modalities, such as natural language processing and speech recognition. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The methodology explores the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems, making them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems.

Keywords: visual communication in AI, computer vision, visual aid in communication, essence of visual communication

Procedia PDF Downloads 95
761 Heat Sink Optimization for a High Power Wearable Thermoelectric Module

Authors: Zohreh Soleimani, Sally Salome Shahzad, Stamatis Zoras

Abstract:

As a result of current energy and environmental issues, the human body is recognised as one of the promising candidates for converting waste heat into electricity (the Seebeck effect). The thermoelectric generator (TEG) is one of the most prevalent means of harvesting body heat and converting it into eco-friendly electrical power. However, the uneven distribution of body heat and the body's curved geometry restrict the harvesting of an adequate amount of energy. To transform the heat radiated by the body into power effectively, the most direct solution is to conform the TEG to the arbitrary surface of the body and to increase the temperature difference across the thermoelectric legs. To this end, a computational study using COMSOL Multiphysics is presented in this paper, with the main focus on the impact of integrating a flexible wearable TEG with a corrugated heat sink on the module power output. To eliminate external parameters (temperature, air flow, humidity), the simulations are conducted at indoor thermal levels with a stationary wearer. The full thermoelectric characterization of the proposed TEG fitted with a wavy-shaped heat sink has been computed, leading to a maximum power output of 25 µW/cm² at a temperature gradient of nearly 13°C. Notably, because both the proposed TEG and the heat sink are flexible, the applicability and efficiency of the module remain high even on the curved surfaces of the body. Consequently, the results demonstrate the superiority of such a TEG over state-of-the-art counterparts fabricated without a heat sink and offer a new train of thought for the development of self-sustained and unobtrusive wearable power supplies that generate energy from low-grade heat dissipated by the body.
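
The scale of the reported output can be sanity-checked with the standard matched-load TEG relation P = (S·ΔT)²/(4R). In the sketch below, the effective Seebeck coefficient, internal resistance, and module area are assumed values chosen only to illustrate the calculation; with the abstract's ~13°C gradient they land near the reported ~25 µW/cm².

```python
# Back-of-the-envelope sketch of matched-load TEG power density,
# P = (S_total * dT)^2 / (4 * R_internal). All parameter values are
# assumptions for illustration, not the module parameters from the study.
def teg_power_density(seebeck_v_per_k, delta_t_k, internal_resistance_ohm, area_cm2):
    """Maximum (matched-load) electrical power per cm^2 of module area."""
    p_watt = (seebeck_v_per_k * delta_t_k) ** 2 / (4.0 * internal_resistance_ohm)
    return p_watt / area_cm2

# Assumed: 10 mV/K effective Seebeck coefficient across all legs in series,
# 13 K gradient (as in the abstract), 10 ohm internal resistance, 16 cm^2 module
p = teg_power_density(10e-3, 13.0, 10.0, 16.0)
print(f"power density ~ {p * 1e6:.1f} uW/cm^2")  # ~26 uW/cm^2
```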

Keywords: device simulation, flexible thermoelectric module, heat sink, human body heat

Procedia PDF Downloads 151
760 Prosecution as Persecution: Exploring the Enduring Legacy of Judicial Harassment of Human Rights Defenders and Political Opponents in Zimbabwe, Cases from 2013-2016

Authors: Bellinda R. Chinowawa

Abstract:

As part of a wider strategy to stifle civil society, governments routinely resort to judicial harassment through the use of civil and criminal proceedings to impugn the integrity of human rights defenders and that of perceived political opponents. This phenomenon is rife in militarised or autocratic regimes where there is no tolerance for dissenting voices. Zimbabwe, ostensibly a presidential republic founded on the values of transparency, equality, and freedom, is characterised by brutal suppression of perceived political opponents and of those who assert their basic human rights. This is done through a wide range of tactics, including unlawful arrests and detention, torture and other cruel, inhuman and degrading treatment, and enforced disappearances. Professionals, including journalists and doctors, are similarly not spared from state attack. For human rights defenders, the most widely used tool of repression is judicial harassment, whereby the judicial system is used to persecute them. This can include the levying of criminal charges, civil lawsuits, and unnecessary administrative proceedings. Charges preferred against them range from petty offences such as criminal nuisance to more serious charges of terrorism and subverting a constitutional government. Additionally, government-sponsored individuals and organisations file strategic lawsuits with pecuniary implications in order to intimidate and silence critics and engender self-censorship. Some HRDs are convicted and sentenced to prison terms despite not being criminals in the true sense, while for those who are acquitted, judicial harassment diverts energy and resources away from their human rights work. Through a consideration of statistical data reported by human rights organisations and face-to-face interviews with a cross-section of human rights defenders, the article will map the incidence of judicial harassment in Zimbabwe. The article will consider the multi-level sociological and contextual factors that lead the Government of Zimbabwe to have easy recourse to the criminal law, and the debilitating effect of these actions on HRDs. These factors include the breakdown of the rule of law resulting in state capture of the judiciary, the proven efficacy of judicial harassment from colonial times to date, and the lack of an adequate redress mechanism at the international level. By mapping the use of the judiciary as a tool of repression, from the inception of modern-day Zimbabwe to date, it is hoped that HRDs will realise that they are part of a greater community of activists throughout the ages and will be emboldened by the realisation that judicial harassment is an age-old tactic used by fallen regimes, one which should not deter them from calling for accountability.

Keywords: autocratic regime, colonial legacy, judicial harassment, human rights defenders

Procedia PDF Downloads 233
759 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and to estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff's quasi-steady stall model. NGN is an algorithm that combines a Feed Forward Neural Network (FFNN) with Gauss-Newton optimization to estimate the parameters; it requires neither an a priori postulation of the mathematical model nor the solving of the equations of motion. The NGN method was first validated on real flight data generated at moderate angles of attack before being applied to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and with maximum likelihood estimates. Validation was also carried out by comparing the measured motion variables with the responses generated using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained, in terms of stall characteristics and aerodynamic parameters, were encouraging and sufficiently accurate to establish NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
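
For reference, Kirchhoff's quasi-steady stall model mentioned in the abstract relates lift to the trailing-edge flow-separation point X as CL = CL_alpha * alpha * ((1 + sqrt(X))/2)^2, with X = 0.5 * (1 - tanh(a1 * (alpha - alpha*))). The sketch below evaluates this form; the parameter values are illustrative assumptions, not the Hansa-3 estimates.

```python
# Sketch of Kirchhoff's quasi-steady stall model for the lift curve.
# Parameter values below are illustrative, not estimates from the study.
import numpy as np

def kirchhoff_cl(alpha_rad, cl_alpha=5.5, a1=25.0, alpha_star_rad=np.radians(15.0)):
    """Lift coefficient with trailing-edge flow separation (quasi-steady)."""
    # Flow-separation point X in [0, 1]: 1 = fully attached, 0 = fully separated
    x = 0.5 * (1.0 - np.tanh(a1 * (alpha_rad - alpha_star_rad)))
    return cl_alpha * ((1.0 + np.sqrt(x)) / 2.0) ** 2 * alpha_rad

alphas = np.radians(np.linspace(0, 25, 6))
for a_deg, cl in zip(np.degrees(alphas), kirchhoff_cl(alphas)):
    print(f"alpha = {a_deg:5.1f} deg  CL = {cl:.2f}")
```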

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 446
758 Determination of Genetic Markers, Microsatellite Type, Linked to Milk Production Traits in Goats

Authors: Mohamed Fawzy Elzarei, Yousef Mohammed Al-Dakheel, Ali Mohamed Alseaf

Abstract:

Modern molecular techniques, such as single-marker analysis of traits linked to genetic markers, can provide rapid and accurate genetic results. In the last two decades of the last century, the application of molecular techniques advanced considerably in cattle, sheep, and pigs; in goats, especially in our region, it still lags far behind other species. As reported by many researchers, the microsatellite marker is one of the markers most suitable for linkage studies. Single-marker analysis of traits of interest allows animals to be selected early without the need to map the entire genome. The simplicity, applicability, and low cost of this technique have given it a wide range of applications in many areas of genetics and molecular biology. It also provides a useful approach for evaluating genetic differentiation, particularly in populations that are genetically poorly characterized. The expected breeding value (EBV) and yield deviation (YD) are considered the parameters most used for studying the linkage between quantitative traits and molecular markers, since these values are raw data corrected for non-genetic factors. A total of 17 microsatellite markers (from chromosomes 6, 14, 18, 20, and 23) were used in this study to search for chromosomal regions that could be responsible for genetic variability in some milk traits and that explain part of the phenotypic variance. The results of single-marker analyses were used to identify linkage between microsatellite markers and variation in the EBVs of the following traits: milk yield, protein percentage, fat percentage, litter size and weight at birth, and litter size and weight at weaning. In the parameter estimates from the forward and backward solutions of the stepwise regression procedure for the milk yield trait, only two markers, OARCP9 and AGLA29, showed a highly significant effect (p≤0.01) in both solutions; the forward solutions for the different equations indicated that the R² of these equations depended largely on only two partial regression coefficients (βi) for these markers. For the milk protein trait, four markers showed significant effects: BMS2361 and CSSM66 (p≤0.01), and BMS2626 and OARCP9 (p≤0.05). Likewise, four markers (MCM147, BM1225, INRA006, and INRA133) showed highly significant effects (p≤0.01) in both the backward and forward solutions in association with the milk fat trait. For litter size at birth and at weaning, only one marker each (BM143 (p≤0.01) and RJH1 (p≤0.05), respectively) showed a significant effect in the backward and forward solutions. For the litter weight at birth (LWB) trait, one marker (MCM147) showed a highly significant effect (p≤0.01) and two markers (ILSTS011, CSSM66) showed a significant effect (p≤0.05) in the backward and forward solutions.
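
The forward-selection step of the stepwise regression procedure used here can be sketched as follows; the genotype matrix, EBVs, and significance threshold are synthetic placeholders, not the study's data.

```python
# Minimal sketch of forward stepwise regression linking marker genotypes to
# EBVs. Marker names are reused from the abstract; all data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
markers = ["OARCP9", "AGLA29", "CSSM66", "BM143", "MCM147"]
X = pd.DataFrame(rng.integers(0, 3, size=(100, len(markers))), columns=markers)
# Synthetic EBVs with a real effect from two of the markers
y = 0.8 * X["OARCP9"] + 0.5 * X["AGLA29"] + rng.normal(0, 1, 100)

def forward_select(X, y, alpha=0.05):
    """Repeatedly add the marker with the smallest p-value below alpha."""
    chosen = []
    while True:
        remaining = [m for m in X.columns if m not in chosen]
        if not remaining:
            return chosen
        pvals = {}
        for m in remaining:
            model = sm.OLS(y, sm.add_constant(X[chosen + [m]])).fit()
            pvals[m] = model.pvalues[m]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:  # no remaining marker is significant
            return chosen
        chosen.append(best)

print("markers retained:", forward_select(X, y))
```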

Keywords: microsatellites marker, estimated breeding value, stepwise regression, milk traits

Procedia PDF Downloads 93
757 Integrated Two-Stage Processing of Biomass Conversion to Hydroxymethylfurfural Esters Using Ionic Liquid as Green Solvent and Catalyst: Synthesis of Monoesters

Authors: Komal Kumar, Sreedevi Upadhyayula

Abstract:

In this study, a two-stage process was established for the synthesis of HMF esters using an ionic liquid acid catalyst. Ionic liquid catalysts with different strengths of Brønsted acidity were prepared in the laboratory and characterized using ¹H NMR, FT-IR, and ¹³C NMR spectroscopy. A solid acid catalyst was prepared from the ionic liquid catalyst using the immobilization method. The acidity of the synthesized acid catalysts was measured using the Hammett function and the titration method. Catalytic performance was evaluated for the conversion of biomass to 5-hydroxymethylfurfural (5-HMF) and levulinic acid (LA) in a methyl isobutyl ketone (MIBK)-water biphasic system. Good yields of 5-HMF and LA were found at different MIBK:water compositions; at an MIBK:water ratio of 10:1, a good yield of 5-HMF was observed at a temperature of 150˚C. Upgrading of 5-HMF into monoesters through the reaction of 5-HMF with biomass-derived monoacids was then performed. The ionic liquid catalyst bearing the -SO₃H functional group was found to be more efficient than the solid acid catalyst for both the esterification reaction and the biomass conversion. A good yield of 5-HMF esters with high 5-HMF conversion was obtained at 105˚C using the most active catalyst. In this process, stage A was the hydrothermal conversion of cellulose and monomer into 5-HMF and LA using the acid catalyst, and stage B was the subsequent esterification using a similar acid catalyst. All the 5-HMF monoesters synthesized here can be used in the chemical and pharmaceutical industries and as cross-linkers for adhesives or coatings. A theoretical density functional theory (DFT) study for the optimization of the ionic liquid structure was performed using the Gaussian 09 program to find the minimum-energy configuration of the ionic liquid catalyst.
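
The Hammett acidity measurement mentioned in the abstract follows H₀ = pK(BH⁺) + log₁₀([B]/[BH⁺]), with the indicator ratio typically estimated from UV-vis absorbance. A minimal sketch, with the indicator pK and absorbance values assumed for illustration only:

```python
# Sketch of the Hammett acidity function H0 = pK(BH+) + log10([B]/[BH+]).
# Assumes the protonated indicator does not absorb at the monitored
# wavelength, so [B]/[BH+] = A / (A_unprotonated - A). Values are placeholders.
import math

def hammett_h0(pk_bh, abs_measured, abs_unprotonated):
    """H0 from the absorbance of a basic indicator in the acid solution."""
    ratio = abs_measured / (abs_unprotonated - abs_measured)  # [B]/[BH+]
    return pk_bh + math.log10(ratio)

# e.g. 4-nitroaniline indicator (pK ~ 0.99), assumed absorbance readings
print(f"H0 = {hammett_h0(0.99, 0.35, 1.00):.2f}")
```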

Keywords: biomass conversion, 5-HMF, ionic liquid, HMF ester

Procedia PDF Downloads 251