Search results for: radial basis function networks
8220 Hybrid Wind Solar Gas Reliability Optimization Using Harmony Search under Performance and Budget Constraints
Authors: Meziane Rachid, Boufala Seddik, Hamzi Amar, Amara Mohamed
Abstract:
Today’s energy industry seeks maximum benefit with maximum reliability. In order to achieve this goal, design engineers depend on reliability optimization techniques. This work uses the harmony search (HS) meta-heuristic optimization method to solve the design optimization problem of wind-solar-gas power systems. We consider the case where redundant electrical components are chosen to achieve a desirable level of reliability. The electrical power components of the system are characterized by their cost, capacity and reliability. Reliability is considered in this work as the ability to satisfy the consumer demand, which is represented as a piecewise cumulative load curve. This definition of the reliability index is widely used for power systems. The proposed meta-heuristic seeks the optimal design of series-parallel power systems in which a choice among multiple wind generators, transformers and lines is allowed from a list of products available on the market. Our approach has the advantage of allowing electrical power components with different parameters to be allocated in electrical power systems. To allow fast reliability estimation, a universal moment generating function (UMGF) method is applied. A computer program has been developed to implement the UMGF and the HS algorithm. An illustrative example is presented. Keywords: reliability optimization, harmony search optimization (HSA), universal generating function (UMGF)
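To illustrate the HS meta-heuristic itself, the sketch below implements a minimal harmony search improvisation loop on a toy redundancy-allocation problem. The unit costs, capacities, demand and penalty are hypothetical placeholders; the paper's UMGF-based reliability evaluation against a cumulative load curve is not reproduced here.

```python
import random

def harmony_search(cost, dim, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000):
    """Minimal harmony search: minimizes `cost` over integer design vectors.

    cost   -- objective function (e.g., purchase cost plus a reliability penalty)
    dim    -- number of decision variables (e.g., redundancy level per subsystem)
    bounds -- (low, high) inclusive integer range for every variable
    """
    low, high = bounds
    # Initialize the harmony memory with random designs.
    memory = [[random.randint(low, high) for _ in range(dim)] for _ in range(hms)]
    scores = [cost(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                    # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:                 # pitch adjustment
                    value = min(high, max(low, value + random.choice((-1, 1))))
            else:                                         # random selection
                value = random.randint(low, high)
            new.append(value)
        new_score = cost(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:                     # replace the worst harmony
            memory[worst], scores[worst] = new, new_score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Hypothetical usage: 3 subsystems, 1-5 redundant components each,
# cost = purchase cost + penalty when total capacity falls short of demand.
unit_cost, unit_capacity, demand = [4.0, 6.0, 5.0], [2.0, 3.0, 2.5], 18.0
def cost(design):
    capacity = sum(n * c for n, c in zip(design, unit_capacity))
    purchase = sum(n * c for n, c in zip(design, unit_cost))
    return purchase + 100.0 * max(0.0, demand - capacity)

print(harmony_search(cost, dim=3, bounds=(1, 5)))
```

In the full method, the placeholder cost function above would be replaced by one that penalizes designs whose UMGF-estimated reliability falls below the target.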
Procedia PDF Downloads 576
8219 Formulation Development, Process Optimization and Comparative Study of Poorly Compressible Drugs Ibuprofen, Acetaminophen Using Direct Compression and Top Spray Granulation Technique
Authors: Abhishek Pandey
Abstract:
Ibuprofen and Acetaminophen are widely used as prescription and non-prescription medicines. Ibuprofen is mainly used in the treatment of mild to moderate pain related to headache, migraine, postoperative conditions and in the management of spondylitis, osteoarthritis and rheumatoid arthritis. Acetaminophen is used as an analgesic and antipyretic drug. Ibuprofen has a high tendency of sticking to the punches of a tablet punching machine, while Acetaminophen is not ordinarily compressible into a tablet formulation because Acetaminophen crystals are very hard and brittle in nature and fracture very easily when compressed, producing capping and lamination tablet defects; therefore, the wet granulation method is used to make them compressible. The aim of the study was to prepare Ibuprofen and Acetaminophen tablets by direct compression and the top spray granulation technique. In this investigation, tablets were prepared using directly compressible grade excipients: dibasic calcium phosphate, lactose anhydrous (DCL21) and microcrystalline cellulose (Avicel PH 101). In order to obtain the best or optimized formulation, nine different formulations were generated; among them, batches F7, F8 and F9 showed good results within the acceptable limits. Formulation F7 was selected as the optimized product on the basis of the dissolution study. Further, directly compressible granules of both drugs were prepared using the top spray granulation technique in a fluidized bed processor and compressed. In order to obtain the best product, process optimization was carried out by performing four trials in which parameters such as inlet air temperature, spray rate, peristaltic pump rpm, % LOD, granule properties, blending time and hardness were optimized. Batch T3 was selected as the optimized batch on the basis of physical and chemical evaluation. Finally, formulations prepared by both techniques were compared. Keywords: direct compression, top spray granulation, process optimization, blending time
Procedia PDF Downloads 363
8218 A Rule Adumbrated: Bailment on Terms
Authors: David Gibbs-Kneller
Abstract:
Only parties to a contract can enforce it. This is the privity of the contract. Carriage contracts frequently involve intermediated relationships. While the carrier and cargo-owner will agree on a contract for carriage, there is no privity or consideration between the cargo-owner and third parties. To overcome this, the contract utilizes ‘bailment on terms’ or the rule in Morris. Morris v C W Martin & Sons Ltd is authority for the following: A sub-bailee and bailor may rely on terms of a bailment where the bailor has consented to sub-bailment “on terms”. Bailment on terms can play a significant part in making litigation decisions and determining liability. It is used in standard form contracts and courts have also strived to find consent to bailment on terms in agreements so as to avoid the consequences of privity of contract. However, what this paper exposes is the false legal basis for this model. Lord Denning gave an account adumbrated of the law of bailments to justify the rule in Morris. What Lord Denning was really doing was objecting to the doctrine of privity. To do so, he wrongly asserted there was a lacuna in law that meant third parties could not avail themselves upon terms of a contract. Next, he provided a false analogy between purely contractual rights and possessory liens. Finally, he gave accounts of authorities to say they supported the rule in Morris when they did not. Surprisingly, subsequent case law on the point has not properly engaged with this reasoning. The Pioneer Container held that since the rule in Morris lay in bailments, the decision is not dependent on the doctrine of privity. Yet the basis for this statement was Morris. Once these reasons have been discounted, all bailment on terms rests on is the claim that the law of bailments is an independent source of law. Bailment on terms should not be retained, for it is contrary to established principles in the law of property, tort, and contract. That undermines the certainty of those principles by risking their collapse because there is nothing that keeps bailment on terms within the confines of bailments only. As such, bailment on terms is not good law and should not be used in standard form contracts or by the courts as a means of determining liability. If bailment on terms is a pragmatic rule to retain, it is recommended that rules governing carriage contracts should be amended.Keywords: bailment, carriage of goods, contract law, privity
Procedia PDF Downloads 208
8217 Age Estimation from Teeth among North Indian Population: Comparison and Reliability of Qualitative and Quantitative Methods
Authors: Jasbir Arora, Indu Talwar, Daisy Sahni, Vidya Rattan
Abstract:
Introduction: Age estimation is a crucial step in establishing the identity of a person, whether deceased or alive. In adults, age can be estimated on the basis of six regressive changes in teeth (attrition, secondary dentine, dentine transparency, root resorption, cementum apposition and periodontal disease), qualitatively using a scoring system and quantitatively by a micrometric method. The present research was designed to establish the reliability of the qualitative (method 1) and quantitative (method 2) methods of age estimation among North Indians and to compare the efficacy of these two methods. Method: 250 single-rooted extracted teeth (18-75 yrs.) were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Before extraction, the periodontal score of each tooth was noted. Labiolingual sections were prepared and examined under a light microscope for regressive changes. Each parameter was scored using Gustafson’s 0-3 point score system (qualitative), and the total score was calculated. For the quantitative method, each regressive change was measured in the form of 18 micrometric parameters under a microscope with the help of a measuring eyepiece. Age was estimated using linear and multiple regression analysis in Gustafson’s method and Kedici’s method, respectively. Estimated age was compared with actual age on the basis of absolute mean error. Results: In the pooled data, by Gustafson’s method, a significant correlation (r = 0.8) was observed between total score and actual age. The total score generated an absolute mean error of ±7.8 years. For Kedici’s method, a correlation coefficient of r = 0.5 (p < 0.01) was observed between the eighteen micrometric parameters and known age. Using the multiple regression equation, age was estimated, and the absolute mean error was found to be ±12.18 years. Conclusion: Gustafson’s (qualitative) method was found to be a better predictor for age estimation among North Indians. Keywords: forensic odontology, age estimation, North India, teeth
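The qualitative calibration described above amounts to a linear regression of known age on the total Gustafson score; a minimal sketch with hypothetical score/age pairs (not the study's data) is shown below, assuming NumPy is available.

```python
# Hypothetical Gustafson-style calibration: linear regression of age on total score.
import numpy as np

scores = np.array([3, 5, 6, 8, 9, 11, 12, 14])       # total regressive-change scores (0-18)
ages   = np.array([22, 30, 34, 45, 50, 58, 63, 72])   # known ages in years (placeholders)

slope, intercept = np.polyfit(scores, ages, deg=1)    # age = intercept + slope * score
predicted = intercept + slope * scores
abs_mean_error = np.mean(np.abs(predicted - ages))    # analogous to the ±7.8 yr figure

print(f"age ~ {intercept:.1f} + {slope:.1f} * score, mean absolute error {abs_mean_error:.1f} yr")
```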
Procedia PDF Downloads 242
8216 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure imposes limitations on the % hydrogen that can be burned before the end-use technologies are replaced. In many systems, the largest number of end-use technologies are small-scale but numerous appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable and what imposes that limitation. This study seeks to explore a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We will present tests from a burner designed for space heating and optimized for natural gas as an increasing % of hydrogen blends (increasing from 25%) were burned and explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability and general operational acceptability. Results will show emissions, Temperature, and flame length as a function of thermal load and percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition to this, we present data on two factors that may be absolutes in determining allowable hydrogen percentages. The first of these is flame blowback. Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65% mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We will describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive ‘screech’. It will be important for equipment designers and users to find ways to avoid this or mitigate it if performance is to be deemed acceptable.Keywords: blends, operational, domestic appliances, future system operation.
Procedia PDF Downloads 23
8215 Prevalence of Clostridium perfringens β2-Toxin in Type A Isolates of Sheep and Goats
Authors: Mudassar Mohiuddin, Zahid Iqbal
Abstract:
Introduction: Clostridium perfringens is an important pathogen responsible for causing enteric diseases in both human and animals. The bacteria produce several toxins. These toxins play vital role in the pathogenesis of various fatal enteric diseases and are classified into five types, on the basis of the differential production of Alpha, Beta, Epsilon and Iota toxins. In addition to the so-called major toxins, there are other toxins like beta2 toxin, produced by some strains of C. perfringens which may play a role in the pathogenesis of disease. Aim of the study: In this study a multiplex PCR assay was developed and used for detection of cpb2 gene to identify the Beta2 harboring isolates among different types of C. perfringens. Objectives: The primary objective of this study was to identify the prevalence of β2-toxin gene in local isolates of Clostridium perfringens. Methodology: This was an experimental study. Random sampling technique was used. A total of 97 sheep and goats were included in this study. All were Pakistani local breeds. The samples were collected during the period from Sep, 2014 to Mar, 2015 from selected districts of Punjab province (Pakistan). Faecal samples were cultured in cooked meat media. The identification of Clostridium perfringens was made on the basis of biochemical tests. Multiplex PCR was performed to identify the toxin genes. Results: A total of 43 C. perfringens isolates were genotyped using multiplex PCR assay. The gene encoding C. perfringens β2-toxin (cpb2) was present in more than 50% of the isolates genotyped. However, the prevalence of this gene varied between sheep and goat isolates. Conclusion: The present study suggests the high occurrence of C. perfringens b2-toxin (cpb2) in the local isolates of Pakistan. As β2-toxin is present in both healthy and diseased animals, so further studies are suggested to establish the role of β2-toxin in pathogenesis of the clostridial enteric diseases.Keywords: beta 2 toxin gene, clostridium perfringens, enteric diseases, goats, multiplex PCR, sheep
Procedia PDF Downloads 462
8214 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading
Authors: Jerome Joshi
Abstract:
The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, the feature of this model promotes its use in financial markets, more prominently in high frequency trading. The trading system can be broadly classified into exchange, market makers or intermediary traders and fundamental traders. The exchange is where the action of the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders) poses a difficulty while modelling. It involves the participants to seek the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, the trader must be at the top of the order book quite frequently by executing or processing multiple trades simultaneously. This would require highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is also not best for the trader, since it was the reason for the outbreak of the ‘Hot – Potato Effect,’ which in turn demands for a better and more efficient model. The characteristics of the model should be such that it should be flexible and have diverse applications. Therefore, a model which has its application in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation so that it can be further extended and adapted for future research as well as be equipped with certain tools so that it can be perfectly used in the field of finance. In this case, the Stochastic Pi Calculus model seems to be an ideal fit for financial applications, owing to its expertise in the field of biology. It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided the application of this model is further extended. This model would focus on solving the problem which led to the ‘Flash Crash’ which is the ‘Hot –Potato Effect.’ The model consists of small sub-systems, which can be integrated to form a large system. It is designed in way such that the behavior of ‘noise traders’ is considered as a random process or noise in the system. While modelling, to get a better understanding of the problem, a broader picture is taken into consideration with the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, ‘Flash Crash,’ ‘Hot-Potato Effect,’ evaluation of orders and time delay in further detail. For the future, there is a need to focus on the calibration of the module so that they would interact perfectly with other modules. This model, with its application extended, would provide a basis for researchers for further research in the field of finance and computing.Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus
Procedia PDF Downloads 77
8213 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable to build extremely large telescopes (ELT) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. The pupil segmentation nevertheless comes with an obvious difficulty: to co-phase the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NN) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil from the focal-plane image directly, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime in the visible wavelength, typically a wavefront error below lambda/50 is required. The telescope focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e. the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows some promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results allow further study for higher aberrations and noise. Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
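A minimal sketch of the kind of network described above is given below (PyTorch): a small VGG-style CNN that regresses segment phasing values directly from a focal-plane PSF image. The segment count, image size and layer widths are assumptions for illustration, not the architecture of the paper.

```python
# Minimal CNN regressor: focal-plane PSF image -> segment piston (phasing) values.
import torch
import torch.nn as nn

N_SEGMENTS = 6          # assumed number of controlled segment pistons
IMG = 64                # assumed PSF crop size in pixels

class PhasingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, N_SEGMENTS),            # regression output (e.g., nm of piston)
        )

    def forward(self, psf):
        return self.head(self.features(psf))

model = PhasingNet()
psf_batch = torch.rand(4, 1, IMG, IMG)             # stand-in for simulated PSFs
target = torch.rand(4, N_SEGMENTS)                 # stand-in for true piston values
loss = nn.MSELoss()(model(psf_batch), target)      # train by minimizing RMS phasing error
loss.backward()
print(loss.item())
```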
Procedia PDF Downloads 104
8212 Rational Bureaucracy and E-Government: A Philosophical Study of Universality of E-Government
Authors: Akbar Jamali
Abstract:
Hegel is the first great political philosopher who specifically contemplates on bureaucracy. For Hegel bureaucracy is the function of the state. Since state, essentially is a rational organization, its function; namely, bureaucracy must be rational. Since, what is rational is universal; Hegel had to explain how the bureaucracy could be understood as universal. Hegel discusses bureaucracy in his treatment of ‘executive power’. He analyses modern bureaucracy as a form of political organization, its constituent members, and its relation to the social environment. Therefore, the essence of bureaucracy in Hegel’s philosophy is the implementation of law and rules. Hegel argues that unlike the other social classes that are particular because they look for their own private interest, bureaucracy as a class is a ‘universal’ because their orientation is the interest of the state. State for Hegel is essentially rational and universal. It is the actualization of ‘objective Spirit’. Marx criticizes Hegel’s argument on the universality of state and bureaucracy. For Marx state is equal to bureaucracy, it constitutes a social class that based on the interest of bourgeois class that dominates the society and exploits proletarian class. Therefore, the main disagreement between these political philosophers is: whether the state (bureaucracy) is universal or particular. Growing e-government in modern state as an important aspect of development leads us to contemplate on the particularity and universality of e-government. In this article, we will argue that e-government essentially is universal. E-government, in itself, is impartial; therefore, it cannot be particular. The development of e-government eliminates many side effects of the private, personal or particular interest of the individuals who work as bureaucracy. Finally, we will argue that more a state is developed more it is universal. Therefore, development of e-government makes the state a more universal and affects the modern philosophical debate on the particularity or universality of bureaucracy and state.Keywords: particularity, universality, rational bureaucracy, impartiality
Procedia PDF Downloads 249
8211 Optimization of Air Pollution Control Model for Mining
Authors: Zunaira Asif, Zhi Chen
Abstract:
The sustainable measures on air quality management are recognized as one of the most serious environmental concerns in the mining region. The mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy by developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: production of metal should not exceed the available resources, and air quality should meet the standard criteria for the pollutant. The applicability of this model is explored through a case study of an open pit metal mine, Utah, USA. This method simultaneously uses meteorological data as a dispersion transfer function to support the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are accomplished by Monte Carlo simulation. Reasonable results have been obtained to select the optimized treatment technology for PM2.5, PM10, NOx, and SO2. Additional comparison analysis shows that the baghouse is the least-cost option as compared to the electrostatic precipitator and wet scrubbers for particulate matter, whereas non-selective catalytic reduction and dry flue gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners to reduce these pollutants at a marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities. Keywords: air pollution, linear programming, mining, optimization, treatment technologies
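The linear-programming core of such a model can be illustrated with a small sketch. The example below, using SciPy, assumes hypothetical removal costs, an ambient-standard constraint and device capacity limits; it omits the dispersion transfer function and Monte Carlo treatment of meteorology described in the abstract.

```python
# Minimal LP sketch: choose treatment levels that minimize cost subject to an
# emission-removal requirement and per-device capacity limits (all values assumed).
from scipy.optimize import linprog

# Decision variables: tonnes of PM removed by [baghouse, electrostatic precipitator, wet scrubber]
cost = [40.0, 55.0, 70.0]                  # $ per tonne removed (assumed)

# Total removal must be at least 120 tonnes to meet the ambient standard:
#   -x1 - x2 - x3 <= -120   (linprog uses "<=" constraints)
A_ub = [[-1.0, -1.0, -1.0]]
b_ub = [-120.0]

# Each device is limited by its installed capacity (assumed).
bounds = [(0, 80), (0, 60), (0, 50)]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)   # cheapest mix of devices and its total cost
```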
Procedia PDF Downloads 208
8210 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)
Authors: Ahmed E. Hodaib, Mohamed A. Hashem
Abstract:
In engineering applications, a design has to be as fully perfect as possible in some defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function called “objective function” that is required to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called “search space” and computing the values of the objective function for these input values. It becomes more complex when we have more than one objective for our design. As an example for Multi-Objective Optimization Problem (MOP): A structural design that aims to minimize weight and maximize strength. In such case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting two objective functions for the best cases. At this point, a designer should make a decision to choose the point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we will discuss the Evolutionary Algorithms (EA) which are widely used with Multi-objective Optimization Problems due to their robustness, simplicity, suitability to be coupled and to be parallelized. Evolutionary algorithms are developed to guarantee the convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial and error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space and then the solution progresses towards the optimal point by using operators such as Selection, Combination, Cross-over and/or Mutation. These operators are applied to the old solutions “parents” so that new sets of design variables called “children” appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analyses of various engineering systems, such as aircraft, turbo-machinery, and auto-motives. Coupling of Computational Fluid Dynamics “CFD” and Multi-Objective Evolutionary Algorithms “MOEA” has become substantial in aerospace engineering applications, such as in aerodynamic shape optimization and advanced turbo-machinery design.Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization
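As a concrete illustration of the loop described above (selection, cross-over, mutation, Pareto front), here is a minimal sketch on a two-objective toy problem. It is not coupled to a CFD solver and is not NSGA-II; the objective functions are hypothetical stand-ins for expensive aerodynamic evaluations.

```python
# Toy multi-objective evolutionary loop with a non-dominated (Pareto) front.
import random

def objectives(x):
    # Toy trade-off: f1 favours small x, f2 favours x near 2 (both minimized).
    return (x * x, (x - 2.0) ** 2)

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_front(pop):
    scored = [(x, objectives(x)) for x in pop]
    return [x for x, fx in scored
            if not any(dominates(fy, fx) for _, fy in scored)]

pop = [random.uniform(-5, 5) for _ in range(40)]          # random initial "designs"
for _ in range(100):                                      # generations
    children = []
    for _ in range(len(pop)):
        p1, p2 = random.sample(pop, 2)                    # selection
        child = 0.5 * (p1 + p2)                           # cross-over (blend)
        child += random.gauss(0.0, 0.1)                   # mutation
        children.append(child)
    # Environmental selection: keep non-dominated points first, fill up randomly.
    merged = pop + children
    front = pareto_front(merged)
    rest = [x for x in merged if x not in front]
    pop = (front + random.sample(rest, max(0, len(pop) - len(front))))[:len(pop)]

print(sorted(pareto_front(pop))[:5])                      # sample of the final front
```

In a CFD-coupled setting, `objectives` would call the flow solver (e.g., drag and lift evaluations), which is why the population-based, parallelizable character of EAs is attractive.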
Procedia PDF Downloads 256
8209 Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals
Authors: Linghui Meng, James Atlas, Deborah Munro
Abstract:
There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals continues to be challenging. This study considers the implementation of machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality and adaptivity. Using logistic regression, a machine learning classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two time-series feature extraction algorithms: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted using varying numbers of EMG channels, from one to eight, to determine the impact of channel quantity on classification accuracy. The results suggest that although both algorithms can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information for both accuracy and computational efficiency. In addition, although more channels of EMG signals provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower numbers of channels. The findings also underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems. Keywords: EMG, machine learning, prosthetic control, electromyographic prosthetics, hand gesture classification, CNN, convolutional neural networks, TSFE, time series feature extraction, channel count, logistic regression, Ninapro, classifiers
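A minimal sketch of this kind of pipeline is shown below: hand-crafted time-series features per EMG channel feeding a scikit-learn logistic-regression classifier. The synthetic windows stand in for Ninapro data, and the feature set is illustrative rather than the TSFE or CNN extractors used in the study.

```python
# Toy two-gesture EMG classification with simple features and logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, n_channels, n_samples = 200, 4, 400      # assumed window layout

def features(window):
    # Simple per-channel features: mean absolute value, RMS, zero-crossing count.
    mav = np.mean(np.abs(window), axis=1)
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    zc = np.sum(np.diff(np.sign(window), axis=1) != 0, axis=1)
    return np.concatenate([mav, rms, zc])

# Synthetic two-gesture dataset: gesture 1 has slightly stronger muscle activity.
labels = rng.integers(0, 2, n_windows)
emg = rng.normal(0.0, 1.0, (n_windows, n_channels, n_samples))
emg[labels == 1] *= 1.5

X = np.array([features(w) for w in emg])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Varying `n_channels` in such a sketch mimics the study's channel-count trials, since the feature vector grows with the number of channels.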
Procedia PDF Downloads 31
8208 The Effect of Particulate Matter on Cardiomyocyte Apoptosis Through Mitochondrial Fission
Authors: Tsai-chun Lai, Szu-ju Fu, Tzu-lin Lee, Yuh-Lien Chen
Abstract:
There is much evidence that exposure to fine particulate matter (PM) from air pollution increases the risk of cardiovascular morbidity and mortality. According to previous reports, PM in the air enters the respiratory tract, contacts the alveoli, and enters the blood circulation, leading to the progression of cardiovascular disease. PM pollution may also lead to cardiometabolic disturbances, increasing the risk of cardiovascular disease. The effects of PM on cardiac function and mitochondrial damage are currently unknown. We used mice and rat cardiomyocytes (H9c2) as animal and in vitro cell models, respectively, to simulate an air pollution environment using PM. These results indicate that the apoptosis-related factor PUMA (p53 upregulated modulator of apoptosis) is increased in mice treated with PM. Apoptosis was aggravated in cardiomyocytes treated with PM, as measured by TUNEL assay and Annexin V/PI. Western blot results showed that CASPASE3 was significantly increased and BCL2 (B-cell lymphoma 2) was significantly decreased under PM treatment. Concurrent exposure to PM increased mitochondrial reactive oxygen species (ROS) production, as measured by MitoSOX Red staining. Furthermore, MitoTracker staining showed that PM treatment significantly shortened mitochondrial length, indicating mitochondrial fission. The expression of the mitochondrial fission-related proteins p-DRP1 (phosphorylated dynamin-related protein 1) and FIS1 (mitochondrial fission 1 protein) was significantly increased. Based on these results, exposure to PM worsens mitochondrial function and leads to cardiomyocyte apoptosis. Keywords: particulate matter, cardiomyocyte, apoptosis, mitochondria
Procedia PDF Downloads 104
8207 Computational Fluid Dynamic Modeling of Mixing Enhancement by Stimulation of Ferrofluid under Magnetic Field
Authors: Neda Azimi, Masoud Rahimi, Faezeh Mohammadi
Abstract:
Computational fluid dynamics (CFD) simulation was performed to investigate the effect of ferrofluid stimulation on the hydrodynamic and mass transfer characteristics of two immiscible liquid phases in a Y-micromixer. The main purpose of this work was to develop a numerical model that is able to simulate the hydrodynamics of ferrofluid flow under a magnetic field and determine its effect on mass transfer characteristics. A uniform external magnetic field was applied perpendicular to the flow direction. The volume of fluid (VOF) approach was used for simulating the multiphase flow of the ferrofluid and the two immiscible liquid flows. The geometric reconstruction scheme (Geo-Reconstruct) based on piecewise linear interpolation (PLIC) was used for reconstruction of the interface in the VOF approach. The mass transfer rate was defined via an equation as a function of the mass concentration gradient of the transported species and added to the phase interaction panel using a user-defined function (UDF). The magnetic field was solved numerically by the Fluent MHD module based on the magnetic induction equation method. CFD results were validated against experimental data and good agreement was achieved; the maximum relative error for extraction efficiency was about 7.52%. It was shown that ferrofluid actuation by a magnetic field can be considered an efficient mixing agent for liquid-liquid two-phase mass transfer in microdevices. Keywords: CFD modeling, hydrodynamic, micromixer, ferrofluid, mixing
Procedia PDF Downloads 196
8206 Modeling the International Economic Relations Development: The Prospects for Regional and Global Economic Integration
Authors: M. G. Shilina
Abstract:
The interstate economic interaction phenomenon is complex. ‘Economic integration’, as one of its types, can be explored through the prism of international law, the theories of the world economy, politics and international relations. The most objective study of the phenomenon requires a comprehensive multifactoral approach. In new geopolitical realities, the problems of coexistence and possible interconnection of various mechanisms of interstate economic interaction are actively discussed. Currently, the Eurasian continent states support the direction to economic integration. At the same time, the existing international economic law fragmentation in Eurasia is seen as the important problem. The Eurasian space is characterized by a various types of interstate relations: international agreements (multilateral and bilateral), and a large number of cooperation formats (from discussion platforms to organizations aimed at deep integration). For their harmonization, it is necessary to have a clear vision to the phased international economic relations regulation options. In the conditions of rapid development of international economic relations, the modeling (including prognostic) can be optimally used as the main scientific method for presenting the phenomenon. On the basis of this method, it is possible to form the current situation vision and the best options for further action. In order to determine the most objective version of the integration development, the combination of several approaches were used. The normative legal approach- the descriptive method of legal modeling- was taken as the basis for the analysis. A set of legal methods was supplemented by the international relations science prognostic methods. The key elements of the model are the international economic organizations and states' associations existing in the Eurasian space (the Eurasian Economic Union (EAEU), the European Union (EU), the Shanghai Cooperation Organization (SCO), Chinese project ‘One belt-one road’ (OBOR), the Commonwealth of Independent States (CIS), BRICS, etc.). A general term for the elements of the model is proposed - the interstate interaction mechanisms (IIM). The aim of building a model of current and future Eurasian economic integration is to show optimal options for joint economic development of the states and IIMs. The long-term goal of this development is the new economic and political space, so-called the ‘Great Eurasian Community’. The process of achievement this long-term goal consists of successive steps. Modeling the integration architecture and dividing the interaction into stages led us to the following conclusion: the SCO is able to transform Eurasia into a single economic space. Gradual implementation of the complex phased model, in which the SCO+ plays a key role, will allow building an effective economic integration for all its participants, to create an economically strong community. The model can have practical value for politicians, lawyers, economists and other participants involved in the economic integration process. A clear, systematic structure can serve as a basis for further governmental action.Keywords: economic integration, The Eurasian Economic Union, The European Union, The Shanghai Cooperation Organization, The Silk Road Economic Belt
Procedia PDF Downloads 150
8205 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity of information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors at a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are signal denoising, data compression and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. In this article, we propose two constructions of robust codes. The first class of robust codes is based on the multiplicative inverse in a finite field. In the second construction, the redundancy part is the cube of the information part. Also, this paper investigates the characteristics of the proposed robust and linear codes. Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
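To illustrate the second construction (redundancy equal to the cube of the information part), the sketch below computes and checks the nonlinear redundancy over a small prime field. An actual implementation would more likely work in a binary extension field GF(2^m); the prime-field choice here is only an assumption made for readability.

```python
# Toy robust-code sketch: codeword = (information x, redundancy r = x^3 in GF(p)).
P = 257   # assumed small prime defining GF(p)

def encode(x):
    """Return the codeword (information part, redundancy part = x^3 mod p)."""
    return (x % P, pow(x, 3, P))

def check(word):
    """Accept a word (x, r) only if r really equals x^3 in GF(p). Only errors that
    preserve this nonlinear relation can be masked, which is what bounds the
    error-masking probability of a robust code."""
    x, r = word
    return pow(x, 3, P) == r % P

x, r = encode(123)
print(check((x, r)))             # True: valid codeword
print(check((x ^ 5, r)))         # typically False: an injected error is detected
```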
Procedia PDF Downloads 489
8204 Numerical Analysis of Core-Annular Blood Flow in Microvessels at Low Reynolds Numbers
Authors: L. Achab, F. Iachachene
Abstract:
In microvessels, red blood cells (RBCs) exhibit a tendency to migrate towards the vessel center, establishing a core-annular flow pattern. The core region, marked by a high concentration of RBCs, is governed by a significantly non-Newtonian viscosity. Conversely, the annular layer, composed of cell-free plasma, is characterized by a Newtonian low viscosity. This property enables the plasma layer to act as a lubricant for the vessel walls, efficiently reducing resistance to the movement of blood cells. In this study, we investigate the factors influencing blood flow in microvessels and the thickness of the annular plasma layer using an immiscible-fluids approach in a 2D axisymmetric geometry. The governing equations of an incompressible unsteady flow are solved numerically through the volume of fluid (VOF) method to track the interface between the two immiscible fluids. To model blood viscosity in the core region, we adopt the Quemada constitutive law, which accurately captures the shear-thinning blood rheology over a wide range of shear rates. Our results are then compared to an established theoretical approach under identical flow conditions, particularly concerning the radial velocity profile and the thickness of the annular plasma layer. The simulation findings for low Reynolds numbers demonstrate a notable agreement with the theoretical solution, emphasizing the pivotal role of the blood’s rheological properties in the core region in determining the thickness of the annular plasma layer. Keywords: core-annular flows, microvessels, Quemada model, plasma layer thickness, volume of fluid method
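For reference, a commonly cited form of the Quemada law for the RBC core is sketched below in generic notation; the symbols and parameter names are not necessarily those used in the paper.

```latex
% A commonly cited form of the Quemada constitutive law (generic notation):
%   \mu_p : plasma (suspending fluid) viscosity
%   \phi  : RBC volume fraction (hematocrit)
%   \dot{\gamma}, \dot{\gamma}_c : shear rate and a critical shear rate
%   k_0, k_\infty : intrinsic viscosity limits at zero and infinite shear
\mu(\dot{\gamma}) = \mu_p \left[ 1 - \tfrac{1}{2}\, k(\dot{\gamma})\, \phi \right]^{-2},
\qquad
k(\dot{\gamma}) = \frac{k_0 + k_\infty \sqrt{\dot{\gamma}/\dot{\gamma}_c}}
                       {1 + \sqrt{\dot{\gamma}/\dot{\gamma}_c}}
```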
Procedia PDF Downloads 56
8203 An Analysis of Twitter Use of Slow Food Movement in the Context of Online Activism
Authors: Kubra Sultan Yuzuncuyil, Aytekin İsman, Berkay Bulus
Abstract:
With the development of information and communication technologies, the forms of molding public opinion have changed. In the presence of the Internet, the notion of activism has been endowed with digital codes. Activists have engaged the Internet in their campaigns and in the process of creating collective identity. Activist movements have been incorporating new communication technologies into their goals and opposition. Creating and managing activism through the Internet is called online activism. In this vein, the Slow Food Movement, which emerged within the philosophy of defending regional, fair and sustainable food, has been engaging the Internet in its activist campaign. This movement supports the idea that a new food system which allows strong connections between plate and planet is possible. In order to make its voice heard, it has utilized social networks and developed particular skills within the framework of online activism. This study analyzes the online activist skills the Slow Food Movement (SFM) develops and attempts to measure their effectiveness. To achieve this aim, it adopts the model proposed by Sivitandies and Shah and conducts both qualitative and quantitative content analysis of the social network use of the Slow Food Movement. In this regard, the sample is chosen as the official profile and analyzed over a three-month period, March-May 2017. It was found that SFM develops particular techniques that appeal to the model of Sivitandies and Shah. The prominent skills in this regard were hyperlink abbreviation and the use of multimedia elements. On the other hand, there are inadequacies in hashtag and interactivity use. The importance of this study is that it highlights and discusses how online activism can be incorporated into a social movement. It also reveals the current online activism skills of SFM and their effectiveness. Furthermore, it makes suggestions to enhance the related abilities and strengthen its voice on social networks. Keywords: slow food movement, Twitter, internet, online activism
Procedia PDF Downloads 281
8202 Plastic Waste Sorting by the People of Dakar
Authors: E. Gaury, P. Mandausch, O. Picot, A. R. Thomas, L. Veisblat, L. Ralambozanany, C. Delsart
Abstract:
In Dakar, demographic and spatial growth was accompanied by a 50% increase in household waste between 1988 and 2008 in the city. In addition, a change in the nature of household waste was observed between 1990 and 2007. The share of plastic increased by 15% between 2004 and 2007 in Dakar. Plastics represent the seventh category of household waste, the most produced per year in Senegal. The share of plastic in household and similar waste is 9% in Senegal. Waste management in the city of Dakar is a complex process involving a multitude of formal and informal actors with different perceptions and objectives. The objective of this study was to understand the motivations that could lead to sorting action, as well as the perception of plastic waste sorting within the Dakar population (households and institutions). The problematic of this study was as follows: what may be the factors playing a role in the sorting action? In an attempt to answer this, two approaches have been developed: (1) An exploratory qualitative study by semi-structured interviews with two groups of individuals concerned by the sorting of plastic waste: on the one hand, the experts in charge of waste management and on the other the households-producers of waste plastics. This study served as the basis for formulating the hypotheses and thus for the quantitative analysis. (2) A quantitative study using a questionnaire survey method among households producing plastic waste in order to test the previously formulated hypotheses. The objective was to have quantitative results representative of the population of Dakar in relation to the behavior and the process inherent in the adoption of the plastic waste sorting action. The exploratory study shows that the perception of state responsibility varies between institutions and households. Public institutions perceive this as a shared responsibility because the problem of plastic waste affects many sectors (health, environmental education, etc.). Their involvement is geared more towards raising awareness and educating young people. As state action is limited, the emergence of private companies in this sector seems logical as they are setting up collection networks to develop a recycling activity. The state plays a moral support role in these activities and encourages companies to do more. The study of the understanding of the action of sorting plastic waste by the population of Dakar through a quantitative analysis was able to demonstrate the attitudes and constraints inherent in the adoption of plastic waste sorting.Cognitive attitude, knowledge, and visible consequences have been shown to correlate positively with sorting behavior. Thus, it would seem that the population of Dakar is more sensitive to what they see and what they know to adopt sorting behavior.It has also been shown that the strongest constraints that could slow down sorting behavior were the complexity of the process, too much time and the lack of infrastructure in which to deposit plastic waste.Keywords: behavior, Dakar, plastic waste, waste management
Procedia PDF Downloads 95
8201 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks
Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer
Abstract:
New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA) unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) showed limited efficacy in interpreting chemical footprints due to large non-linear relationships between predictor and predictand in a large sample set, likely due to honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data for extraction of biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames. Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics
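A minimal sketch of a 1D-CNN regressor of this kind is given below (PyTorch). The band count, layer widths and synthetic spectra are assumptions for illustration, not the architecture or data of the study.

```python
# Minimal 1D-CNN: a per-pixel hyperspectral spectrum -> a honey quality value.
import torch
import torch.nn as nn

N_BANDS = 224    # assumed number of spectral bands per pixel

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(32 * (N_BANDS // 4), 64), nn.ReLU(),
    nn.Linear(64, 1),                      # regression output: honey quality score
)

spectra = torch.rand(8, 1, N_BANDS)        # batch of 8 stand-in spectra
targets = torch.rand(8, 1)                 # stand-in quality values (e.g., potency)
loss = nn.MSELoss()(model(spectra), targets)
loss.backward()
print(loss.item())
```

The same backbone with a softmax head would serve the mono-floral versus multi-floral classification task mentioned above.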
Procedia PDF Downloads 139
8200 Assessment of Psychological Needs and Characteristics of Elderly Population for Developing Information and Communication Technology Services
Authors: Seung Ah Lee, Sunghyun Cho, Kyong Mee Chung
Abstract:
Rapid population aging became a worldwide demographic phenomenon due to rising life expectancy and declining fertility rates. Considering the current increasing rate of population aging, it is assumed that Korean society enters into a ‘super-aged’ society in 10 years, in which people aged 65 years or older account for more than 20% of entire population. In line with this trend, ICT services aimed to help elderly people to improve the quality of life have been suggested. However, existing ICT services mainly focus on supporting health or nursing care and are somewhat limited to meet a variety of specialized needs and challenges of this population. It is pointed out that the majority of services have been driven by technology-push policies. Given that the usage of ICT services greatly vary on individuals’ socio-economic status (SES), physical and psychosocial needs, this study systematically categorized elderly population into sub-groups and identified their needs and characteristics related to ICT usage in detail. First, three assessment criteria (demographic variables including SES, cognitive functioning level, and emotional functioning level) were identified based on previous literature, experts’ opinions, and focus group interview. Second, survey questions for needs assessment were developed based on the criteria and administered to 600 respondents from a national probability sample. The questionnaire consisted of 67 items concerning demographic information, experience on ICT services and information technology (IT) devices, quality of life and cognitive functioning, etc. As the result of survey, age (60s, 70s, 80s), education level (college graduates or more, middle and high school, less than primary school) and cognitive functioning level (above the cut-off, below the cut-off) were considered the most relevant factors for categorization and 18 sub-groups were identified. Finally, 18 sub-groups were clustered into 3 groups according to following similarities; computer usage rate, difficulties in using ICT, and familiarity with current or previous job. Group 1 (‘active users’) included those who with high cognitive function and educational level in their 60s and 70s. They showed favorable and familiar attitudes toward ICT services and used the services for ‘joyful life’, ‘intelligent living’ and ‘relationship management’. Group 2 (‘potential users’), ranged from age of 60s to 80s with high level of cognitive function and mostly middle to high school graduates, reported some difficulties in using ICT and their expectations were lower than in group 1 despite they were similar to group 1 in areas of needs. Group 3 (‘limited users’) consisted of people with the lowest education level or cognitive function, and 90% of group reported difficulties in using ICT. However, group 3 did not differ from group 2 regarding the level of expectation for ICT services and their main purpose of using ICT was ‘safe living’. This study developed a systematic needs assessment tool and identified three sub-groups of elderly ICT users based on multi-criteria. It is implied that current cognitive function plays an important role in using ICT and determining needs among the elderly population. Implications and limitations were further discussed.Keywords: elderly population, ICT, needs assessment, population aging
Procedia PDF Downloads 143
8199 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimization path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each arc has a new travel time weight which takes value 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new directed arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. To make a comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods. Keywords: quality of service, reliability, transportation network, travel time
Procedia PDF Downloads 221
8198 Identifying Dynamic Structural Parameters of Soil-Structure System Based on Data Recorded during Strong Earthquakes
Authors: Vahidreza Mahmoudabadi, Omid Bahar, Mohammad Kazem Jafari
Abstract:
In many applied engineering problems, structural analysis is usually conducted by assuming a rigid bed, while including the effect of bed flexibility can significantly affect the structural response. This article focuses on investigating and evaluating the effects of considering a soil-structure system on the dynamic characteristics of a steel structure with respect to elastic and inelastic behavior. The structural accelerations recorded on different floors during Taiwan’s strong Chi-Chi earthquake were our evaluation criteria. The structure in question is an eight-story steel bending frame designed using a direct displacement-based method assuring weak-beam, strong-column behavior. The results indicated that different identification methods, i.e. the inverse Fourier transform or transfer functions, are capable of determining some of the dynamic parameters of the structure precisely, rather than evaluating all of them at once (mode frequencies, mode shapes, structural damping, structural rigidity, etc.). Response evaluation based on the input and output data elucidated that the structure’s first mode is not significantly affected, even when considering the soil-structure interaction effect, but the higher modes are changed. Also, it was found that the response transfer function of the different stories, in which plastic hinges have occurred in the structural components, provides similar results. Keywords: bending steel frame structure, dynamic characteristics, displacement-based design, soil-structure system, system identification
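As an illustration of transfer-function-based identification, the sketch below estimates an H1 frequency response between a base (input) and a floor (output) acceleration with SciPy and picks the resonant frequency. The synthetic single-mode signals are placeholders for the recorded Chi-Chi accelerations.

```python
# H1 transfer-function estimate between input (base) and output (floor) accelerations.
import numpy as np
from scipy import signal

fs = 100.0                                  # assumed sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)

base = rng.normal(0.0, 1.0, t.size)         # stand-in for ground/base acceleration
# Stand-in floor response: a lightly damped 2 Hz mode driven by the base motion.
sos = signal.butter(2, [1.8, 2.2], btype="bandpass", fs=fs, output="sos")
floor = 5.0 * signal.sosfilt(sos, base) + 0.1 * rng.normal(0.0, 1.0, t.size)

f, Pxy = signal.csd(base, floor, fs=fs, nperseg=1024)    # cross-spectrum
_, Pxx = signal.welch(base, fs=fs, nperseg=1024)         # input auto-spectrum
H1 = Pxy / Pxx                                           # H1 transfer-function estimate

peak = f[np.argmax(np.abs(H1))]
print(f"identified frequency ~ {peak:.2f} Hz")           # close to the 2 Hz mode
```

With real multi-story records, the same estimator applied floor by floor yields the mode frequencies and, from peak widths, estimates of modal damping.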
Procedia PDF Downloads 503
8197 Shape Management Method of Large Structure Based on Octree Space Partitioning
Authors: Gichun Cha, Changgil Lee, Seunghee Park
Abstract:
The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is lacking because the technology has only recently been attempted. Terrestrial laser scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction or maintenance of civil infrastructure. TLS produces a huge amount of point cloud data. Registration, extraction and visualization of the data require processing a massive amount of scan data. The octree can be applied to the shape management of large structures because the scan data is reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University. The scanned structure is a steel-frame bridge. The TLS used is a Leica ScanStation C10/C5. The scan data was condensed by 92%, and the octree model was constructed with a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds and creates a basis for shape management of large structures such as double-deck tunnels, buildings and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291). Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning
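A minimal sketch of the recursive voxel subdivision and sampling described above is given below; the parameters and the random cloud are illustrative, not those of the bridge scan.

```python
# Octree sampling sketch: recursively split a cubic voxel into eight sub-voxels and
# keep one representative point per occupied leaf, condensing the cloud while
# preserving its shape.
import numpy as np

def octree_sample(points, origin, size, min_size=0.002, max_points=10):
    """Return one representative (centroid) per occupied leaf voxel."""
    if len(points) == 0:
        return []
    if len(points) <= max_points or size <= min_size:
        return [points.mean(axis=0)]                 # leaf: keep the centroid
    half = size / 2.0
    leaves = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                sub_origin = origin + half * np.array([dx, dy, dz])
                inside = np.all((points >= sub_origin) &
                                (points < sub_origin + half), axis=1)
                leaves += octree_sample(points[inside], sub_origin, half,
                                        min_size, max_points)
    return leaves

rng = np.random.default_rng(0)
cloud = rng.random((20000, 3))                       # stand-in for a TLS point cloud
sampled = np.array(octree_sample(cloud, origin=np.zeros(3), size=1.0))
print(len(cloud), "->", len(sampled), "points")      # substantial condensation
```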
Procedia PDF Downloads 2978196 On Bianchi Type Cosmological Models in Lyra’s Geometry
Authors: R. K. Dubey
Abstract:
Bianchi type cosmological models have been studied on the basis of Lyra's geometry. An exact solution has been obtained by considering a time-dependent displacement field, a constant deceleration parameter, and a varying cosmological term of the universe. The physical behavior of the different models has been examined for different cases.Keywords: Bianchi type-I cosmological model, variable gravitational coupling, cosmological constant term, Lyra's model
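For orientation only (not taken from the paper, whose conventions may differ), Bianchi type-I studies in Lyra's geometry commonly start from a diagonal metric with a time-dependent displacement vector along the time axis, and an exact solution is then obtained by imposing a constant deceleration parameter on the mean scale factor:

```latex
% A common setup in Bianchi type-I / Lyra-geometry models (assumed, not the
% paper's own notation):
ds^{2} = -dt^{2} + A^{2}(t)\,dx^{2} + B^{2}(t)\,dy^{2} + C^{2}(t)\,dz^{2},
\qquad \phi_{\mu} = \bigl(\beta(t),\,0,\,0,\,0\bigr),
% with the mean scale factor a = (ABC)^{1/3} constrained by a constant
% deceleration parameter
q = -\,\frac{a\,\ddot{a}}{\dot{a}^{2}} = \text{const.}
```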
Procedia PDF Downloads 3548195 Muhammad`s Vision of Interaction with Supernatural Beings According to the Hadith in Comparison to Parallels of Other Cultures
Authors: Vladimir A. Rozov
Abstract:
Comparative studies of religion and ritual can contribute to a better understanding of the universals of human culture. Belief in supernatural beings seems to be a common feature of religion. A significant part of the Islamic concepts that concern supernatural beings is based on the tradition recorded in the Hadiths. They reflect, among other things, Muhammad's ideas about the proper way to interact with supernatural beings. These ideas to a large extent follow from the pre-Islamic religious experience of the Arabs and were reflected in a number of ritual actions. Some of those beliefs concern a particular function of clothing. For example, it is known that Muhammad was wrapped in clothes during the revelation of the Quran. The same was done by pre-Islamic soothsayers (kāhin) and by rival opponents of Muhammad during their trances. Besides these specific uses of clothing, which show an external similarity between Muhammad and the soothsayers and other people who claimed a connection with supernatural forces, the pre-Islamic soothsayers had another characteristic feature, namely physical flaws. In this regard, it is worth noting Muhammad's so-called "Seal of Prophecy" (h̠ātam an-nubūwwa), a protrusion or outgrowth on his back. Another interesting feature of Muhammad's behavior was his attitude to eating onion and garlic. In particular, the Prophet did not eat them and forbade people who had tasted these vegetables to enter mosques until the smell ceased to be felt. This ban on eating onion and garlic stems from the belief that the smell of these products prevents communication with otherworldly forces. The materials of the Hadith also suggest that Muhammad shared the belief in the apotropaic properties of water. Both of these ideas have parallels in other cultures of the world. Muhammad's actions intended to provide interaction with supernatural beings are not accidental. They have parallels in the culture of pre-Islamic Arabia as well as in many past and present world cultures. The latter fact can be explained by the similarity of universal human beliefs about supernatural beings and how they should be interacted with. Later, a number of similar ideas shared by the Prophet Muhammad were legitimized by the Islamic tradition and formed the basis of popular Islamic rituals. Thus, these parallels emphasize the commonality of human notions of supernatural beings and also demonstrate the significance of the pre-Islamic cultural context in analyzing the genesis of Islamic religious beliefs.Keywords: hadith, Prophet Muhammad, ritual, supernatural beings
Procedia PDF Downloads 3898194 Economic Analysis of Cowpea (Unguiculata spp) Production in Northern Nigeria: A Case Study of Kano Katsina and Jigawa States
Authors: Yakubu Suleiman, S. A. Musa
Abstract:
Nigeria is the largest cowpea producer in the world, accounting for about 45% of production, followed by Brazil with about 17%. Cowpea is grown in Kano, Bauchi, Katsina, and Borno in the north, in Oyo in the west, and to a lesser extent in Enugu in the east. This study was conducted to determine the input-output relationship of cowpea production in Kano, Katsina, and Jigawa states of Nigeria. The data were collected with the aid of 1,000 structured questionnaires randomly distributed to cowpea farmers in the three states of the study area mentioned above. The data collected were analyzed using regression analysis (a Cobb-Douglas production function model). The result of the regression analysis revealed the coefficient of multiple determination, R2, to be 72.5% and the F ratio to be 106.20, which was found to be significant (P < 0.01). The regression constant is 0.5382 and is significant (P < 0.01). The regression coefficients with respect to labor and seed were 0.65554 and 0.4336, respectively, and they are highly significant (P < 0.01). The regression coefficient with respect to fertilizer is 0.26341, which is significant (P < 0.05). This implies that a unit increase in any one of the variable inputs, holding all other inputs constant, will significantly increase total cowpea output in proportion to the corresponding coefficient. This indicates that farmers in the study area are operating in stage II of the production function. The results revealed that cowpea farmers in Kano, Jigawa, and Katsina States realized profits of N15,997, N34,016, and N19,788 per hectare, respectively. It is hereby recommended that more attention be given to cowpea production by government and research institutions.Keywords: coefficient, constant, inputs, regression
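A Cobb-Douglas production function of this kind is normally estimated by ordinary least squares on log-transformed data, so that the fitted exponents are the output elasticities quoted above. The sketch below (not the authors' analysis; variable names and data are hypothetical) shows the form of that regression.

```python
# Minimal sketch of a Cobb-Douglas fit: ln(output) = b0 + b1 ln(labour)
# + b2 ln(seed) + b3 ln(fertilizer). Hypothetical inputs, not the survey data.
import numpy as np

def fit_cobb_douglas(output, labour, seed, fertilizer):
    """OLS on log-transformed inputs; returns the coefficient vector and R^2."""
    y = np.log(output)
    X = np.column_stack([np.ones_like(y), np.log(labour),
                         np.log(seed), np.log(fertilizer)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()
    return beta, r2

# Interpretation: an exponent of 0.26 on fertilizer means a 1% increase in
# fertilizer use is associated with roughly a 0.26% increase in output,
# other inputs held constant.
```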
Procedia PDF Downloads 4108193 Prospects in Development of Ecofriendly Biopesticides in Management of Postharvest Fungal Deterioration of Cassava (Manihot esculenta Crantz)
Authors: Anderson Chidi Amadioha, Promise Chidi Kenkwo, A. A. Markson
Abstract:
Cassava (Manihot esculenta Crantz) is an important food and cash crop that provides a cheap source of carbohydrate for food and feed and a raw material for industry, and hence a commodity for the future economic development of developing countries. Despite its importance, its production potential is undermined by disease agents that greatly reduce yield and render it unfit for human consumption and industrial use. Pathogenicity tests on fungal isolates from infected cassava revealed Aspergillus flavus, Rhizopus stolonifer, Aspergillus niger, and Trichoderma viride as rot-causing organisms. Water and ethanol extracts of Piper guineense, Ocimum gratissimum, Cassia alata, and Tagetes erecta at 50% concentration significantly inhibited the radial growth of the pathogens in vitro and their development and spread in vivo. Lower cassava rot incidence and severity were recorded when the extracts were applied before, rather than after, spray inoculation with a spore suspension (1 × 10⁵ spores/ml of distilled water) of the pathogenic organisms. The plant materials are readily available, and their extracts are biodegradable and cost effective. The fungitoxic potential of extracts of these plant materials could be exploited in potent biopesticides for the management of postharvest fungal deterioration of cassava, especially in developing countries where synthetic fungicides are not only scarce but also expensive for the resource-poor farmers who produce over 95% of the food consumed.Keywords: cassava, biopesticides, in vitro, in vivo, pathogens, plant extracts
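Inhibition of radial mycelial growth in vitro is conventionally expressed relative to an untreated control plate; the small sketch below (not taken from the paper, with hypothetical values) shows that standard calculation.

```python
# Minimal sketch of the usual percent-inhibition formula for radial growth
# assays; diameters and values are hypothetical.
def percent_inhibition(control_diameter_mm, treated_diameter_mm):
    """Percentage inhibition of radial growth relative to the untreated control."""
    return 100.0 * (control_diameter_mm - treated_diameter_mm) / control_diameter_mm

# e.g. a 60 mm control colony and a 21 mm treated colony:
# percent_inhibition(60, 21)  ->  65.0  (65% inhibition)
```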
Procedia PDF Downloads 1808192 A Study on Legal Regimes Alternatives from the Aspect of Shenzhen Global Ocean Central City Construction
Authors: Jinsong Zhao, Lin Zhao
Abstract:
Shenzhen, one of the fastest-growing cities in the world, has been building a global ocean central city since 2017 and faces many challenges, especially how to devise new legal regimes to meet the future demands of the development of global shipping. First, the current legal regime of bills of lading as documents of title was established by English law in the 18th century but is limited to the period of marine transportation from the port of loading to the port of discharge (namely, port to port); the e-commerce era requires this function to be extended from port-to-port to door-to-door carriage. Secondly, the function of the port has also expanded from the traditional loading and unloading of goods to a much wider range of activities, such as acting as custodian of warehoused goods for their mortgage bank; its legal status is therefore changing, and it is necessary to amend the law of ports and harbours and to redefine the rights and responsibilities of the port in its new role as custodian. Thirdly, with the development of new marine energy, more and more offshore floating wind power and floating photovoltaic installations face new legal issues such as legal status, nationality and ownership registration, mortgage, maritime lien, and possessory lien. Fourthly, jurisdiction over the above issues, as well as the conflict of laws and the applicable law, are also questions pending answers. This paper will discuss these issues of private international law, especially the innovation of new legal regimes, with the aim of solving the above problems.Keywords: maritime law, bills of lading, e-commerce, port law, marine clean energy
Procedia PDF Downloads 408191 Deciphering the Gut Microbiome's Role in Early-Life Immune Development
Authors: Xia Huo
Abstract:
Children are more vulnerable to environmental toxicants than adults, and their developing immune system is among the most sensitive targets of the toxicity of environmental toxicants. Studies have found that exposure to environmental toxicants is associated with impaired immune function in children, but only a few studies have focused on the relationship between environmental toxicant exposure and vaccine antibody potency and immunoglobulin (Ig) levels in children. The studies reviewed here investigated the associations of exposure to polychlorinated biphenyls (PCBs), perfluorinated compounds (PFCs), heavy metals (Pb, Cd, As, Hg) and PM2.5 with serum-specific antibody concentrations and Ig levels against different vaccines, such as anti-Hib, tetanus, and diphtheria toxoid, and analyzed the possible mechanisms underlying exposure-related alterations of antibody titers and Ig levels against different vaccines. The results suggest that exposure to these toxicants is generally associated with decreased potency of antibodies produced from childhood immunizations and an overall deficiency in the protection the vaccines provide. Toxicant exposure is associated with vaccination failure, decreased antibody titers, and an increased risk of immune-related diseases in children through alterations in specific immunoglobulin levels. Age, sex, nutritional status, and co-exposure may influence the effects of toxicants on immune function in children. Epidemiological evidence suggests that exposure-induced changes in the response of humoral immune-related tissues, cells, and molecules to vaccines may play a predominant role in the inverse associations between antibody responsiveness to vaccines and environmental toxicants. These results help inform better immunization policies for children under environmental toxicant burden.Keywords: environmental toxicants, immunotoxicity, vaccination, antibodies, children's health
Procedia PDF Downloads 59