Search results for: robust optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4368

918 Correlation Between Ore Mineralogy and the Dissolution Behavior of K-Feldspar

Authors: Adrian Keith Caamino, Sina Shakibania, Lena Sunqvist-Öqvist, Jan Rosenkranz, Yousef Ghorbani

Abstract:

Feldspar minerals are one of the main components of the earth’s crust. They are tectosilicates, meaning that they mainly contain aluminum and silicon. Besides aluminum and silicon, they contain either potassium, sodium, or calcium. Accordingly, feldspar minerals are categorized into three main groups: K-feldspar, Na-feldspar, and Ca-feldspar. In recent years, the trend to use K-feldspar has grown tremendously, considering its potential to produce potash and alumina. However, feldspar minerals in general are difficult to decompose for the dissolution of their metallic components. Several methods, including intensive milling, leaching under elevated pressure and temperature, thermal pretreatment, and the use of corrosive leaching reagents, have been proposed to improve their low dissolution efficiency. In this study, as part of the POTASSIAL EU project, mechanical activation by intensive milling followed by leaching with hydrochloric acid (HCl) was applied to overcome the low dissolution efficiency of the K-feldspar components. Grinding operational parameters, namely time, rotational speed, and ball-to-sample weight ratio, were studied using the Taguchi optimization method. The mineralogy of the ground samples was then analyzed using a scanning electron microscope (SEM) equipped with automated quantitative mineralogy. After grinding, the prepared samples were subjected to HCl leaching. Finally, the dissolution efficiency of the main elements and impurities of the different samples was correlated with the mineralogical characterization results. K-feldspar component dissolution is correlated with ore mineralogy, which provides insight into how best to optimize leaching conditions for selective dissolution. Further, it will affect the subsequent purification steps and the final value-recovery procedures.
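As a rough illustration of the Taguchi analysis step described above, the sketch below computes larger-the-better signal-to-noise (S/N) ratios and factor main effects for a hypothetical two-level design; the factor levels and dissolution efficiencies are made up for illustration and are not from the study.

```python
import math

# Hypothetical L4(2^3) Taguchi design for the three grinding factors:
# time, rotational speed, ball-to-sample weight ratio (levels coded 1/2).
# Responses are invented dissolution efficiencies (%) for illustration.
runs = [
    ((1, 1, 1), [42.0, 44.1]),
    ((1, 2, 2), [55.3, 54.8]),
    ((2, 1, 2), [61.7, 60.9]),
    ((2, 2, 1), [49.2, 50.5]),
]

def sn_larger_is_better(ys):
    """Taguchi larger-the-better signal-to-noise ratio, in dB."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

sn = {levels: sn_larger_is_better(ys) for levels, ys in runs}

# Main effect of each factor = mean S/N at level 2 minus mean S/N at level 1.
for f in range(3):
    level = lambda l: [v for lv, v in sn.items() if lv[f] == l]
    effect = sum(level(2)) / 2 - sum(level(1)) / 2
    print(f"factor {f}: effect = {effect:+.2f} dB")
```

The factor with the largest absolute effect on the S/N ratio would be ranked most influential, which is how the grinding parameters could be prioritised.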

Keywords: K-feldspar, grinding, automated mineralogy, impurity, leaching

Procedia PDF Downloads 57
917 Public-Private Partnership Projects in Canada: A Case Study Approach

Authors: Samuel Carpintero

Abstract:

Public-private partnership (PPP) arrangements have emerged all around the world as a response to infrastructure deficits and the need to refurbish existing infrastructure. The motivations of governments for embarking on PPPs for the delivery of public infrastructure are manifold and include on-time and on-budget delivery as well as access to private project management expertise. The PPP formula has been used by some state governments in the United States and Canada, where the participation of private companies in financing and managing infrastructure projects has increased significantly in the last decade, particularly in the transport sector. On the one hand, this paper examines the various ways these two countries implement PPP arrangements, with a particular focus on risk transfer. Risk transfer is examined with reference to the following key PPP risk categories: construction risk, revenue risk, operating risk, and availability risk. The main difference between the two countries is that in Canada the demand risk usually remains within the public sector, whereas in the United States it is usually transferred to the private concessionaire. The aim is to explore which lessons from both models might be useful for other countries. On the other hand, the paper also analyzes why Spanish companies have been so successful in winning PPP contracts in North America during the past decade. Unlike in the Latin American PPP market, Spanish companies have no cultural advantage in the United States and Canada. Arguably, some relevant reasons for the success of the Spanish groups are their extensive experience in PPP projects (dating back to the late 1960s in some cases), their high technical level (which allows them to be aggressive in their bids), and their good position and track record in the financial markets.
The article’s empirical base consists of data provided by official sources in both countries as well as information collected through face-to-face interviews with public and private representatives of the stakeholders participating in some of the PPP schemes. Interviewees include private project managers of the concessionaires, representatives of banks involved as financiers in the projects, and experts in the PPP industry with close knowledge of the North American market. Unstructured in-depth interviews were adopted as the means of investigation for this study because of their power to elicit honest and robust responses and to ensure realism in capturing an overall impression of stakeholders’ perspectives.

Keywords: PPP, concession, infrastructure, construction

Procedia PDF Downloads 274
916 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, however, experimental work in hydraulics can be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always feasible and suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
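The Ensemble Kalman Filter stage described above can be sketched as a generic EnKF analysis step on a toy identical-twin problem; the linear forward map, ensemble size, and noise level below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, observe, y_obs, obs_noise_std):
    """One EnKF analysis step.

    ensemble : (n_members, n_params) prior samples of the bed parameters
    observe  : maps a parameter vector to predicted free-surface data
    y_obs    : observed free-surface values (perturbed per member inside)
    """
    X = ensemble
    Y = np.array([observe(x) for x in X])        # predicted observations
    A, B = X - X.mean(axis=0), Y - Y.mean(axis=0)
    n = len(X)
    Pxy = A.T @ B / (n - 1)                      # parameter-observation cross-covariance
    Pyy = B.T @ B / (n - 1) + obs_noise_std**2 * np.eye(Y.shape[1])
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    perturbed = y_obs + obs_noise_std * rng.standard_normal((n, len(y_obs)))
    return X + (perturbed - Y) @ K.T

# Toy identical-twin test: recover a single bed "height" from noisy data.
truth = np.array([2.0])
observe = lambda x: np.array([3.0 * x[0]])       # linear forward map
y_obs = observe(truth)
prior = rng.normal(0.0, 1.0, size=(200, 1))
post = enkf_update(prior, observe, y_obs, obs_noise_std=0.1)
print(post.mean())   # should move from 0 toward the true value 2.0
```

In the actual application the forward map would be a shallow-water solver evaluated per ensemble member, and the update would be iterated within the minimization loop.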

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 110
915 Particle Size Dependent Enhancement of Compressive Strength and Carbonation Efficiency in Steel Slag Cementitious Composites

Authors: Jason Ting Jing Cheng, Lee Foo Wei, Yew Ming Kun, Chin Ren Jie, Yip Chun Chieh

Abstract:

The utilization of industrial by-products such as steel slag in cementitious materials not only mitigates environmental impact but also enhances material properties. This study investigates the dual influence of steel slag particle size on the compressive strength and carbonation efficiency of cementitious composites. Through a systematic experimental approach, steel slag particles were incorporated into cement at varying sizes, and the resulting composites were subjected to mechanical and carbonation tests. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) analyses were also conducted. The findings reveal a positive correlation between increased particle size and compressive strength, attributed to the improved interfacial transition zone and packing density. Conversely, smaller particle sizes exhibited enhanced carbonation efficiency, likely due to the increased surface area facilitating the carbonation reaction. The presence of higher silica and calcium content in finer particles was confirmed by EDX, which contributed to the accelerated carbonation process. This study underscores the importance of particle size optimization in designing sustainable cementitious materials with balanced mechanical performance and carbon sequestration potential. The insights gained from the advanced analytical techniques offer a comprehensive understanding of the mechanisms at play, paving the way for the strategic use of steel slag in eco-friendly construction practices.

Keywords: steel slag, carbonation efficiency, particle size enhancement, compressive strength

Procedia PDF Downloads 26
914 Analysis of a Multiejector Cooling System in a Truck at Different Loads

Authors: Leonardo E. Pacheco, Carlos A. Díaz

Abstract:

An alternative way of addressing the difficulty of recovering waste heat is through an ejector refrigeration cycle for vehicle applications. A group of thermo-compressors supplies the function of the mechanical compressor in a conventional vapor-compression refrigeration system. The thermo-compressor group recovers thermal energy from waste streams (exhaust gases produced by internal combustion engines, gases flared at wellheads, among others) to eliminate the power consumption of the mechanical compressor. These alternative cooling systems (air conditioners) offer advantages both in increased energy efficiency and in an improved COP of the system under study, owing to their mechanical simplicity (fewer moving parts). An ejector refrigeration cycle represents a significant step forward in optimizing the efficient use of energy in air conditioning and an alternative for reducing environmental impacts. On one side, recycling the energy lowers the temperature of the gases released into the atmosphere, which helps limit the rise in the planet's average temperature. In parallel, it mitigates the environmental impact caused by the production and handling of conventional cooling fluids commonly available on the market, which contribute to the destruction of the ozone layer. This work studies the operation of a multiejector cooling system for a truck with a 420 HP engine at different rotational speeds. The operating condition limits and the COP of multiejector cooling systems applied to a truck are analyzed over an engine speed range of 800–1800 rpm.
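As a rough sketch of the figure of merit discussed above, the COP of an idealized ejector cycle can be estimated from the entrainment ratio and the enthalpy changes in the evaporator and generator; all numbers below are illustrative assumptions, not values from the paper.

```python
def ejector_cop(m_primary, m_secondary, dh_evaporator, dh_generator):
    """Idealized COP of an ejector refrigeration cycle (pump work neglected).

    COP = Q_evap / Q_gen = (m_s * dh_e) / (m_p * dh_g)
    The entrainment ratio m_s / m_p is the key ejector performance figure.
    """
    q_evap = m_secondary * dh_evaporator   # cooling delivered [kW]
    q_gen = m_primary * dh_generator       # waste heat recovered [kW]
    return q_evap / q_gen

# Illustrative numbers only: entrainment ratio 0.4, evaporator enthalpy
# change 170 kJ/kg, generator enthalpy change 210 kJ/kg.
cop = ejector_cop(m_primary=1.0, m_secondary=0.4,
                  dh_evaporator=170.0, dh_generator=210.0)
print(f"COP ≈ {cop:.3f}")
```

Since the generator heat comes from exhaust gas that would otherwise be wasted, even a modest COP of this kind represents cooling obtained at essentially no fuel cost.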

Keywords: ejector system, exhaust gas, multiejector cooling system, recovery energy

Procedia PDF Downloads 232
913 Battery Grading Algorithm in 2nd-Life Repurposing LI-Ion Battery System

Authors: Ya L. V., Benjamin Ong Wei Lin, Wanli Niu, Benjamin Seah Chin Tat

Abstract:

This article introduces a methodology that improves the reliability and cyclability of 2nd-life Li-ion battery systems repurposed as energy storage systems (ESS). Most of the 2nd-life retired battery systems on the market have a module/pack-level state-of-health (SOH) indicator, which is used to guide the appropriate depth-of-discharge (DOD) in the ESS application. Due to the lack of cell-level SOH indication, the different degradation behaviors among cells cannot be identified upon reaching retired status; in the end, considering end-of-life (EOL) loss and pack-level DOD, the repurposed ESS has to be oversized by more than 1.5 times to meet the application requirements of reliability and cyclability. The proposed battery grading algorithm, using a non-invasive methodology, detects outlier cells based on historical voltage data and estimates cell-level historical maximum temperature using a semi-analytic methodology. In this way, each battery cell in the 2nd-life battery system can be graded in terms of SOH on the basis of its historical voltage fluctuation and estimated historical maximum temperature variation. These grades map to corresponding DOD grades in the repurposed ESS application to enhance system reliability and cyclability. In all, the introduced battery grading algorithm is non-invasive, compatible with all kinds of retired Li-ion battery systems that lack cell-level SOH indication, and can potentially be embedded into battery management software for preventive maintenance and real-time cyclability optimization.
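A minimal sketch of the voltage-based outlier detection idea, assuming a simple z-score rule on each cell's voltage spread; the paper's actual grading also folds in the semi-analytic maximum-temperature estimate, which is not reproduced here.

```python
import statistics

def grade_cells(voltage_history, z_threshold=2.5):
    """Flag outlier cells from historical voltage data (illustrative only).

    voltage_history : dict cell_id -> list of voltage samples [V]
    Returns dict cell_id -> 'outlier' or 'normal', based on how far each
    cell's voltage spread deviates from the pack-wide average spread.
    """
    spreads = {cid: statistics.pstdev(v) for cid, v in voltage_history.items()}
    mean = statistics.mean(spreads.values())
    sd = statistics.pstdev(list(spreads.values())) or 1e-12
    return {cid: ('outlier' if abs(s - mean) / sd > z_threshold else 'normal')
            for cid, s in spreads.items()}

# Nine well-behaved cells and one with an abnormally large voltage swing.
history = {f"cell{i}": [3.70, 3.71, 3.70, 3.69] for i in range(9)}
history["cell9"] = [3.90, 3.50, 3.80, 3.40]
print(grade_cells(history)["cell9"])
```

In a real battery management system the grade would then select a conservative DOD band for the flagged cells rather than simply excluding them.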

Keywords: battery grading algorithm, 2nd-life repurposing battery system, semi-analytic methodology, reliability and cyclability

Procedia PDF Downloads 177
912 Effect of Injection Moulding Process Parameters on Tensile Strength of Polypropylene Using Taguchi Method

Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma

Abstract:

The plastic industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are only rarely available in the earth, producing plastic products and components, which find application in many industrial as well as household consumer products, is beneficial. About 50% of plastic products are manufactured by the injection moulding process. To produce better-quality products, the quality characteristics and performance of the product must be controlled. The process parameters play a significant role in the production of plastics, hence the control of process parameters is essential. This paper describes the effect of parameter selection on the injection moulding process, with the aim of defining suitable parameters for producing a plastic product. Selecting process parameters by trial and error is neither desirable nor acceptable, as it often tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. The plastic material studied here is polypropylene. Tensile strength tests of specimens produced on the injection moulding machine were performed on a universal testing machine. Using the Taguchi technique with the help of MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure, and packing time were obtained. We found that the process parameter packing pressure contributes the most to the production of plastic products with good tensile strength.
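The ranking of factor influence described above can be illustrated with a simple Taguchi-style range analysis; the level means below are invented for illustration and merely mimic the paper's finding that packing pressure dominates.

```python
# Hypothetical mean tensile strengths (MPa) at the two levels of each factor,
# as if read from a Taguchi screening run. Values are illustrative only,
# not from the paper.
level_means = {
    "injection_pressure": (31.2, 32.0),
    "melt_temperature":   (31.0, 32.2),
    "packing_pressure":   (29.5, 33.7),
    "packing_time":       (31.4, 31.8),
}

# Range analysis: a factor's "delta" (max level mean - min level mean)
# ranks its influence; normalising deltas gives a rough share of the
# total observed range.
deltas = {f: max(m) - min(m) for f, m in level_means.items()}
total = sum(deltas.values())
for f, d in sorted(deltas.items(), key=lambda kv: -kv[1]):
    print(f"{f:18s} delta = {d:.1f} MPa  ({100 * d / total:.0f}% of total range)")
```

With these numbers, packing pressure tops the ranking, which is the kind of evidence behind the paper's conclusion about its dominant contribution.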

Keywords: injection moulding, tensile strength, poly-propylene, Taguchi

Procedia PDF Downloads 254
911 Computational Approach to Identify Novel Chemotherapeutic Agents against Multiple Sclerosis

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Multiple sclerosis (MS) is a chronic demyelinating autoimmune disorder of the central nervous system (CNS). In the present scenario, current therapies either do not halt the progression of the disease or have side effects that limit the long-term use of current Disease Modifying Therapies (DMTs). Therefore, given this treatment-failure picture, we focus on screening novel analogues of the available DMTs that specifically bind and inhibit the sphingosine-1-phosphate receptor 1 (S1PR1), thereby hindering lymphocyte propagation toward the CNS. The novel drug-like analog molecules will decrease the frequency of relapses (recurrence of the symptoms associated with MS) with higher efficacy and lower toxicity to the human system. In this study, an integrated approach involving a ligand-based virtual screening protocol (Ultrafast Shape Recognition with CREDO Atom Types, USRCAT) was employed to identify non-toxic, drug-like analogs of the approved DMTs. The potential of the drug-like analog molecules to cross the Blood-Brain Barrier (BBB) was estimated. Besides, molecular docking and simulation using AutoDock Vina 1.1.2 and GOLD 3.01 were performed using the X-ray crystal structure of the Mtb LprG protein to calculate the affinity and specificity of the analogs for the given LprG protein. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program to evaluate the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein. A ligand with a higher hypothetical affinity has a more negative value. Further, non-specific ligands were screened out using the structural filter proposed by Baell and Holloway. Based on the USRCAT, Lipinski's values, toxicity, and BBB analyses, the drug-like analogs of fingolimod and BG-12, namely RTL and CHEMBL1771640, respectively, are non-toxic and permeable to the BBB.
The successful docking and DSX analyses showed that RTL and CHEMBL1771640 could bind to the binding pocket of the human S1PR1 receptor protein with greater affinity than their parent compound (fingolimod). In this study, we also found that all the drug-like analogs of the standard MS drugs passed the Baell and Holloway filter.
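One of the screens mentioned above, Lipinski's rule of five, is simple enough to sketch directly; the fingolimod property values in the usage example are approximate figures from public chemistry databases, not from the paper.

```python
def passes_lipinski(mol_weight, logp, h_bond_donors, h_bond_acceptors):
    """Lipinski's rule of five: a drug-like molecule should violate at most
    one of these four criteria."""
    violations = sum([
        mol_weight > 500,        # molecular weight over 500 Da
        logp > 5,                # octanol-water partition coefficient over 5
        h_bond_donors > 5,       # more than 5 hydrogen-bond donors
        h_bond_acceptors > 10,   # more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Fingolimod's approximate properties: MW ≈ 307.5, logP ≈ 4.2,
# 3 H-bond donors, 3 acceptors (values quoted from public databases).
print(passes_lipinski(307.5, 4.2, 3, 3))
```

Analogues surviving this filter would then proceed to the BBB-permeability and toxicity screens described in the abstract.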

Keywords: antagonist, binding affinity, chemotherapeutics, drug-like, multiple sclerosis, S1PR1 receptor protein

Procedia PDF Downloads 236
910 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so the right treatment can be given to the patient immediately. The study uses the whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data to build the ML models. Samples from different countries were included so that the model generalizes over the large group of TB isolates from different regions of the world. This exposes the model to different behaviors of the TB bacteria during training and makes it robust. The model training considered three pieces of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and the resistance-associated gene information for the particular drug. Two major datasets were constructed from these three pieces of information: F1 and F2 were treated as two independent datasets, and the third piece of information was used as the class label for both. Five machine learning algorithms were considered to train the model: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost.
The models were trained on the datasets F1, F2, and F1F2, i.e., the F1 and F2 datasets merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the gradient boosting algorithm, and the outputs were combined into a single dataset, called the F1F2 ensemble dataset, on which models were trained with the five algorithms. As the experiments show, the ensemble model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, to predict the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
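The two-stage ensemble described above can be sketched as a stacking pipeline; this uses synthetic data and scikit-learn defaults as stand-ins for the real F1/F2 genomic features, so it illustrates the wiring rather than the reported results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-ins for the two feature views: F1 (all candidate-gene variants) and
# F2 (predetermined resistance-associated variants). Synthetic data only.
X, y = make_classification(n_samples=400, n_features=40, n_informative=10,
                           random_state=0)
F1, F2 = X[:, :20], X[:, 20:]

F1_tr, F1_te, F2_tr, F2_te, y_tr, y_te = train_test_split(
    F1, F2, y, test_size=0.25, random_state=0)

# Stage 1: one gradient-boosting model per feature view.
gb1 = GradientBoostingClassifier(random_state=0).fit(F1_tr, y_tr)
gb2 = GradientBoostingClassifier(random_state=0).fit(F2_tr, y_tr)

# Stage 2: their predicted probabilities form the "F1F2 ensemble" dataset
# on which a final classifier is trained.
stack = lambda a, b: np.column_stack([gb1.predict_proba(a)[:, 1],
                                      gb2.predict_proba(b)[:, 1]])
meta = GradientBoostingClassifier(random_state=0).fit(stack(F1_tr, F2_tr), y_tr)

acc = accuracy_score(y_te, meta.predict(stack(F1_te, F2_te)))
print(f"ensemble accuracy: {acc:.2f}")
```

A production version would use out-of-fold stage-1 predictions to avoid leaking training labels into the stage-2 features.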

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 25
909 Surfactant-Assisted Aqueous Extraction of Residual Oil from Palm-Pressed Mesocarp Fibre

Authors: Rabitah Zakaria, Chan M. Luan, Nor Hakimah Ramly

Abstract:

The extraction of vegetable oil using an aqueous extraction process assisted by ionic extended surfactants has been investigated as an alternative to hexane extraction. However, ionic extended surfactants have not been commercialised, and their safety with respect to food processing is uncertain. Hence, food-grade non-ionic surfactants (Tween 20, Span 20, and Span 80) were proposed for the extraction of residual oil from palm-pressed mesocarp fibre. Palm-pressed mesocarp fibre contains a significant amount of residual oil (5–10 wt%), and its recovery is beneficial as this oil contains a much higher content of vitamin E, carotenoids, and sterols than crude palm oil. In this study, the formulation of food-grade surfactants combining high hydrophilic-lipophilic balance (HLB) surfactants with low HLB surfactants to produce a micro-emulsion with very low interfacial tension (IFT) was investigated. The suitable surfactant formulation was used in the oil extraction process, and the efficiency of the extraction was correlated with the IFT, droplet size, and viscosity. It was found that a ternary surfactant mixture with an HLB value of 15 (82% Tween 20, 12% Span 20, and 6% Span 80) was able to produce a micro-emulsion with very low IFT compared to other HLB combinations. The results suggest that IFT and droplet size strongly affect the oil recovery efficiency. Finally, optimization of the operating parameters shows that the highest extraction efficiency of 78% was achieved at a 1:31 solid-to-liquid ratio, a 2 wt% surfactant solution, a temperature of 50˚C, and 50 minutes of contact time.
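The blend HLB quoted above is consistent with the standard weighted-average mixing rule, sketched below using literature HLB values for the three surfactants (Tween 20 ≈ 16.7, Span 20 ≈ 8.6, Span 80 ≈ 4.3).

```python
def mixture_hlb(components):
    """Weighted-average HLB of a surfactant blend.

    components : list of (weight_fraction, hlb) pairs
    """
    total = sum(w for w, _ in components)
    return sum(w * h for w, h in components) / total

# The paper's ternary blend: 82% Tween 20, 12% Span 20, 6% Span 80,
# with HLB values taken from surfactant literature.
blend = [(0.82, 16.7), (0.12, 8.6), (0.06, 4.3)]
print(f"blend HLB ≈ {mixture_hlb(blend):.1f}")   # close to the reported 15
```

This is how composite HLB targets are typically hit in practice: fix the target, then solve for fractions of a high-HLB and low-HLB pair (or triple) that average to it.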

Keywords: food-grade surfactants, aqueous extraction of residual oil, palm-pressed mesocarp fibre, interfacial tension

Procedia PDF Downloads 371
908 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics, and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process by directly modeling the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, with the frictional and collisional forces between particles calculated explicitly, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability.
The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is key to managing the failure risk of bridge infrastructure.
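The particle-particle contact forces mentioned above are typically computed in DEM with a spring-dashpot law; the sketch below shows a linear normal-contact model with made-up stiffness and damping coefficients, not the parameters used in this study.

```python
def contact_force(overlap, rel_normal_velocity, k_n=1.0e5, c_n=50.0):
    """Normal contact force between two DEM particles using a linear
    spring-dashpot model: a repulsive spring acting on the overlap plus
    viscous damping on the relative approach velocity. Returns 0 when
    the particles are not in contact (non-positive overlap).

    overlap             : geometric overlap of the two particles [m]
    rel_normal_velocity : relative approach speed along the normal [m/s]
    k_n, c_n            : spring stiffness [N/m] and damping [N*s/m]
    """
    if overlap <= 0.0:
        return 0.0
    return k_n * overlap + c_n * rel_normal_velocity

# Two grains approaching at 0.1 m/s with 1e-4 m overlap:
print(contact_force(1e-4, 0.1))     # spring plus damping term, in newtons
print(contact_force(-1e-4, 0.1))    # no contact, no force
```

In the coupled solver, this force is summed over all contacts per particle and added to the fluid drag computed from the RANS field before integrating each particle's motion.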

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 107
907 Research on Spatial Distribution of Service Facilities Based on Innovation Function: A Case Study of Zhejiang University Zijin Co-Maker Town

Authors: Zhang Yuqi

Abstract:

Service facilities are boosters for the cultivation and development of innovative functions in innovation cluster areas. At the same time, reasonable service facilities planning can better link the internal functional blocks. This paper takes Zhejiang University Zijin Co-Maker Town as the research object and, based on a combination of network data mining with field research and verification, together with the needs of its internal innovative groups, studies the distribution characteristics and existing problems of service facilities and then proposes targeted planning suggestions. The main conclusions are as follows: (1) From the perspective of provision, the town is rich in general life-supporting services but lacks targeted and distinctive service facilities for innovative groups; (2) From the perspective of scale structure, small-scale street shops are the main business form, and there is a lack of a large-scale service center; (3) From the perspective of spatial structure, the service facilities layout of each functional block is too fragmented to fit the 'aggregation-distribution' characteristics of innovation and entrepreneurial activities; (4) The goal of optimizing service facilities planning should be to foster the function of innovation and entrepreneurship and to meet the actual needs of the innovation and entrepreneurial groups.

Keywords: the cultivation of innovative function, Zhejiang University Zijin Co-Maker Town, service facilities, network data mining, space optimization advice

Procedia PDF Downloads 86
906 Prioritizing Temporary Shelter Areas for Disaster Affected People Using Hybrid Decision Support Model

Authors: Ashish Trivedi, Amol Singh

Abstract:

In recent years, the magnitude and frequency of disasters have increased at an alarming rate. Every year, more than 400 natural disasters affect the global population. A large-scale disaster leads to the destruction of or damage to houses, thereby rendering a notable number of residents homeless. Since the humanitarian response and recovery process takes considerable time, temporary establishments are arranged in order to provide shelter to the affected population. These shelter areas are vital for effective humanitarian relief; therefore, they must be strategically planned. Choosing the locations of temporary shelter areas for accommodating homeless people is critical to the quality of humanitarian assistance provided after a large-scale emergency. There has been extensive research on the facility location problem, both in theory and in application. In order to deliver sufficient relief aid within a relatively short timeframe, humanitarian relief organisations pre-position warehouses at strategic locations. However, such approaches have received limited attention from the perspective of providing shelters to disaster-affected people. The present research work considers this aspect of humanitarian logistics and proposes a hybrid decision support model to determine the relative preference of potential shelter locations by assessing them against key subjective criteria. Initially, the factors considered when locating potential areas for establishing temporary shelters were identified by reviewing the extant literature and through consultation with a panel of disaster management experts. In order to determine the relative importance of individual criteria while accounting for the subjectivity of judgements, a hybrid approach of fuzzy sets and the Analytic Hierarchy Process (AHP) was adopted.
Further, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was applied to an illustrative data set to evaluate potential locations for establishing temporary shelter areas for homeless people in a disaster scenario. The contribution of this work is to propose a range of possible shelter locations for a humanitarian relief organization using a robust multi-criteria decision support framework.
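The TOPSIS step described above can be sketched as follows; the shelter-site scores, criteria, and weights are invented for illustration and are not the study's data (in the actual model, the weights would come from the fuzzy AHP stage).

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : rows = alternatives, columns = criteria scores
    weights : criterion weights (e.g. from fuzzy AHP), summing to 1
    benefit : True if higher is better for that criterion, else False
    Returns closeness coefficients in [0, 1]; higher = closer to ideal.
    """
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[w * row[j] / norms[j] for j, w in enumerate(weights)]
         for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*V))]
    nadir = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*V))]
    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    return [dist(r, nadir) / (dist(r, ideal) + dist(r, nadir)) for r in V]

# Three hypothetical shelter sites scored on capacity and road access
# (benefit criteria) and flood risk (cost criterion); weights illustrative.
scores = topsis([[800, 7, 2], [500, 9, 1], [650, 5, 4]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, False])
best = max(range(3), key=scores.__getitem__)
print(f"preferred site: {best}, closeness = {scores[best]:.3f}")
```

The closeness coefficients give the relative preference ranking of sites, which is exactly the output the proposed framework hands to the relief organization.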

Keywords: AHP, disaster preparedness, fuzzy set theory, humanitarian logistics, TOPSIS, temporary shelters

Procedia PDF Downloads 175
905 Implications of Human Cytomegalovirus as a Protective Factor in the Pathogenesis of Breast Cancer

Authors: Marissa Dallara, Amalia Ardeljan, Lexi Frankel, Nadia Obaed, Naureen Rashid, Omar Rashid

Abstract:

Human Cytomegalovirus (HCMV) is a ubiquitous virus that remains latent in approximately 60% of individuals in developed countries. Viral load is kept at a minimum by the robust immune response produced in most individuals, who remain asymptomatic. HCMV has recently been implicated in cancer research because it may impose oncomodulatory effects on the tumor cells it infects, which could affect the progression of cancer. HCMV has been implicated in the increased pathogenicity of certain cancers such as gliomas, but in contrast, it can also exhibit anti-tumor activity. HCMV seropositivity has been recorded in tumor cells, and this may have implications for decreased pathogenesis in certain forms of cancer, such as leukemia, as well as increased pathogenesis in others. This study aimed to investigate the correlation between cytomegalovirus and the incidence of breast cancer. Methods: The data used in this project were extracted from a Health Insurance Portability and Accountability Act (HIPAA) compliant national database to compare patients infected versus not infected with cytomegalovirus using ICD-10 and ICD-9 codes. Permission to utilize the database was given by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Data analysis was conducted using standard statistical methods. Results: The query covered dates ranging from January 2010 to December 2019 and yielded 14,309 patients in each of the infected and control groups. The two groups were matched by age range and CCI score. The incidence of breast cancer was 1.642% (235 patients) in the cytomegalovirus group compared to 4.752% (680 patients) in the control group. The difference was statistically significant (p < 2.2 × 10⁻¹⁶), with an odds ratio of 0.43 (95% CI 0.40–0.48).
The effects of HCMV treatment modalities, including valganciclovir, cidofovir, and foscarnet, on breast cancer in both groups were investigated, but the numbers were insufficient to yield any statistically significant correlations. Conclusion: This study demonstrates a statistically significant correlation between cytomegalovirus and a reduced incidence of breast cancer. If HCMV can exert anti-tumor effects on breast cancer and inhibit growth, it could potentially be used to formulate immunotherapies that target various types of breast cancer. Further evaluation is warranted to assess the implications of cytomegalovirus in reducing the incidence of breast cancer.

Keywords: human cytomegalovirus, breast cancer, immunotherapy, anti-tumor

Procedia PDF Downloads 185
904 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations

Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar

Abstract:

Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip, and researchers and designers rely upon them for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce cache misses, we apply software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types; these techniques reduced cache misses by 18.52%, 5.34% and 3.91%, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD instructions are used to parallelize and vectorize the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 x 30 network size by employing thread parallelization and vectorization, respectively.
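Booksim2.0 is written in C++, but the loop-interchange idea applied here can be sketched in Python: traversing a two-dimensional structure along its storage order keeps consecutive accesses close in memory, while the interchanged order defeats spatial locality. A hypothetical illustration (the effect is far stronger in compiled code, where cache misses dominate):

```python
import time

N = 1000
matrix = [[1.0] * N for _ in range(N)]

def sum_row_major(m):
    # Inner loop walks along one row: consecutive accesses stay
    # within the same list, preserving spatial locality.
    total = 0.0
    for i in range(N):
        for j in range(N):
            total += m[i][j]
    return total

def sum_column_major(m):
    # Inner loop jumps between rows: each access touches a different
    # list object, which is the access pattern loop interchange removes.
    total = 0.0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total

for fn in (sum_row_major, sum_column_major):
    t0 = time.perf_counter()
    fn(matrix)
    print(fn.__name__, f"{time.perf_counter() - t0:.3f}s")
```

Both traversals compute the same sum; only their memory access order differs, which is exactly what the loop-interchange optimization exploits.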

Keywords: cache behaviour, network-on-chip, performance profiling, vectorization

Procedia PDF Downloads 173
903 Optimization of the Drinking Water Treatment Process: Improvement of the Treated Water Quality by Using the Sludge Produced by the Water Treatment Plant

Authors: M. Derraz, M. Farhaoui

Abstract:

Problem statement: In water treatment, the coagulation and flocculation processes produce sludge in proportion to the level of water turbidity. Aluminum sulfate is the most common coagulant used in the water treatment plants of Morocco, as in many other countries. The sludge produced by the treatment plant is difficult to manage; however, it can be reused in the process to improve the quality of the treated water and reduce the aluminum sulfate dose. Approach: In this study, the effectiveness of sludge was evaluated at different turbidity levels (low, medium, and high) and coagulant dosages to find the optimal operational conditions. The influence of settling time was also studied. A set of jar test experiments was conducted to find the sludge and aluminum sulfate dosages that improve the produced water quality at each turbidity level. Results: The results demonstrated that using the sludge produced by the treatment plant can improve the quality of the produced water and reduce aluminum sulfate usage. The aluminum sulfate dosage can be reduced by 40 to 50% depending on the turbidity level (10, 20, and 40 NTU). Conclusions/Recommendations: The results show that sludge can be used to reduce the aluminum sulfate dosage and improve the quality of treated water. The highest turbidity removal efficiency was observed with 6 mg/l of aluminum sulfate and 35 mg/l of sludge at low turbidity, 20 mg/l of aluminum sulfate and 50 mg/l of sludge at medium turbidity, and 20 mg/l of aluminum sulfate and 60 mg/l of sludge at high turbidity. The turbidity removal efficiencies are 97.56%, 98.96%, and 99.47% for the low, medium and high turbidity levels, respectively.
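The turbidity removal efficiency cited in the results is the usual percentage reduction between raw and treated water. A minimal sketch, with the NTU levels taken from the abstract and the residual turbidities back-calculated from the reported efficiencies (the residuals themselves are not reported values):

```python
def removal_efficiency(initial_ntu, final_ntu):
    """Percent of turbidity removed in a jar test."""
    return 100.0 * (initial_ntu - final_ntu) / initial_ntu

# Initial turbidity (NTU) and reported removal efficiency (%) per level.
for initial, efficiency in [(10, 97.56), (20, 98.96), (40, 99.47)]:
    final = initial * (1 - efficiency / 100.0)   # implied residual NTU
    print(f"{initial} NTU -> {final:.3f} NTU residual "
          f"({removal_efficiency(initial, final):.2f}% removal)")
```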

Keywords: coagulation process, coagulant dose, sludge reuse, turbidity removal

Procedia PDF Downloads 210
902 Effect of Saponin Enriched Soapwort Powder on Structural and Sensorial Properties of Turkish Delight

Authors: Ihsan Burak Cam, Ayhan Topuz

Abstract:

Turkish delight has been produced by bleaching the plain delight mix (refined sugar, water and starch) with soapwort extract and powdered sugar. Soapwort extract, which contains a high amount of saponin, is an additive used in Turkish delight and tahini halvah production; acting as an emulsifier, its bioactive saponin content improves consistency, chewiness and color. In this study, soapwort powder was produced after determining the optimum process conditions for soapwort extract using the response-surface method. The extract was enriched in saponin by reverse osmosis (to 63% saponin on a dry basis), and a Büchi mini spray dryer B-290 was used to produce spray-dried soapwort powder (aw = 0.254) from the enriched concentrate. The optimized, saponin-enriched powder was then tested in Turkish delight production. Delight samples produced with soapwort powder and with commercial extract (control) were compared for chewiness, springiness, stickiness, adhesiveness, hardness, color and sensorial characteristics. According to the results, all textural properties except hardness of the delights produced with the powder were statistically different from the control samples. The chewiness, springiness, stickiness, adhesiveness and hardness values (delights produced with the powder / control delights) were 361.9/1406.7, 0.095/0.251, -120.3/-51.7, 781.9/1869.3 and 3427.3 g/3118.4 g, respectively. Quality analysis of the end products showed no statistically significant negative effect of the soapwort extract or the soapwort powder on the color and appearance of the Turkish delight.

Keywords: saponin, delight, soapwort powder, spray drying

Procedia PDF Downloads 230
901 Optimization of Personnel Selection Problems via Unconstrained Geometric Programming

Authors: Vildan Kistik, Tuncay Can

Abstract:

From a business perspective, cost and profit are two key factors. Most businesses intend to minimize cost in order to maximize or stabilize profit, so as to provide the greatest benefit to themselves. However, the physical system is very complicated because of technological constructions, the rapid intensification of competitive environments, and similar factors; in such a system it is not easy to maximize profits or minimize costs. Businesses must decide on the competence and suitability of the personnel to be recruited, taking many criteria into consideration. Factors such as level of education, experience, psychological and sociological position, and human relations in the field are just some of the important factors in selecting staff for a firm. Personnel selection is a very important and costly process for businesses in today's competitive market. Although many mathematical methods have been developed for personnel selection, their use is unfortunately rarely encountered in real life. In this study, unlike other methods, an exponential programming model was established based on the probability that the selected personnel fail after starting work. With the necessary transformations, the problem was converted into an unconstrained geometric programming problem, and the personnel selection problem was approached with the geometric programming technique. Personnel selection scenarios for a classroom were established with the help of the normal distribution, and optimum solutions were obtained. In the most appropriate solutions, the personnel selection process for the classroom was achieved at minimum cost.
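The transformation described here, converting a posynomial objective into an unconstrained convex problem via the substitution x = exp(y), can be sketched for a single variable. The cost function below is an illustrative example, not the paper's personnel-selection model:

```python
import math

def posynomial(x, terms):
    """Evaluate sum_k c_k * x**a_k for x > 0, terms = [(c_k, a_k), ...]."""
    return sum(c * x**a for c, a in terms)

def gp_minimize(terms, lo=-10.0, hi=10.0, iters=200):
    """Minimize a one-variable unconstrained geometric program.

    Substituting x = exp(y) makes the posynomial a convex function of y,
    so a simple ternary search finds the global minimum.
    """
    f = lambda y: posynomial(math.exp(y), terms)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    y = (lo + hi) / 2
    return math.exp(y), f(y)

# Illustrative cost 4x + 9/x, minimized analytically at x = 3/2, cost 12.
x_opt, cost = gp_minimize([(4, 1), (9, -1)])
print(f"x* = {x_opt:.4f}, cost = {cost:.4f}")
```

The same change of variables extends to several variables, where the log-transformed posynomial remains convex.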

Keywords: geometric programming, personnel selection, non-linear programming, operations research

Procedia PDF Downloads 251
900 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

Seismic response-based structural health monitoring systems have been deployed to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentrations, which are related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands significantly high computational cost. To prevent the failure of structural members while avoiding this cost, this paper presents an artificial neural network (ANN) based prediction model for the dynamic responses of a building over a specific time length. From the measured dynamic responses, the input and output nodes of the ANN are formed according to the specific time length and used for training. In the model, an evolutionary radial basis function neural network (ERBFNN) is implemented, in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the RBF variables. The effectiveness of the proposed model is verified through an analytical study in which responses from dynamic analysis of a multi-degree-of-freedom system are applied as training data for the ERBFNN.
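In the paper, the RBF variables are found by an evolutionary algorithm; as a much-simplified sketch of the underlying RBFN, the example below fixes one Gaussian centre per training sample and solves for the weights exactly by Gaussian elimination. All data are hypothetical:

```python
import math

def gauss_solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def rbf_fit(xs, ys, width):
    """Exact-interpolation RBFN: one Gaussian centre per training sample."""
    phi = [[math.exp(-((x - c) ** 2) / (2 * width**2)) for c in xs] for x in xs]
    return gauss_solve(phi, ys)

def rbf_predict(x, centres, weights, width):
    return sum(w * math.exp(-((x - c) ** 2) / (2 * width**2))
               for w, c in zip(weights, centres))

# Toy 'dynamic response' samples (hypothetical accelerations).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.8, 0.9, 0.1, -0.7]
w = rbf_fit(xs, ys, width=1.0)
print(rbf_predict(1.5, xs, w, width=1.0))
```

In the ERBFNN, the genetic algorithm's role is to choose the centres and widths rather than fixing them at the training points as done here.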

Keywords: structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm

Procedia PDF Downloads 281
899 Culturable Diversity of Halophilic Bacteria in Chott Tinsilt, Algeria

Authors: Nesrine Lenchi, Salima Kebbouche-Gana, Laddada Belaid, Mohamed Lamine Khelfaoui, Mohamed Lamine Gana

Abstract:

Saline lakes are extreme hypersaline environments that are considered five to ten times saltier than seawater (150 – 300 g L-1 salt concentration). Hypersaline regions differ from each other in terms of salt concentration, chemical composition and geographical location, which determine the nature of inhabitant microorganisms. In order to explore the diversity of moderate and extreme halophiles Bacteria in Chott Tinsilt (East of Algeria), an isolation program was performed. In the first time, water samples were collected from the saltern during pre-salt harvesting phase. Salinity, pH and temperature of the sampling site were determined in situ. Chemical analysis of water sample indicated that Na +and Cl- were the most abundant ions. Isolates were obtained by plating out the samples in complex and synthetic media. In this study, seven halophiles cultures of Bacteria were isolated. Isolates were studied for Gram’s reaction, cell morphology and pigmentation. Enzymatic assays (oxidase, catalase, nitrate reductase and urease), and optimization of growth conditions were done. The results indicated that the salinity optima varied from 50 to 250 g L-1, whereas the optimum of temperature range from 25°C to 35°C. Molecular identification of the isolates was performed by sequencing the 16S rRNA gene. The results showed that these cultured isolates included members belonging to the Halomonas, Staphylococcus, Salinivibrio, Idiomarina, Halobacillus Thalassobacillus and Planococcus genera and what may represent a new bacterial genus.

Keywords: bacteria, Chott, halophilic, 16S rRNA

Procedia PDF Downloads 254
898 The Analysis of Emergency Shutdown Valves Torque Data in Terms of Its Use as a Health Indicator for System Prognostics

Authors: Ewa M. Laskowska, Jorn Vatn

Abstract:

Industry 4.0 focuses on the digital optimization of industrial processes. The idea is to use extracted data to build a decision support model that enables their use for real-time decision making. In terms of predictive maintenance, the desired decision support tool would be a model enabling prognostics of a system's health based on the current condition of the considered equipment. Within the area of system prognostics and health management, a commonly used health indicator is the Remaining Useful Lifetime (RUL) of a system. Because the RUL is a random variable, it has to be estimated based on available health indicators. Health indicators can be of different types and come from different sources: they can be process variables, equipment performance variables, data related to the number of experienced failures, etc. The aim of this study is the analysis of performance variables of emergency shutdown valves (ESVs) used in the oil and gas industry. An ESV is inspected periodically, and at each inspection the torque and operation time of the valve are registered. The data will be analyzed by means of machine learning or statistical analysis. The purpose is to investigate whether the available data could be used as a health indicator for prognostic purposes. The second objective is to examine the most efficient way to incorporate the data into a predictive model. The idea is to check whether the data can be applied in the form of explanatory variables in a Markov process, or whether other stochastic processes would be more convenient for building an RUL model based on the information coming from the registered data.
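One simple way to turn registered torque data into an RUL estimate, far simpler than the Markov-process models the authors contemplate, is to extrapolate a fitted linear degradation trend to a failure threshold. A sketch with hypothetical inspection data (the threshold and torque values are illustrative, not from the study):

```python
def estimate_rul(times, torques, threshold):
    """Extrapolate a least-squares linear torque trend to `threshold`.

    Returns the remaining time until the fitted trend crosses the
    threshold, measured from the last inspection.
    """
    n = len(times)
    mt = sum(times) / n
    mq = sum(torques) / n
    slope = (sum((t - mt) * (q - mq) for t, q in zip(times, torques))
             / sum((t - mt) ** 2 for t in times))
    intercept = mq - slope * mt
    t_fail = (threshold - intercept) / slope   # trend crosses threshold here
    return t_fail - times[-1]

# Hypothetical inspection history: operating torque (Nm) drifting upward.
times = [0, 6, 12, 18, 24]            # months since first inspection
torques = [100, 104, 109, 113, 118]   # Nm at each inspection
print(estimate_rul(times, torques, threshold=150))
```

A stochastic RUL model would replace this point estimate with a distribution, which is what makes the Markov-process formulation attractive.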

Keywords: emergency shutdown valves, health indicator, prognostics, remaining useful lifetime, RUL

Procedia PDF Downloads 69
897 Internal Financing Constraints and Corporate Investment: Evidence from Indian Manufacturing Firms

Authors: Gaurav Gupta, Jitendra Mahakud

Abstract:

This study focuses on the significance of internal financing constraints in the determination of corporate fixed investment in the case of Indian manufacturing companies. Financially constrained companies, which have fewer internal funds or retained earnings, face higher transaction and borrowing costs due to imperfections in the capital market. The period of study is 1999-2000 to 2013-2014, and we consider 618 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods such as the fixed effect and random effect methods are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results confirm the suitability of the fixed effect model for the estimation. The cash flow and liquidity of the company are used as proxies for internal financing constraints. In accordance with various theories of corporate investment, we consider other firm-specific variables, such as firm age, firm size, profitability, sales and leverage, as control variables in the model. From the econometric analysis, we find that internal cash flow and liquidity have a significant and positive impact on corporate investment. Variables such as cost of capital, sales growth and growth opportunities are also found to significantly determine corporate investment in India, which is consistent with the neoclassical, accelerator and Tobin's q theories of corporate investment. To check the robustness of the results, we divided the sample on the basis of cash flow and liquidity: firms with cash flow greater than zero were put in one group and firms with cash flow less than zero in another, and the firms were likewise divided on the basis of liquidity. We find that the results are robust for both types of companies, that is, those with positive and those with negative cash flow and liquidity.
The results for the other variables are also in line with those for the whole sample. These findings confirm that internal financing constraints play a significant role in the determination of corporate investment in India. The findings have implications for corporate managers, who should focus on projects with higher expected cash inflows to avoid financing constraints, and who should also maintain adequate liquidity to minimize external financing costs.
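The fixed-effect estimator used here relies on the "within" transformation: demeaning each variable by firm removes the firm-specific intercept before the slope is estimated. A minimal single-regressor sketch with hypothetical panel data (a real application would use a package such as statsmodels or linearmodels):

```python
from collections import defaultdict

def within_transform(panel):
    """Demean each variable by firm: the 'within' transformation behind
    the fixed-effect estimator."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for firm, inv, cash in panel:
        s = sums[firm]
        s[0] += inv; s[1] += cash; s[2] += 1
    return [(inv - sums[f][0] / sums[f][2], cash - sums[f][1] / sums[f][2])
            for f, inv, cash in panel]

def slope(pairs):
    """OLS slope through the origin on the demeaned (y, x) pairs."""
    num = sum(y * x for y, x in pairs)
    den = sum(x * x for _, x in pairs)
    return num / den

# Hypothetical (firm, investment, cash-flow) observations.
panel = [("A", 10, 5), ("A", 12, 6), ("A", 14, 7),
         ("B", 30, 2), ("B", 31, 3), ("B", 33, 4)]
print(slope(within_transform(panel)))
```

Note that firm B's much higher investment level does not distort the estimate, because the demeaning absorbs each firm's fixed effect.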

Keywords: cash flow, corporate investment, financing constraints, panel data method

Procedia PDF Downloads 220
896 Establishing a Sustainable Construction Industry: Review of Barriers That Inhibit Adoption of Lean Construction in Lesotho

Authors: Tsepiso Mofolo, Luna Bergh

Abstract:

The Lesotho construction industry fails to embrace environmental practices, which has led to excessive consumption of resources, land degradation, air and water pollution, loss of habitats, and high energy usage. The industry is highly inefficient, and this undermines its capability to make the optimum contribution to social, economic and environmental development. Sustainable construction is, therefore, imperative to ensure the cultivation of benefits from all these intrinsic themes of sustainable development. The development of a sustainable construction industry requires a holistic approach that takes into consideration the interaction between Lean Construction principles, socio-economic and environmental policies, technological advancement, and the principles of construction or project management. Sustainable construction is a cutting-edge phenomenon, forming a component of a subjectively defined concept called sustainable development. Sustainable development can be defined in terms of attitudes and judgments that assist in ensuring long-term environmental, social and economic growth in society. A key enabler of sustainable construction is Lean Construction. Lean Construction emanates from the principles of the Toyota Production System (TPS), namely the application and adaptation of the fundamental concepts and principles that focus on waste reduction, increased value to the customer, and continuous improvement. The focus is on the reduction of socio-economic waste and the prevention of environmental degradation by reducing the carbon dioxide emission footprint. Lean principles require a fundamental change in the behaviour and attitudes of the parties involved in order to overcome barriers to cooperation.
The prevalent barriers to the adoption of Lean Construction in Lesotho are mainly structural, such as unavailability of financing, corruption, operational inefficiency or wastage, lack of skills and training, inefficient construction legislation, and political interference. The consequential effects of these problems trickle down to the quality, cost and time of projects, resulting in an escalation of operational costs due to the cost of rework or material wastage. Factor and correlation analysis of these barriers indicates that they are highly correlated, which poses a detrimental potential to the country's welfare, environment and construction safety. It is, therefore, critical for Lesotho's construction industry to develop robust governance through bureaucratic reform and stringent law enforcement.

Keywords: construction industry, sustainable development, sustainable construction industry, lean construction, barriers to sustainable construction

Procedia PDF Downloads 254
895 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application

Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz

Abstract:

Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum likelihood based approach that includes a secondary targeting step to optimize the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable, E, on an outcome, Y, is defined in terms of a comparison between two potential outcomes, as E[Y_{E=1} - Y_{E=0}]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable, together with a large application. We propose a coding of the initial data that binarizes the variable of interest: for each category, the non-binary interest variable is transformed into a binary variable taking the value 1 to indicate the presence of that category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to define a pair of potential outcomes and to oppose one category (or group of categories) to another. Let E be a non-binary interest variable. We propose a complete disjunctive coding of E: the initial variable is transformed into a set of binary vectors (dummy variables), E = (Ee : e in {1, ..., |E|}), where each vector (variable) Ee takes the value 0 when its category is absent and 1 when its category is present. This allows us to compute a pairwise TMLE comparing the difference in outcome between one category and all remaining categories. To illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data.
Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme-right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider certain causal questions (additional causal assumptions), leading to a different coding of E as a set of binary vectors, E = (Ee : e in {2, ..., |E|}), where each vector (variable) Ee takes the value 0 when the first category (the reference category) is present and 1 when its own category is present. This allows us to apply a pairwise TMLE comparing the difference in outcome between the first (fixed) level and each remaining level. We confirm that an increase in the level of education decreases the voting rate for the extreme-right party.
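The complete disjunctive coding described above is ordinary one-hot encoding: one 0/1 dummy per category. A minimal sketch with a hypothetical five-level education variable (the category labels are illustrative, not CEVIPOF's):

```python
def disjunctive_coding(values):
    """Complete disjunctive coding: one 0/1 dummy per category,
    as in E = (E_e : e in {1, ..., |E|})."""
    categories = sorted(set(values))
    return categories, [[int(v == c) for c in categories] for v in values]

# Hypothetical five-level education variable for seven respondents.
education = ["none", "primary", "secondary", "bachelor", "master",
             "primary", "secondary"]
cats, dummies = disjunctive_coding(education)
print(cats)
for row in dummies:
    print(row)
```

Each row sums to 1 since exactly one category is present per individual; the reference-category variant in the second coding simply drops the first dummy.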

Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation

Procedia PDF Downloads 72
894 Conditions of the Anaerobic Digestion of Biomass

Authors: N. Boontian

Abstract:

Biological conversion of biomass to methane has received increasing attention in recent years, and grasses have been explored for their potential for anaerobic digestion to methane. In this review, extensive literature data have been tabulated and classified, and the influences of several parameters on the methane potential of these feedstocks are presented. Lignocellulosic biomass represents a largely unused source for biogas and ethanol production. Many factors, including lignin content, crystallinity of cellulose, and particle size, limit the digestibility of the hemicellulose and cellulose present in lignocellulosic biomass. Pretreatments have been used to improve this digestibility, and each pretreatment has its own effects on cellulose, hemicellulose and lignin, the three main components of lignocellulosic biomass. Solid-state anaerobic digestion (SS-AD) generally occurs at solid concentrations higher than 15%, whereas liquid anaerobic digestion (AD) handles feedstocks with solid concentrations between 0.5% and 15%. Animal manure, sewage sludge, and food waste are generally treated by liquid AD, while the organic fraction of municipal solid waste (OFMSW) and lignocellulosic biomass such as crop residues and energy crops can be processed through SS-AD. An increase in operating temperature can improve both the biogas yield and the production efficiency; other practices, such as using AD digestate or leachate as an inoculant or decreasing the solid content, may increase biogas yield but have a negative impact on production efficiency. Focus is placed on substrate pretreatment in AD as a means of increasing biogas yields using today's diversified substrate sources.

Keywords: anaerobic digestion, lignocellulosic biomass, methane production, optimization, pretreatment

Procedia PDF Downloads 359
893 Commercial Automobile Insurance: A Practical Approach of the Generalized Additive Model

Authors: Nicolas Plamondon, Stuart Atkinson, Shuzi Zhou

Abstract:

The insurance industry is usually not the first topic one has in mind when thinking about applications of data science. However, the use of data science in the finance and insurance industries is growing quickly for several reasons, including an abundance of reliable customer data and ferocious competition requiring more accurate pricing. Among the top use cases of data science are pricing optimization, customer segmentation, customer risk assessment, fraud detection, marketing, and triage analytics. The objective of this paper is to present an application of the generalized additive model (GAM) to a commercial automobile insurance product: individually rated commercial automobiles. These are vehicles used for commercial purposes, but for which there is not enough volume to price several vehicles at the same time. The GAM was selected as an improvement over the GLM for its ease of use and its wide range of applications. The model was trained on the largest split of the data to determine the model parameters, and the remaining data were used as a test set to verify the quality of the modeling. We used the Gini coefficient to evaluate the performance of the model; for long-term monitoring, commonly used metrics such as RMSE and MAE will be used. Another topic of interest in the insurance industry is the process of producing the model. We discuss at a high level the interactions between the different teams within an insurance company that need to work together to produce a model and then monitor its performance over time, and we discuss the regulations in place in the insurance industry. Finally, we discuss the maintenance of the model, the fact that new data do not arrive constantly, and the fact that some metrics can take a long time to become meaningful.
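The Gini coefficient used for model evaluation measures how well the model's predictions rank the actual outcomes, normalized so that a perfect ranking scores 1. A sketch with hypothetical claim amounts and model scores (not data from the paper):

```python
def gini(actual, predicted):
    """Normalized Gini: how well `predicted` orders `actual` losses,
    relative to a perfect ordering (actual ordering itself)."""
    def _gini(a, p):
        order = sorted(range(len(a)), key=lambda i: -p[i])
        total = sum(a)
        cum, area = 0.0, 0.0
        for i in order:
            cum += a[i] / total       # cumulative share of losses captured
            area += cum
        return area / len(a) - (len(a) + 1) / (2.0 * len(a))
    return _gini(actual, predicted) / _gini(actual, actual)

# Hypothetical claim amounts and one candidate model's risk scores.
claims = [0, 100, 0, 500, 50, 0]
model_scores = [0.2, 0.3, 0.1, 0.9, 0.8, 0.4]
print(gini(claims, model_scores))
```

Scoring the claims with themselves gives exactly 1, which is why the normalized form is convenient for comparing candidate models.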

Keywords: insurance, data science, modeling, monitoring, regulation, processes

Procedia PDF Downloads 56
892 A Review on Stormwater Harvesting and Reuse

Authors: Fatema Akram, Mohammad G. Rasul, M. Masud K. Khan, M. Sharif I. I. Amir

Abstract:

Australia is a country of some 7.7 million square kilometres with a population of about 22.6 million. At present, water security is a major challenge for Australia: in some areas the use of water resources is approaching, and in others exceeding, the limits of sustainability. A focal point of proposed national water conservation programs is the recycling of both urban storm-water and treated wastewater, but this is not yet widely practiced in Australia, and storm-water in particular is neglected. In Australia, only 4% of storm-water and rainwater is recycled, whereas less than 1% of reclaimed wastewater is reused within urban areas. Therefore, accurate monitoring, assessment and prediction of the availability, quality and use of this precious resource are required for better management. As storm-water is usually of better quality than untreated sewage or industrial discharge, it has better public acceptance for recycling and reuse, particularly for non-potable uses such as irrigation and the watering of lawns and gardens. Existing storm-water recycling practice is far behind the research, and no robust technologies have been developed for this purpose. There is therefore a clear need for modern technologies for assessing the feasibility of storm-water harvesting and reuse. Numerical modelling has, in recent times, become a popular tool for this job. It captures the complex hydrological and hydraulic processes of the study area: the hydrologic model computes the storm-water quantity needed to design the system components, and the hydraulic model helps to route the flow through the storm-water infrastructure. Nowadays a water quality module is often incorporated into these models, and integration with a Geographic Information System (GIS) provides the extra advantage of managing spatial information.
For the overall management of a storm-water harvesting project, however, a Decision Support System (DSS) plays an important role, incorporating a database with the model and GIS for the proper management of temporal information; additionally, a DSS includes evaluation tools and a graphical user interface. This research aims to critically review and discuss all aspects of storm-water harvesting and reuse, such as the available guidelines, public acceptance of water reuse, and the scope of and recommendations for future studies. In addition, this paper identifies and addresses the importance of modern technologies capable of the proper management of storm-water harvesting and reuse.

Keywords: storm-water management, storm-water harvesting and reuse, numerical modelling, geographic information system, decision support system, database

Procedia PDF Downloads 345
891 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques

Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet

Abstract:

5G and 6G technologies offer enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free-space optical (FSO) communication with fiber sensor networks for IoT applications. FSO communication is gaining popularity as an effective alternative to the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in communication applications such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to realize the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. We therefore propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectra obtained from a real experiment.
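Before a deep-learning model, the classical baseline for estimating an FBG peak wavelength from a measured spectrum is a power-weighted centroid, with strain recovered from the Bragg-wavelength shift. A sketch with a hypothetical spectrum; the gauge factor k = 0.78 is a typical literature value, not taken from the paper:

```python
def fbg_peak_wavelength(wavelengths, powers):
    """Estimate a single FBG peak as the power-weighted centroid of the
    reflected spectrum."""
    total = sum(powers)
    return sum(w * p for w, p in zip(wavelengths, powers)) / total

def strain_from_shift(peak_nm, ref_nm, k=0.78):
    """Convert a Bragg-wavelength shift to strain, assuming the typical
    FBG strain-sensitivity factor k ~ 0.78 (an assumption, not a value
    from the paper)."""
    return (peak_nm - ref_nm) / (k * ref_nm)

# Hypothetical reflected spectrum around a 1550 nm grating.
wavelengths = [1549.6, 1549.8, 1550.0, 1550.2, 1550.4]  # nm
powers = [0.05, 0.40, 1.00, 0.40, 0.05]                 # a.u.
peak = fbg_peak_wavelength(wavelengths, powers)
print(f"peak = {peak:.3f} nm, strain = {strain_from_shift(peak, 1550.0):.2e}")
```

A learned model becomes useful precisely where this centroid fails: overlapping peaks from multiplexed FBGs, which is the multi-sensor case the paper targets.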

Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics

Procedia PDF Downloads 34
890 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)

Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude

Abstract:

A comparative analysis of the properties of melon seed and coconut fruit and of their oil yields was carried out in this work using standard AOAC analytical techniques. The results of the analysis revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut), and the crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30-52.90%) than coconut (36.25-49.83%). Physical characterization of the extracted oils was also carried out: the refractive indices obtained are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). Chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon) and 10.050 mg NaOH/g oil (coconut), and saponification values of 187.00 mg KOH/g (melon) and 183.26 mg KOH/g (coconut). The iodine value of the melon oil is 75.00 mg I2/g and that of the coconut oil 81.00 mg I2/g. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and also to optimize the leaching process. Both samples gave their highest oil yields at the same optimal conditions: a solute:solvent ratio of 40 g/ml, a leaching time of 2 hours and a leaching temperature of 50°C, yielding >= 52% for melon seed and >= 48% for coconut. Of the two samples studied, melon seed gives the higher oil yield.

Keywords: coconut, melon, optimization, processing

Procedia PDF Downloads 417
889 Optimization of Gastro-Retentive Matrix Formulation and Its Gamma Scintigraphic Evaluation

Authors: Swapnila V. Shinde, Hemant P. Joshi, Sumit R. Dhas, Dhananjaysingh B. Rajput

Abstract:

The objective of the present study is to develop a hydrodynamically balanced system for atenolol, a β-blocker, as a single-unit floating tablet. Atenolol shows pH-dependent solubility, resulting in a bioavailability of 36%; thus, a site-specific oral controlled-release floating drug delivery system was developed. The formulation makes novel use of the rate-controlling polymer locust bean gum (LBG) in combination with HPMC K4M, together with the gas-generating agent sodium bicarbonate. Tablets were prepared by the direct compression method and evaluated for physico-mechanical properties. A statistical method was utilized to optimize the effect of the independent variables, namely the amounts of HPMC K4M and LBG, on three dependent responses: cumulative drug release, floating lag time, and floating time. Graphical and mathematical analysis of the results allowed the identification and quantification of the formulation variables influencing the selected responses. To study the gastrointestinal transit of the optimized gastro-retentive formulation, in vivo gamma scintigraphy was carried out in six healthy rabbits after radiolabeling the formulation with 99mTc. The transit profiles demonstrated that the dosage form was retained in the stomach for more than 5 hrs. The study signifies the potential of the developed system for stomach-targeted delivery of atenolol with improved bioavailability.

Keywords: floating tablet, factorial design, gamma scintigraphy, antihypertensive model drug, HPMC, locust bean gum

Procedia PDF Downloads 252