Search results for: optimized asset allocation
424 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), at least 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend in the homeless population is crucial to helping states and cities prepare affordable housing plans and other community service plans ahead of time. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during 2020 and 2021, months impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the New York City dataset, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated on data from Seattle that was not part of the training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the Coefficient of Determination (R²) from -11.73 to 0.88 and reduced MSE by 99%.
HP-RNN was then validated on data from Seattle, WA, showing a peak percentage error of 14.5% between the actual and predicted counts. Finally, the model was used to predict the trend during the COVID-19 pandemic. It shows a good correlation between the actual and predicted homeless populations, with a peak percentage error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homelessness-related data. The model shows a close correlation between the actual and predicted homeless populations. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Second, the prediction can serve as a reference for policy makers and legislators as they seek to make changes that may impact the factors closely associated with future homeless population trends. Keywords: homeless, prediction, model, RNN
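The abstract does not disclose HP-RNN's architecture, but any RNN time-series predictor rests on the same supervised framing: the monthly series is sliced into (window, next value) pairs, and the fit is scored with the MSE and R² metrics the abstract cites. A minimal sketch of that framing follows; the monthly counts, the window length, and all function names are illustrative assumptions, not the study's data or code.

```python
import numpy as np

def make_windows(series, window):
    """Slice a 1-D series into supervised pairs: each row of X holds
    `window` consecutive observations; y is the value that follows."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

def mse(actual, predicted):
    """Mean Squared Error, the tuning metric named in the abstract."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean((a - p) ** 2))

def r_squared(actual, predicted):
    """Coefficient of determination; a value below 0 (like the -11.73
    baseline) means the model is worse than always predicting the mean."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((a - p) ** 2)
    ss_tot = np.sum((a - a.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# hypothetical monthly sheltered-homeless counts (not the study's data)
series = [51000, 51400, 52100, 52600, 53000, 53900, 54300, 55100]
X, y = make_windows(series, window=3)   # 5 training pairs of length 3
```

Each row of X would feed one unrolled RNN step sequence, with y as the regression target; the Seattle hold-out in Phase 4 corresponds to computing mse/r_squared on windows the model never saw during tuning.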
Procedia PDF Downloads 121
423 Synthesis of Double Dye-Doped Silica Nanoparticles and Its Application in Paper-Based Chromatography
Authors: Ka Ho Yau, Jan Frederick Engels, Kwok Kei Lai, Reinhard Renneberg
Abstract:
Lateral flow tests are a prevalent technology in various sectors such as food, pharmacology and the biomedical sciences. Colloidal gold (CG) is widely used as the signalling molecule because of its ease of synthesis, ready biomolecular conjugation and its red colour arising from the intrinsic surface plasmon resonance effect (SPRE). However, the production of colloidal gold is costly and requires vigorous conditions, and its stability is easily affected by environmental factors such as pH and high salt content. Silica nanoparticles, by contrast, are well known for their ease of production and stability over a wide range of solvents. Using reverse (w/o) micro-emulsion, silica nanoparticles of different sizes can be produced precisely by controlling the amount of water. By incorporating different water-soluble dyes, a rainbow of silica nanoparticle colours can be produced. Conjugation with biomolecules such as antibodies can be achieved after surface modification of the silica nanoparticles with an organosilane. The optimum amount of antibody for labelling was determined by Bradford assay. In this work, we demonstrated the ability of dye-doped silica nanoparticles to act as the signalling molecule in a lateral flow test, giving a semi-quantitative measurement of the analyte. Image analysis gave an LOD of 10 ng of the analyte. The working range and the linear range of the test were 0 to 2.15 μg/mL and 0 to 1.07 μg/mL (R² = 0.988), respectively. The performance of the tests was comparable to those using colloidal gold, with the advantages of lower cost, enhanced stability and a wide spectrum of colours. The positive lines can be read by the naked eye or imaged with a mobile phone camera for better quantification. Further research has been carried out on the simultaneous multicolour detection of different biomarkers. The preliminary results were promising, as little cross-reactivity was observed for an optimized system.
This approach provides a platform for multicolour detection of a set of biomarkers, enhancing the accuracy of disease diagnostics. Keywords: colorimetric detection, immunosensor, paper-based biosensor, silica
Procedia PDF Downloads 385
422 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas
Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders
Abstract:
A waste-to-energy plasma system was designed by Necsa for commercial use to create electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized for efficiency, so as to remove at least 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m³ syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas. Helium, the least dense of the three gases, simulated higher temperatures, whereas air, the densest, simulated a lower temperature. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature, and can be assumed to achieve slightly higher efficiencies at elevated temperatures. Both design methods, however, led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible; at higher temperatures, these tendencies are expected to be amplified, making the difference between the two design methods more obvious.
Though the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature. Keywords: cyclone, design, plasma, renewable energy, solid separation, waste processing
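The abstract does not reproduce either design method's equations. As a point of reference only, classic cyclone design texts use Lapple's cut-size and grade-efficiency model, sketched below; every operating value in the sketch is an illustrative assumption, not a number from this study.

```python
import math

def lapple_cut_size(mu, inlet_width, n_turns, inlet_velocity, rho_p, rho_g):
    """Lapple cut size d50 (m): the particle diameter collected with 50%
    efficiency. mu = gas viscosity (Pa*s), inlet_width (m), n_turns =
    effective gas spirals, velocities in m/s, densities in kg/m^3."""
    return math.sqrt(9.0 * mu * inlet_width /
                     (2.0 * math.pi * n_turns * inlet_velocity * (rho_p - rho_g)))

def grade_efficiency(d_particle, d50):
    """Lapple grade-efficiency curve: eta = 1 / (1 + (d50/d)^2)."""
    return 1.0 / (1.0 + (d50 / d_particle) ** 2)

# illustrative values: room-temperature air-like gas, dense fly ash
d50 = lapple_cut_size(mu=1.8e-5, inlet_width=0.05, n_turns=6,
                      inlet_velocity=15.0, rho_p=2000.0, rho_g=1.2)
eta_10um = grade_efficiency(10e-6, d50)   # collection efficiency at 10 µm
```

The gas-substitution idea in the study maps directly onto these inputs: swapping air for helium changes mu and rho_g, which shifts d50 and hence the grade efficiency, mimicking the effect of elevated operating temperature.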
Procedia PDF Downloads 214
421 Increase of the Nanofiber Degradation Rate Using PCL-PEO and PCL-PVP as a Shell in the Electrospun Core-Shell Nanofibers Using the Needleless Blades
Authors: Matej Buzgo, Erico Himawan, Ksenija Jašina, Aiva Simaite
Abstract:
Electrospinning is a versatile and efficient technology for producing nanofibers for biomedical applications. One of the most common polymers used to prepare nanofibers for regenerative medicine and drug delivery is polycaprolactone (PCL). PCL is a biocompatible and bioabsorbable material that can be used to stimulate the regeneration of various tissues; it is also commonly used in drug delivery systems made by blending the polymer with small active molecules. However, for many drug delivery applications, e.g. cancer immunotherapy, the PCL biodegradation rate, which may exceed 9 months, is too long, and faster nanofiber dissolution is needed. In this paper, we investigate the dissolution and small-molecule release rates of PCL blends with two hydrophilic polymers: polyethylene oxide (PEO) and polyvinylpyrrolidone (PVP). We show that adding a hydrophilic polymer to PCL reduces the water contact angle, increases the dissolution rate, and strengthens the interactions between the hydrophilic drug and the polymer matrix that further sustain its release. Finally, using this method we were also able to increase the nanofiber degradation rate when PCL-PEO and PCL-PVP were used as the shell in electrospun core-shell nanofibers, and to speed up the release of active proteins from their core. Electrospinning can be used to prepare core-shell nanofibers in which active ingredients are encapsulated in the core and their release rate is regulated by the shell. However, such fibers are usually prepared by coaxial electrospinning, which is an extremely low-throughput technique. An alternative is emulsion electrospinning, which can be scaled up using needleless blades. In this work, we investigate the possibility of using emulsion electrospinning for the encapsulation and sustained release of growth factors for the development of organotypic skin models.
The core-shell nanofibers were prepared using the optimized formulation, and the release rate of proteins from the fibers was investigated for 2 weeks, matching typical cell culture conditions. Keywords: electrospinning, polycaprolactone (PCL), polyethylene oxide (PEO), polyvinylpyrrolidone (PVP)
Procedia PDF Downloads 273
420 Synthesis and Characterization of Iron and Aluminum-Containing AFm Phases
Authors: Aurore Lechevallier, Mohend Chaouche, Jerome Soudier, Guillaume Renaudin
Abstract:
The cement industry accounts for 8% of global CO₂ emissions, and approximately 60% of these emissions are associated with Portland cement clinker production, from the decarbonation of limestone (CaCO₃). The contribution of these emissions to the greenhouse effect has led to growing social awareness; the CO₂ footprint is therefore becoming a product selection criterion, and substituting Portland cement with a lower-footprint alternative binder is sought. In this context, new hydraulic binders have been studied as potential Ordinary Portland Cement substitutes. Many of them are composed of iron and aluminum oxides, present in a Ca₄Al₂₋ₓFe₂₊ₓO₁₀-like phase and forming Ca-LDH (i.e. AFm) phases as hydration products. It has therefore become essential to study the possible existence of Fe/Al AFm solid solutions in order to characterize the hydration process properly. Layered Ca₂Al₂₋ₓFeₓ(OH)₆·X·nH₂O AFm samples intercalated with either nitrate or chloride anions X were synthesized by the co-precipitation method under a nitrogen atmosphere to avoid carbonation. AFm samples intercalated with carbonate anions were synthesized by anionic exchange, using AFm-NO₃ as the source material. These three AFm series were synthesized with varying Fe/Al molar ratios. The experimental conditions were optimized to make the formation of Al-AFm and Fe-AFm possible using the same parameters (namely pH value and salt concentration). Rietveld refinements were performed to demonstrate the existence of a solid solution between the two trivalent metallic end members. Spectroscopic analyses confirmed the intercalation of the targeted anion; secondary electron images were taken to analyze the morphology of the AFm samples, and energy dispersive X-ray spectroscopy (EDX) was carried out to determine their elemental composition.
The results of this study make it possible to quantify the Al/Fe ratio of the AFm phases precipitated in our hydraulic binder, thanks to the Vegard's law determined for the corresponding solid solutions. Keywords: AFm phase, iron-rich binder, low-carbon cement, solid solution
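The Vegard's law invoked above is, in its general form, a linear interpolation of the lattice parameter between the two end members of the solid solution; the end-member symbols below are illustrative placeholders, not refined values from the study.

```latex
% Vegard's law for the Ca2Al2-xFex AFm solid solution: the lattice
% parameter a varies linearly with the Fe substitution fraction x.
a(x) = (1 - x)\, a_{\mathrm{Al}} + x\, a_{\mathrm{Fe}}, \qquad 0 \le x \le 1
```

Inverting the relation, x = (a - a_Al)/(a_Fe - a_Al), which is how a lattice parameter refined by the Rietveld method yields the Al/Fe ratio of a precipitated AFm phase; systematic deviation from the line would signal non-ideal mixing.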
Procedia PDF Downloads 138
419 An Empirical Analysis of Farmers Field Schools and Effect on Tomato Productivity in District Malakand Khyber Pakhtunkhwa-Pakistan
Authors: Mahmood Iqbal, Khalid Nawab, Tachibana Satoshi
Abstract:
A Farmer Field School (FFS) aims to help farmers investigate and learn about field ecology and integrated crop management. The study was conducted to examine the change in productivity of the tomato crop in the study area, to determine the increase in per-acre yield of the crop, and to find out the reduction in per-acre input cost. A study of the tomato crop was conducted in ten villages, namely Jabban, Bijligar Colony, Palonow, Heroshah, Zara Maira, Deghar Ghar, Sidra Jour, Anar Thangi, Miangano Korona and Wartair of district Malakand. From each village, 15 respondents were selected randomly with equal allocation, giving a sample size of 150 respondents. The research was based on primary as well as secondary data. Primary data were collected from farmers, while secondary data were taken from the Agriculture Extension Department Dargai, District Malakand. An interview schedule was prepared, and each farmer was interviewed personally. The study was based on a comparison of the cost, yield and income of tomato before and after FFS. A paired t-test, run in the Statistical Package for the Social Sciences (SPSS), was used for the analysis. The outcomes show that the integrated pest management project has brought a positive change in the attitude of farmers of the project area through the FFS approach. In district Malakand, 66.0% of the respondents were in the age group of 31-50 years; 11.3% of respondents had a primary level of education, 12.7% middle level, 28.7% matric level, 3.3% intermediate level and 2.0% graduate level, while 42.0% of respondents were illiterate with no formal education. The average landholding of farmers was 6.47 acres. The costs of seed, crop protection against insect pests and crop protection against diseases were reduced by Rs. 210.67, Rs. 2584.43 and Rs. 3044.16 respectively, while the costs of fertilizers and farmyard manure increased by Rs. 1548.87 and Rs.
1151.40 respectively, while tomato yield increased by 1585.03 kg/acre, from 7663.87 to 9248.90 kg/acre. The role of FFS, initiated by the integrated pest management project through the department of agriculture extension, was noteworthy for agricultural development: it enhanced tomato crop yields and farmers' incomes through the FFS approach. On the basis of these results, the integrated pest management project should expand its development activities to achieve maximum participation of the rural population through the participatory FFS approach. Keywords: agriculture, farmer field schools, extension education, tomato
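The before/after comparison above rests on a paired t-test, which scores the mean of the per-farmer differences against their spread. A minimal sketch of the statistic follows; the per-farmer yield figures are hypothetical illustrations, not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(before, after):
    """Paired t-test statistic on matched before/after observations:
    t = d_bar / (s_d / sqrt(n)), with n - 1 degrees of freedom."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    d_bar = mean(diffs)
    s_d = stdev(diffs)          # sample standard deviation of the differences
    return d_bar / (s_d / sqrt(n)), n - 1

# hypothetical per-farmer tomato yields (kg/acre) before and after FFS
before = [7500, 7700, 7600, 7800, 7650]
after = [9100, 9350, 9150, 9420, 9240]
t_stat, dof = paired_t_statistic(before, after)
```

A large positive t against the t-distribution with dof degrees of freedom indicates the yield gain is unlikely to be chance; SPSS reports the same statistic together with its p-value.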
Procedia PDF Downloads 613
418 Construction Strategy of Urban Public Space in Driverless Era
Authors: Yang Ye, Hongfei Qiu, Yaqi Li
Abstract:
The planning and construction of traditional cities are car-oriented, which leads to urban public space that is insufficient, fragmented, and inefficiently used. With the development of driverless technology, the urban structure will change from the traditional single-core grid structure to a multi-core model. In terms of traffic organization, as land devoted to traffic facilities is released, public space will become more continuous and more integrated with traffic space. In the context of driverless technology, the reconstruction of urban public space is characterized by modularity and high efficiency, and its planning and layout follow a pattern of points (service facilities), lines (smart lines) and surfaces (activity centers). The public space along driverless urban roads will provide diversified urban public facilities and services. The intensive urban layout allows commercial public space to host central activities in the interior (building atrium) and urban display on the exterior (building periphery). Beyond its recreational function, urban green space can also use underground parking space for the efficient dispatching of shared cars. Roads inside residential communities will be integrated into the urban landscape, providing conditions for community activity spaces that shift with the time of day and improving the efficiency of space utilization. The intervention of driverless technology will change the thinking of traditional urban construction toward a human-oriented one. As a result, urban public space will be richer, more connected and more efficient, and urban spatial justice will be improved.
By summarizing frontier research, this paper discusses the impact of driverless technology on cities, especially urban public space. This is beneficial for landscape architects coping with future development and change in the industry, and provides a reference for related research and practice. Keywords: driverless, urban public space, construction strategy, urban design
Procedia PDF Downloads 114
417 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme
Authors: Eleanor Nel
Abstract:
Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes for evaluating research output and the associated allocation of research funding. This focus on research outputs often outpaces the development of robust, widely accepted tools for additionally measuring research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011, and significant amounts of time, money, and energy have since been invested in it. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publication does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period.
To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from one faculty, chosen as a convenience sample. The purpose of the research was to collect information for developing a comprehensive impact-evaluation framework that could enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme so as to improve its impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results of publication writing programmes more effectively to institutional strategic objectives, so as to improve research performance and quality, and on what should be included in a comprehensive evaluation framework. Keywords: evaluation, framework, impact, research output
Procedia PDF Downloads 76
416 Fexofenadine Hydrochloride Orodispersible Tablets: Formulation and in vitro/in vivo Evaluation in Healthy Human Volunteers
Authors: Soad Ali Yehia, Mohamed Shafik El-Ridi, Mina Ibrahim Tadros, Nolwa Gamal El-Sherif
Abstract:
Fexofenadine hydrochloride (FXD) is a slightly soluble, bitter-tasting drug with an oral bioavailability of 35%; the maximum plasma concentration is reached 2.6 hours post-dose (Tmax). The current work aimed to develop taste-masked FXD orodispersible tablets (ODTs) to increase the extent of drug absorption and reduce Tmax. Taste masking was achieved via solid dispersion (SD) with chitosan (CS) or sodium alginate (ALG). FT-IR, DSC and XRD were performed to identify physicochemical interactions and FXD crystallinity. Taste-masked FXD-ODTs were developed via the addition of superdisintegrants (croscarmellose sodium or sodium starch glycolate; 5% and 10%, w/w) or sublimable agents (camphor, menthol or thymol; 10% and 20%, w/w) to the FXD-SDs. The ODTs were evaluated for weight variation, drug content, friability, wetting time, disintegration time and drug release. The camphor-based (20%, w/w) FXD-ODT (F12) was optimized (F23) by incorporating a more hydrophilic lubricant, sodium stearyl fumarate (Pruv®). The topography of the latter formula was examined via scanning electron microscopy (SEM). The pharmacokinetics of FXD, relative to Allegra® tablets, were estimated in vivo in healthy human volunteers. Based on a gustatory sensation test in healthy volunteers, the FXD:CS (1:1) and FXD:ALG (1:0.5) SDs were selected. The taste-masked FXD-ODTs had appropriate physicochemical properties and showed short wetting and disintegration times. The drug release profiles of F23 and the phenylalanine-containing Allegra® ODT were similar (f2 = 96), showing complete release within two minutes. SEM micrographs revealed pores following camphor sublimation. Compared to Allegra® tablets, the pharmacokinetic studies in healthy volunteers proved the ability of F23 to increase the extent of FXD absorption (14%) and reduce Tmax to 1.83 h. Keywords: fexofenadine hydrochloride, taste masking, chitosan, orodispersible
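The f2 = 96 quoted above is the standard dissolution-profile similarity factor used by regulators, where f2 ≥ 50 is conventionally read as "similar" and 100 means identical profiles. A minimal sketch of the calculation follows; the dissolution profiles in the usage lines are hypothetical, not the study's measurements.

```python
import math

def f2_similarity(reference, test):
    """Similarity factor between two dissolution profiles (% dissolved at
    matched time points): f2 = 50 * log10(100 / sqrt(1 + mean sq. diff))."""
    if len(reference) != len(test):
        raise ValueError("profiles must share the same time points")
    n = len(reference)
    mean_sq_diff = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mean_sq_diff))

# hypothetical % dissolved at four common time points
reference_profile = [20, 45, 75, 90]
test_profile = [18, 43, 72, 88]
f2 = f2_similarity(reference_profile, test_profile)   # well above 50
```

Because the mean squared difference sits inside a log, f2 falls off quickly as profiles diverge, which is why a value as high as 96 indicates near-superimposable release curves.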
Procedia PDF Downloads 344
415 Screening and Optimization of Conditions for Pectinase Production by Aspergillus flavus
Authors: Rumaisa Shahid, Saad Aziz Durrani, Shameel Pervez, Ibatsam Khokhar
Abstract:
Food waste is a prevalent issue in Pakistan, with over 40 percent of food discarded annually. Despite their decay, rotting fruits retain residual nutritional value that is consumed by microorganisms, notably fungi and bacteria. Fungi, preferred for their release of extracellular enzymes, are gaining prominence, particularly for pectinase production. This enzyme offers several advantages, including the clarification of juices by breaking down pectic compounds. In this study, three Aspergillus flavus isolates derived from decomposed fruits and manure were selected for pectinase production. The primary aim was to isolate fungi from diverse waste sources, identify the isolates and assess their capacity for pectinase production. Identification was based on morphological characteristics, with the help of light microscopy and scanning electron microscopy (SEM). Pectinolytic potential was screened on pectin minimal salt agar (PMSA) medium by comparing clear-zone diameters among isolates. Substrate (lemon and orange peel powder) concentrations, pH, temperature, and incubation period were optimized to enhance pectinase yield, with spectrophotometry enabling quantitative analysis. The temperature was set at room temperature (28 °C). The optimal conditions for Aspergillus flavus strain AF1 (isolated from mango) were a pH of 5, an incubation period of 120 hours, and substrate concentrations of 3.3% for orange peels and 6.6% for lemon peels. For AF2 and AF3 (both isolated from soil), the ideal pH and incubation period were the same as for AF1, i.e. pH 5 and 120 hours. However, their optimized substrate concentrations varied, with AF2 showing maximum activity at 3.3% for orange peels and 6.6% for lemon peels, while AF3 peaked at 6.6% for orange peels and 8.3% for lemon peels.
Among the isolates, AF1 demonstrated the best performance under these conditions. Keywords: pectinase, lemon peel, orange peel, Aspergillus flavus
Procedia PDF Downloads 72
414 The Influence of Polysaccharide Isolated from Morinda citrifolia Fruit on the Growth of Vero, He-La and T47D Cell Lines against Doxorubicin in vitro
Authors: Ediati Budi Cahyono, Triana Hertiani, Nauval Arrazy Asawimanda, Wahyu Puji Pratomo
Abstract:
Background: Doxorubicin is widely used as a chemotherapeutic drug despite having many side effects; it may cause macrophage dysfunction and decreased lymphocyte proliferation. Noni (Morinda citrifolia) fruit, which is rich in polysaccharides, has potential antitumor and immunostimulant effects. The isolation of polysaccharide from Noni fruit was optimized across four different methods on the basis of macrophage and lymphocyte activities. The highest polysaccharide content was found with one of the four isolation methods, and the isolation method with the highest immunostimulant effect was used for further observation as co-chemotherapy. Aim of the study: to evaluate the polysaccharide isolated by the method of choice as co-chemotherapy with doxorubicin on the growth of Vero, He-La and T47D cell lines in vitro. Methods: the in vitro growth assay of the Vero, He-La and T47D cell lines was performed using the MTT-reduction method, and apoptosis was assessed by double staining to evaluate the apoptosis-inducing effect of the combination. Every group was treated with doxorubicin and the isolated polysaccharide at four concentrations (25 µg/ml, 50 µg/ml, 100 µg/ml and 200 µg/ml), along with a negative control (doxorubicin only) and a normal control (no doxorubicin or polysaccharide). Results: against the He-La and T47D cell lines, the combination of the polysaccharide fraction at 100 µg/ml with 2 µmol doxorubicin showed the strongest cytotoxic effect, suppressing cell viability compared with doxorubicin alone; the same combination also induced a stronger apoptotic effect in the He-La cell line than doxorubicin alone. Conclusion: the combined effect of the polysaccharide fraction and doxorubicin is more selective toward the He-La and T47D cell lines than toward the Vero cell line.
This suggests that the polysaccharide isolated by the method of choice has co-chemotherapy activity alongside doxorubicin. Keywords: polysaccharide, noni fruit, doxorubicin, cancer cell lines, vero cell line
Procedia PDF Downloads 251
413 Identifying and Optimizing the Critical Excipients in Moisture Activated Dry Granulation Process for Two Anti TB Drugs of Different Aqueous Solubilities
Authors: K. Srujana, Vinay U. Rao, M. Sudhakar
Abstract:
Isoniazid (INH), a freely water-soluble drug, and pyrazinamide (Z), a practically water-insoluble one, both first-line anti-tubercular (TB) drugs, were identified as candidates for optimizing the Moisture Activated Dry Granulation (MADG) process. The work focuses on identifying the effect of binder type and concentration, as well as the effect of magnesium stearate level, on the critical quality attributes of disintegration time (DT) and in vitro dissolution when the tablets are processed by MADG. The levels of drug, binder and fluid addition during the agglomeration stage of the MADG process were also evaluated and optimized. For INH, it was identified that with HPMC as binder at both the 2% w/w and 5% w/w levels, and magnesium stearate up to 1% w/w as lubricant, the DT is within 1 minute and the dissolution rate is the fastest (>80% in 15 minutes), as compared to when PVP or pregelatinized starch is used as binder. Regarding the process, fast-disintegrating and rapidly dissolving tablets are obtained when the levels of drug, binder and fluid uptake in the agglomeration stage are 25% w/w, 0% w/w binder and 0.033% w/w respectively; at the other two levels of these three ingredients, the DT is significantly impacted and dissolution is slower. For pyrazinamide, it was identified that with 2% w/w each of PVP as binder and croscarmellose sodium as disintegrant, the DT is within 2 minutes and the dissolution rate is the fastest (>80% in 15 minutes), as compared to when HPMC or pregelatinized starch is used as binder. This may be attributed to PVP acting as a solubilizer for the practically insoluble pyrazinamide.
Regarding the process, fast-dispersing and rapidly disintegrating tablets are obtained when the levels of drug, binder and fluid uptake in the agglomeration stage are 10% w/w, 25% w/w binder and 1% w/w respectively. At the other two levels of these three ingredients, the DT is significantly impacted and dissolution is comparatively slower and less complete. Keywords: agglomeration stage, isoniazid, MADG, moisture distribution stage, pyrazinamide
Procedia PDF Downloads 239
412 A Photoredox (C)sp³-(C)sp² Coupling Method Comparison Study
Authors: Shasline Gedeon, Tiffany W. Ardley, Ying Wang, Nathan J. Gesmundo, Katarina A. Sarris, Ana L. Aguirre
Abstract:
Drug discovery and delivery involve drug targeting, an approach that helps find a drug against a chosen target through high-throughput screening and other methods, by way of identifying the physical properties of the potential lead compound. The physical properties of potential drug candidates have been an imperative focus since the unveiling of Lipinski's Rule of 5 for oral drugs. Throughout a compound's journey from discovery through the clinical phases to becoming a marketed drug, the desirable properties are optimized while toxicity and undesirable properties are minimized or eliminated. In the pharmaceutical industry, the ability to generate molecules in parallel with maximum efficiency is substantially enabled by sp²-sp² carbon coupling reactions, e.g. Suzuki couplings, which allow aromatic fragments to be added to a compound. More recent literature has found benefits to decreasing aromaticity, calling instead for more sp³-sp² carbon coupling reactions. The objective of this project is to compare various sp³-sp² carbon coupling methods and reaction conditions, collecting data on production of the desired product. Four coupling methods were tested across three cores and 4-5 installation groups per method, with each method run under three distinct reaction conditions. The tested methods were the Photoredox Decarboxylative Coupling, the Photoredox Potassium Alkyl Trifluoroborate (BF₃K) Coupling, the Photoredox Cross-Electrophile (PCE) Coupling, and the Weix Cross-Electrophile (WCE) Coupling. The results showed that the Decarboxylative method struggled to yield product despite the several literature conditions chosen, while the BF₃K and PCE methods produced competitive results. Of the two cross-electrophile coupling methods, the Photoredox method surpassed the Weix method on numerous accounts.
The results will be used to build future libraries. Keywords: drug discovery, high throughput chemistry, photoredox chemistry, sp³-sp² carbon coupling methods
Procedia PDF Downloads 144411 Identification of Vehicle Dynamic Parameters by Using Optimized Exciting Trajectory on 3-DOF Parallel Manipulator
Authors: Di Yao, Gunther Prokop, Kay Buttner
Abstract:
Dynamic parameters, including the center of gravity, mass, and moments of inertia of a vehicle, play an essential role in vehicle simulation, collision testing, and real-time control of vehicle active systems. To identify these important vehicle dynamic parameters, a systematic parameter identification procedure is studied in this work. In the first step of the procedure, a conceptual parallel manipulator (virtual test rig) possessing three rotational degrees of freedom is proposed. To characterize its kinematics, a kinematic analysis comprising the inverse kinematics and the singularity architecture is carried out. Based on Euler's rotation equations for rigid-body dynamics, the dynamic model of the parallel manipulator and the derivation of the measurement matrix for parameter identification are presented subsequently. In order to reduce the sensitivity of the parameter identification to measurement noise and other unexpected disturbances, an optimization process that searches for the optimal exciting trajectory of the parallel manipulator is conducted next. For this purpose, 3-2-1 Euler angles defined by a parameterized finite Fourier series are used to describe the general exciting trajectory of the parallel manipulator. To minimize the condition number of the measurement matrix and thereby achieve better parameter identification accuracy, the unknown coefficients of the finite Fourier series are estimated by an iterative algorithm implemented in MATLAB®. The iterative algorithm also ensures that the parallel manipulator remains in an achievable working state during the execution of the optimal exciting trajectory. 
It is shown that the proposed procedure and methods can effectively identify the vehicle dynamic parameters and could be an important application of parallel manipulators in the fields of parameter identification and test rig development. Keywords: parameter identification, parallel manipulator, singularity architecture, dynamic modelling, exciting trajectory
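The condition-number minimization at the heart of this approach can be sketched in a few lines. The toy below is illustrative only, not the authors' MATLAB® implementation: it uses a single Euler angle, a hypothetical two-harmonic Fourier parameterization, a simplified regressor matrix (angle, rate, acceleration), and a random-search loop standing in for their iterative algorithm; the workspace constraint is omitted.

```python
import numpy as np

def angle(coeffs, t, omega=0.5):
    """One Euler angle as a 2-harmonic finite Fourier series (toy form)."""
    a1, b1, a2, b2 = coeffs
    return (a1 * np.sin(omega * t) + b1 * np.cos(omega * t)
            + a2 * np.sin(2 * omega * t) + b2 * np.cos(2 * omega * t))

def measurement_matrix(coeffs, t):
    """Toy regressor with angle, angular rate, and angular acceleration columns."""
    phi = angle(coeffs, t)
    dphi = np.gradient(phi, t)
    return np.column_stack([phi, dphi, np.gradient(dphi, t)])

def optimize(t, iters=300, seed=0):
    """Random-search stand-in for the iterative condition-number minimization."""
    rng = np.random.default_rng(seed)
    best = rng.normal(size=4)
    best_cond = np.linalg.cond(measurement_matrix(best, t))
    for _ in range(iters):
        cand = best + 0.1 * rng.normal(size=4)
        cond = np.linalg.cond(measurement_matrix(cand, t))
        if cond < best_cond:                 # keep trajectories that better
            best, best_cond = cand, cond     # condition the measurement matrix
    return best, best_cond

t = np.linspace(0.0, 10.0, 500)
coeffs, cond = optimize(t)
print(f"condition number after optimization: {cond:.2f}")
```

A well-conditioned measurement matrix makes the least-squares parameter estimate less sensitive to measurement noise, which is why the condition number is the natural objective here.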
Procedia PDF Downloads 266410 Evolutionary Advantages of Loneliness with an Agent-Based Model
Authors: David Gottlieb, Jason Yoder
Abstract:
The feeling of loneliness is not uncommon in modern society, and yet, there is a fundamental lack of understanding in its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance, resulting in social withdrawal, which may appear maladaptive to modern society. So far, no computational model of loneliness’ effect during evolution yet exists; however, agent-based models (ABM) can be used to investigate social behavior, and applying evolution to agents’ behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors, and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, which is inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward its social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resource collected. 
With these rules in place, we are able to run evolution under various conditions, including resource-rich environments, and when disease is present. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness, but also reflects the diversity of response to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness was present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to the other, who was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency. Keywords: agent-based, behavior, evolution, loneliness, social
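A minimal sketch of the agent update described above, with illustrative coefficients that are assumptions rather than the study's values: boid-style cohesion, separation, and alignment, plus a loneliness rule that engages when perceived social involvement (neighbors within a radius) falls below an expected level and steers the agent toward its nearest neighbor. Goal-seeking, connection formation, and the evolutionary loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N, RADIUS, EXPECTED = 20, 2.0, 3   # EXPECTED: assumed expected social involvement
pos = rng.uniform(0.0, 10.0, (N, 2))
vel = rng.normal(0.0, 0.1, (N, 2))

def step(pos, vel):
    new_vel = 0.9 * vel                         # damping keeps speeds bounded
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < RADIUS)
        if nbr.any():                           # boid-style social rules
            cohesion = pos[nbr].mean(axis=0) - pos[i]
            separation = (pos[i] - pos[nbr]).sum(axis=0)
            alignment = vel[nbr].mean(axis=0) - vel[i]
            new_vel[i] += 0.05 * cohesion + 0.02 * separation + 0.05 * alignment
        if nbr.sum() < EXPECTED:                # loneliness: perceived < expected
            j = int(np.argmin(np.where(d > 0, d, np.inf)))
            new_vel[i] += 0.1 * (pos[j] - pos[i]) / d[j]
    return pos + new_vel, new_vel

for _ in range(50):
    pos, vel = step(pos, vel)
print("final mean position:", np.round(pos.mean(axis=0), 2))
```

In the full model, evolution would tune the behavioral weights themselves, with fitness given by the resources each agent collects.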
Procedia PDF Downloads 97409 Advancing Women's Participation in SIDS' Renewable Energy Sector: A Multicriteria Evaluation Framework
Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos
Abstract:
Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" enclose attributes of geographical locations or environments. This dimension includes walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist. 
In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives. Keywords: gender, women, spatial analysis, renewable energy, access
Procedia PDF Downloads 69408 A Multicriteria Evaluation Framework for Enhancing Women's Participation in SIDS Renewable Energy Sector
Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos
Abstract:
Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" enclose attributes of geographical locations or environments. This dimension includes walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist. 
In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives. Keywords: gender, women, spatial analysis, renewable energy, access
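A common way to implement such a spatial MCE is a weighted overlay: rescale each factor layer to a common 0-1 score and combine with weights. The sketch below is a toy under stated assumptions: random rasters stand in for the 24 factor layers, the weights are equal, and the enabling-environment cutoff of 0.6 is invented; none of these values come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
SHAPE = (50, 50)   # toy raster grid over a region

# one illustrative layer per dimension (the study aggregates 24 factors)
layers = {
    "individual": rng.uniform(size=SHAPE),
    "contextual": rng.uniform(size=SHAPE),
    "accessibility": rng.uniform(size=SHAPE),
    "place": rng.uniform(size=SHAPE),
}
weights = {k: 0.25 for k in layers}   # equal weights, an assumption

def normalize(layer):
    """Rescale a factor layer to a common 0-1 score."""
    lo, hi = layer.min(), layer.max()
    return (layer - lo) / (hi - lo)

suitability = sum(w * normalize(layers[k]) for k, w in weights.items())
enabling = suitability > 0.6          # assumed cutoff for an enabling environment
print(f"share of enabling cells: {enabling.mean():.1%}")
```

The resulting suitability surface is what would be mapped to visualize regions with conducive environments for women's access to RE jobs.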
Procedia PDF Downloads 83407 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus far less researched than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate time-series forecasting. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep-learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using Long Short-Term Memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTMs have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning the long-term dependencies, via their memory function, that traditional neural networks fail to capture. 
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM based model are discussed with respect to varying input sequences (three days, seven days, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which improved the accuracy of the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time-series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored more within the asset management industry. Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
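As a sketch of the memory function the abstract refers to, the minimal NumPy forward pass below runs a single LSTM cell over a 14-step window (the study's longest input length): the input, forget, and output gates decide what the persistent cell state keeps across time steps. This is an illustrative toy with random weights and a synthetic price window, not the study's trained models.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates control what the long-term cell state c keeps."""
    z = W @ x + U @ h + b                      # stacked pre-activations, 4 gates
    H = h.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:H]), sig(z[H:2*H]), sig(z[2*H:3*H])
    g = np.tanh(z[3*H:])                       # candidate memory
    c = f * c + i * g                          # forget old state, write new
    h = o * np.tanh(c)                         # exposed hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8                             # toy sizes, not the study's model
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

prices = np.sin(np.linspace(0, 3, 14))         # stand-in 14-day input window
h, c = np.zeros(n_hid), np.zeros(n_hid)
for p in prices:
    h, c = lstm_step(np.array([p]), h, c, W, U, b)
print("final hidden state:", np.round(h, 3))
```

In a full model, the final hidden state would feed a dense output layer predicting the next price, and the weights would be learned by backpropagation through time.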
Procedia PDF Downloads 136406 Political Economy and Human Rights Engaging in Conversation
Authors: Manuel Branco
Abstract:
This paper argues that mainstream economics is one of the reasons that can explain the difficulty in fully realizing human rights, because its logic is intrinsically contradictory to human rights, most especially economic, social, and cultural rights. First, its utilitarianism, in both its cardinal and ordinal understanding, contradicts human rights principles. Maximizing aggregate utility along the lines of cardinal utility is a theoretical exercise that consists in ensuring as much as possible that gains outweigh losses in society. In this process, however, an individual may become worse off. While mainstream logic is comfortable with this, human rights logic is not. Indeed, universality is a key principle in human rights, and for this reason the maximization exercise should aim at satisfying all citizens’ requests when goods and services necessary to secure human rights are at stake. The ordinal version of utilitarianism, in turn, contradicts the human rights principle of indivisibility. Contrary to ordinal utility theory, which ranks baskets of goods, human rights do not accept ranking when these goods and services are necessary to secure human rights. Second, by relying preferably on market logic to allocate goods and services, mainstream economics contradicts human rights because the intermediation of money prices and the purpose of profit may cause exclusion, thus compromising the principle of universality. Finally, mainstream economics sees human rights mainly as constraints on the development of its logic. According to this view, securing human rights would be considered a cost weighing on economic efficiency and, therefore, something to be minimized. Fully realizing human rights therefore needs a different approach. This paper discusses a human rights-based political economy. 
This political economy should, among other characteristics, give up mainstream economics' narrow utilitarian approach, give up its belief that market logic should guide all exchanges of goods and services between human beings, and give up its view of human rights as constraints on rational choice and, consequently, on good economic performance. Giving up the mainstream's narrow utilitarian approach means, first, embracing procedural utility and human rights-aimed consequentialism. Second, a more radical break can be imagined: non-utilitarian, or even anti-utilitarian, approaches may emerge as alternatives, though these two standpoints are not necessarily mutually exclusive. Giving up market exclusivity means embracing decommodification, more specifically an approach that takes into consideration the value produced outside the market and an allocation process no longer necessarily centered on money prices. Giving up the view of human rights as constraints means, finally, considering human rights as an expression of wellbeing and a manifestation of choice. This means, in turn, an approach that uses indicators of economic performance other than growth at the macro level and profit at the micro level, because what we measure affects what we do. Keywords: economic and social rights, political economy, economic theory, markets
Procedia PDF Downloads 152405 A Fast Optimizer for Large-scale Fulfillment Planning based on Genetic Algorithm
Authors: Choonoh Lee, Seyeon Park, Dongyun Kang, Jaehyeong Choi, Soojee Kim, Younggeun Kim
Abstract:
Market Kurly is the first South Korean online grocery retailer to guarantee same-day, overnight shipping. More than 1.6 million customers place an average of 4.7 million orders and add 3 to 14 products to a cart per month. The company has sold almost 30,000 kinds of products in the past 6 months, including food items, cosmetics, kitchenware, toys for kids and pets, and even flowers. The company operates and is expanding multiple dry, cold, and frozen fulfillment centers in order to store and ship these products. Due to the scale and complexity of the fulfillment, pick-pack-ship processes are planned and operated in batches, and thus the planning that assigns customers’ orders to batches is a critical factor in overall productivity. This paper introduces a metaheuristic optimization method that reduces the complexity of batch processing in a fulfillment center. The method is an iterative genetic algorithm with heuristic creation and evolution strategies; it aims to group similar orders into pick-pack-ship batches to minimize the total number of distinct products. With a well-designed approach to creating initial genes, the method produces streamlined plans, up to 13.5% less complex than the actual plans carried out in the company’s fulfillment centers in the previous months. Furthermore, our digital-twin simulations show that the optimized plans can reduce packing operation time by 3%, packing being the most complex and time-consuming task in the process. The optimization method implements a multithreading design on the Spring framework to support the company’s warehouse management systems in near real time, finding a solution for 4,000 orders within 5 to 7 seconds on an AWS c5.2xlarge instance. Keywords: fulfillment planning, genetic algorithm, online grocery retail, optimization
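A minimal sketch of the batching idea: the objective is the one stated in the abstract (total distinct products summed over batches), while everything else is an assumption for illustration: a toy order set, six batches, an invented per-batch capacity, and a generic truncation-selection GA standing in for the company's heuristic creation and evolution strategies.

```python
import random

random.seed(0)
N_ORDERS, N_PRODUCTS, N_BATCHES, CAP = 60, 30, 6, 12  # CAP: assumed batch capacity

# toy orders: each order holds 3 to 14 distinct products, as in the abstract
orders = [frozenset(random.sample(range(N_PRODUCTS), random.randint(3, 14)))
          for _ in range(N_ORDERS)]

def cost(assign):
    """Total distinct products summed over batches, plus a capacity penalty."""
    skus = [set() for _ in range(N_BATCHES)]
    counts = [0] * N_BATCHES
    for order, b in zip(orders, assign):
        skus[b] |= order
        counts[b] += 1
    return sum(len(s) for s in skus) + 100 * sum(max(0, c - CAP) for c in counts)

def evolve(pop_size=30, gens=150):
    pop = [[random.randrange(N_BATCHES) for _ in orders] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(N_ORDERS)] = random.randrange(N_BATCHES)  # mutate
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("batch-complexity cost of best plan:", cost(best))
```

Grouping orders that share products into the same batch lowers the cost, which is exactly why similar orders end up batched together.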
Procedia PDF Downloads 83404 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used for the prediction of its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes. 
Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared with and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure, the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method to the determination of chloride concentration in concrete. Keywords: chemical analysis, concrete, LIBS, spectroscopy
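One model input that benefits from the finer 1 mm depth resolution is the apparent chloride diffusion coefficient. The sketch below fits the standard Fick's-second-law error-function profile to a synthetic depth profile of the kind LIBS produces; the surface concentration, exposure time, and diffusion coefficient are illustrative assumptions, and a coarse grid search stands in for a proper least-squares fit.

```python
import math

def chloride_profile(x_mm, D, Cs=0.8, t_years=10.0):
    """Fick's second law solution: C(x, t) = Cs * erfc(x / (2*sqrt(D*t)))."""
    x = x_mm / 1000.0                  # depth in m
    t = t_years * 365.25 * 24 * 3600   # exposure time in s
    return Cs * math.erfc(x / (2.0 * math.sqrt(D * t)))

# synthetic 1 mm-resolution profile (chloride as % of binder mass, illustrative)
D_TRUE = 5e-12                         # assumed diffusion coefficient, m^2/s
depths = range(0, 40)
measured = [chloride_profile(d, D_TRUE) for d in depths]

# recover D with a coarse grid search (a stand-in for a least-squares fit)
candidates = [n * 1e-13 for n in range(1, 200)]
D_fit = min(candidates, key=lambda D: sum(
    (chloride_profile(d, D) - m) ** 2 for d, m in zip(depths, measured)))
print(f"fitted D = {D_fit:.1e} m^2/s")
```

With a 10 mm sampling interval, the same 40 mm profile would yield only four points for this fit, which is why the finer resolution gives more reliable model parameters.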
Procedia PDF Downloads 105403 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry
Authors: Vadanasundari Vedarethinam, Kun Qian
Abstract:
Metabolic analysis is more distal than proteomics and genomics in clinical practice and needs rationally distinct techniques, designed materials, and devices for clinical diagnosis. Conventional techniques such as spectroscopy, biochemical analyzers, and electrochemical methods have been used for metabolic diagnosis. Currently, there are four major challenges: (I) the long sample pretreatment process; (II) difficulties in the direct metabolic analysis of biosamples due to their complexity; (III) accurate detection of low-molecular-weight metabolites; and (IV) the construction of diagnostic tools on materials- and device-based platforms for real-case biomedical applications. The development of chips with nanomaterials is promising to address these critical issues. Mass spectrometry (MS) has displayed high sensitivity and accuracy, throughput, reproducibility, and resolution for molecular analysis. In particular, laser desorption/ionization mass spectrometry (LDI MS) combined with devices affords desirable speed for mass measurement in seconds and high sensitivity at low cost for large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting as a hot carrier in LDI MS through a series of chips with gold nanoshells on the surface, made by controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscaled experiments, which afforded direct high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF), or exosomes. Further, we demonstrated on-chip direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification. 
To our best knowledge, this work initiates a bionanotechnology-based platform for advanced metabolic analysis toward large-scale diagnostic use. Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics
Procedia PDF Downloads 162402 Optimization of Polymerase Chain Reaction Condition to Amplify Exon 9 of PIK3CA Gene in Preventing False Positive Detection Caused by Pseudogene Existence in Breast Cancer
Authors: Dina Athariah, Desriani Desriani, Bugi Ratno Budiarto, Abinawanto Abinawanto, Dwi Wulandari
Abstract:
Breast cancer is regulated by many genes. Defects in the PIK3CA gene, especially at the hot spot mutation positions of exon 9 (E542K and E545K), induce early transformation of breast cells. Early detection of breast cancer based on the mutation profile of this hot spot region can be hampered by the existence of a pseudogene, marked by a substitution mutation at base 1658 (E545A) and a deletion at 1659, as previously proven in several cancers. To the best of the authors’ knowledge, until recently no studies have reported the pseudogene phenomenon in breast cancer. Here, we report PCR optimization to obtain the true exon 9 of the PIK3CA gene rather than its pseudogene, hence increasing the validity of the data. Material and methods: two genomic DNA samples, coded Dev and En, were used in this experiment. Two pairs of primers were designed for the standard PCR method, with PCR product sizes of 200 bp and 400 bp, while another primer pair was designed for Nested-PCR followed by DNA sequencing. For Nested-PCR, we optimized the annealing temperatures of the first and second runs of PCR, and the PCR cycle number of the first run (15x versus 25x). Results: standard PCR using both designed primer pairs failed to detect the true PIK3CA gene; a substitution mutation at 1658 and a deletion at 1659 appearing in the sequence chromatogram of the PCR product indicated the pseudogene. Meanwhile, Nested-PCR under optimum conditions (annealing temperature of 55°C for the first round and 60.7°C for the second round, with 15x PCR cycles) could detect the true PIK3CA gene. The Dev sample was identified as wild type, while the En sample contained one substitution mutation at position 545 of exon 9, indicating an amino acid change from E to K. In conclusion, the pseudogene also exists in breast cancer, and the application of the optimized Nested-PCR in this study could detect the true exon 9 of the PIK3CA gene. Keywords: breast cancer, exon 9, hotspot mutation, PIK3CA, pseudogene
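Annealing temperatures like those tuned above are commonly chosen a few degrees below the primer melting temperature. A quick first estimate for short oligos is the Wallace rule, sketched below; the 12-mer is a hypothetical example, not one of the study's PIK3CA primers, and the "Tm minus 5 °C" starting point is a rule of thumb rather than the study's procedure.

```python
def wallace_tm(primer: str) -> int:
    """Wallace rule for short oligos: Tm = 2*(A+T) + 4*(G+C), in degrees C."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

primer = "ATGCGTAGCTAG"  # hypothetical 12-mer, not an actual PIK3CA primer
tm = wallace_tm(primer)
print(f"Tm = {tm} C, so try annealing around {tm - 5} C")
```

For longer primers, nearest-neighbor thermodynamic models give more accurate estimates, but the optimum (here 55 °C and 60.7 °C for the two rounds) is still found empirically, e.g., by gradient PCR.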
Procedia PDF Downloads 244401 Role of SiOx Interlayer on Lead Oxide Electrodeposited on Stainless Steel for Promoting Electrochemical Treatment of Wastewater Containing Textile Dye
Authors: Hanene Akrout, Ines Elaissaoui, Sabrina Grassini, Daniele Fulginiti, Latifa Bousselmi
Abstract:
The main objective of this work is to investigate the depollution efficiency of a PbO₂ layer deposited onto a stainless steel (SS) substrate with SiOx as an interlayer. The elaborated electrode was used as the anode for the anodic oxidation of wastewater containing Amaranth dye as a model recalcitrant organic pollutant. The SiOx interlayer was deposited on the SS substrate by Plasma Enhanced Chemical Vapor Deposition (PECVD) in a plasma fed with argon, oxygen, and tetraethoxysilane (TEOS, Si precursor) in different ratios. The PbO₂ layer was produced by pulsed electrodeposition on SS/SiOx. The morphology of the different surfaces is depicted with Field Emission Scanning Electron Microscopy (FESEM), and the composition of the lead oxide layer was investigated by X-Ray Diffractometry (XRD). The results showed that a SiOx interlayer with richer oxygen content better improved the nucleation of the β-PbO₂ form. Electrochemical Impedance Spectroscopy (EIS) measurements undertaken on the different interfaces (at optimized conditions) revealed a decrease of Rfilm, while the CPE of the film increases, for SiOx interlayers of a more inorganic nature deposited in plasmas fed with higher O₂-to-TEOS ratios. Quantitative determinations of the Amaranth dye degradation rate were performed in terms of colour and COD removals, reaching 95% and 80% removal, respectively, at pH = 2 in 300 min. The results proved the improvement of the degradation of wastewater containing the Amaranth dye. During electrolysis, the Amaranth dye solution was sampled at 30 min intervals and analyzed by High-Performance Liquid Chromatography (HPLC). The gradual degradation of the Amaranth dye was confirmed by the decrease in UV absorption using the SS/SiOx(20:20:1)/PbO₂ anode; the reaction exhibited apparent first-order kinetics over an electrolysis time of 5 hours, with an initial rate constant of about 0.02 min⁻¹. Keywords: electrochemical treatment, PbO₂ anodes, COD removal, plasma
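Under the reported apparent first-order kinetics, the remaining dye fraction follows C/C₀ = e^(−kt). The snippet below tabulates the removal implied by the initial rate constant k ≈ 0.02 min⁻¹ at several electrolysis times; this assumes the initial constant holds over the whole run, which the 30 min HPLC sampling is there to check.

```python
import math

K = 0.02  # apparent first-order rate constant from the abstract, in min^-1

def remaining(t_min, k=K):
    """Fraction of dye remaining under first-order kinetics: C/C0 = exp(-k*t)."""
    return math.exp(-k * t_min)

for t in (30, 60, 150, 300):
    print(f"t = {t:3d} min -> {100 * (1 - remaining(t)):.1f}% removed")
```

Plotting ln(C/C₀) against time gives a straight line of slope −k, which is how an apparent first-order fit like this is usually verified from the sampled concentrations.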
Procedia PDF Downloads 193400 Cross-Linked Amyloglucosidase Aggregates: A New Carrier Free Immobilization Strategy for Continuous Saccharification of Starch
Authors: Sidra Pervez, Afsheen Aman, Shah Ali Ul Qader
Abstract:
The importance of attaining optimum performance of an enzyme is often a question of devising an effective method for its immobilization. Cross-linked enzyme aggregates (CLEAs) are a new, carrier-free approach to enzyme immobilization. The method is exquisitely simple, involving precipitation of the enzyme from aqueous buffer followed by cross-linking of the resulting physical aggregates of enzyme molecules, and is amenable to rapid optimization. Among industrial enzymes, amyloglucosidase is an important amylolytic enzyme that hydrolyzes alpha (1→4) and alpha (1→6) glycosidic bonds in the starch molecule and produces glucose as the sole end product. The glucose liberated by amyloglucosidase can be used for the production of ethanol and glucose syrups; besides this, amyloglucosidase is widely used in various food and pharmaceutical industries. For production of amyloglucosidase on a commercial scale, filamentous fungi of the genus Aspergillus are mostly used because they secrete large amounts of enzymes extracellularly. The current investigation was based on the isolation and identification of filamentous fungi of the genus Aspergillus for the production of amyloglucosidase in submerged fermentation and the optimization of cultivation parameters for starch saccharification. Natural isolates were identified as Aspergillus niger KIBGE-IB36, Aspergillus fumigatus KIBGE-IB33, Aspergillus flavus KIBGE-IB34, and Aspergillus terreus KIBGE-IB35 on a taxonomical basis and by 18S rDNA analysis, and their sequences were submitted to GenBank. Among them, Aspergillus fumigatus KIBGE-IB33 was selected on the basis of maximum enzyme production. After optimization of the fermentation conditions, the enzyme was immobilized as CLEAs. Different parameters were optimized for maximum immobilization of amyloglucosidase. 
Data on enzyme stability (thermal and storage) and reusability suggest the applicability of the immobilized amyloglucosidase for continuous saccharification of starch in industrial processes. Keywords: aspergillus, immobilization, industrial processes, starch saccharification
Procedia PDF Downloads 496
399 Microfluidic Based High Throughput Screening System for Photodynamic Therapy against Cancer Cells
Authors: Rina Lee, Chung-Hun Oh, Eunjin Lee, Jeongyun Kim
Abstract:
Photodynamic therapy (PDT) is a treatment that uses a photosensitizer as a drug to damage and kill cancer cells. After the photosensitizer is injected into the bloodstream, it is absorbed selectively by cancer cells. The area to be treated is then exposed to specific wavelengths of light, and the photosensitizer produces a form of oxygen that kills nearby cancer cells. PDT has the advantage of destroying the tumor with minimal side effects on normal cells. However, PDT is not yet a mature method of cancer therapy, because its mechanism is not fully understood and parameters such as light intensity and photosensitizer dose have not been optimized for different types of cancer. To optimize these parameters, we propose a novel microfluidic system that automatically controls the intensity of light exposure from a personal computer (PC). The polydimethylsiloxane (PDMS) microfluidic chip is composed of (1) a cell-culture channel layer, in which cancer cells are trapped and tested with various doses of Photofrin (1 μg/ml used for this test) as the photosensitizer, and (2) a color-dye layer serving as a neutral density (ND) filter that reduces the intensity of light reaching the cell-culture channels. Eight different light intensities (10%, 20%, …, 100%) were generated through various concentrations of blue dye filling the ND filter. As the light source, a light-emitting diode (LED) with a 635 nm wavelength was placed above the PDMS microfluidic chip. The total light-exposure time was 30 minutes, and the HeLa and PC3 cancer cell lines were tested. Cell viability was evaluated with a Live/Dead assay kit (L-3224, Invitrogen, USA). The stronger the light intensity, the lower the observed cell viability, and vice versa. The system was thus demonstrated by investigating PDT against cancer cells to optimize the critical parameters of light intensity and photosensitizer dose.
Our results suggest that the system can be used to optimize the combined parameters of light intensity and photosensitizer dose against diverse cancer cell types.
Keywords: photodynamic therapy, Photofrin, high throughput screening, HeLa
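The eight dye-based ND filter levels above can be sketched numerically. The following is a minimal illustration, assuming Beer–Lambert attenuation T = 10^(−ε·c·l); the molar absorptivity eps and optical path length l are hypothetical values, not taken from the abstract:

```python
# Sketch: choosing blue-dye concentrations for a set of target
# transmittances, assuming Beer-Lambert attenuation T = 10**(-eps*c*l).
# eps and l are hypothetical stand-ins, not values from the paper.
import math

eps = 1.5e4      # assumed molar absorptivity of the blue dye (L/mol/cm)
l = 0.01         # assumed optical path through the dye layer (cm)

# Target transmittances corresponding to the 10%..100% intensity levels
targets = [i / 10 for i in range(1, 11)]

# Invert Beer-Lambert: c = -log10(T) / (eps * l)
concentrations = [-math.log10(T) / (eps * l) for T in targets]

for T, c in zip(targets, concentrations):
    print(f"{T:>4.0%} transmittance -> dye concentration {c:.2e} mol/L")
```

Note that 100% transmittance corresponds to zero dye, and lower transmittances require monotonically higher concentrations, which is what allows a single chip to expose each channel to a different effective light dose.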
Procedia PDF Downloads 383
398 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm
Authors: Lydia Novozhilova, Vladimir Urazhdin
Abstract:
An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessel and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space; every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined in this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by using the Support Vector Machine (SVM) classification algorithm of machine learning to identify the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating the admissible and inadmissible sets of design parameters with maximal margins. Optimization of the vessel parameters in the feasibility domain is then performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. The von Mises stress criterion is used as the functional constraint, but any other stability constraint admitting mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier
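The two-step approach can be sketched in code. In this minimal illustration, a hypothetical feasibility rule on two design parameters (wall thickness and stiffener spacing) stands in for the SOLIDWORKS® simulation labels, and random sampling stands in for a full constrained optimizer; the parameter ranges and the mass objective are assumptions, not values from the abstract:

```python
# Step 1: train an SVM classifier on simulated feasible/infeasible design
# points. Step 2: optimize an objective over designs the SVM deems feasible.
# The labeling rule below is a hypothetical stand-in for simulation data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Design parameters: wall thickness t (mm) and stiffener spacing s (mm)
t = rng.uniform(2.0, 10.0, 400)
s = rng.uniform(50.0, 300.0, 400)
X = np.column_stack([t, s])

# Hypothetical feasibility rule standing in for a von Mises stress check:
# thicker walls and closer stiffeners resist external pressure better.
y = (t - 0.02 * s > 1.0).astype(int)   # 1 = feasible, 0 = infeasible

# An RBF-kernel SVM yields a curvilinear maximal-margin boundary
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

# Minimize an assumed mass-like objective over the SVM-feasible region
candidates = np.column_stack([rng.uniform(2.0, 10.0, 5000),
                              rng.uniform(50.0, 300.0, 5000)])
feasible = candidates[clf.predict(candidates) == 1]
mass = feasible[:, 0] * (1.0 + 500.0 / feasible[:, 1])
best = feasible[np.argmin(mass)]
print("best (t, s):", best)
```

The design choice mirrors the abstract: the expensive simulation is queried only to build the training set, after which the learned boundary makes feasibility checks during optimization essentially free.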
Procedia PDF Downloads 327
397 Collaborative Environmental Management: A Case Study Research of Stakeholders' Collaboration in the Nigerian Oil-Producing Region
Authors: Favour Makuochukwu Orji, Yingkui Zhao
Abstract:
A myriad of environmental issues face the Nigerian industrial region, resulting from oil and gas production, mining, manufacturing, and domestic wastes. Amidst these, much stakeholder effort has been directed at the Nigerian oil-producing region because of its impact on the wider Nigerian economy. Research to date has suggested that collaborative environmental management could be an effective approach to managing environmental issues, but little attention has been given to the roles and practices of stakeholders in effecting a collaborative environmental management framework for the Nigerian oil-producing region. This paper produces a framework to expand and deepen knowledge of stakeholders' collaborative roles in managing environmental issues in the Nigerian oil-producing region. The knowledge is derived from an analysis of stakeholders' practices, studied through multiple case studies using document analysis. Selected documents of key stakeholders (Nigerian government agencies, multinational oil companies, and host communities) were analyzed. Open and selective coding was employed manually during document analysis of data collected from the offices and websites of the stakeholders. The findings showed that the stakeholders have a range of roles, practices, interests, drivers, and barriers regarding their collaborative roles in managing environmental issues. While they share interests in efficient resource use, compliance with standards, sharing of responsibilities, generation of new solutions, and common objectives, there is evidence of major barriers, including resource allocation, disjointed policy and regulation, ineffective monitoring, diverse socio-economic interests, lack of stakeholder commitment, and limited knowledge sharing.
However, host communities hold deep concerns over the economic interests behind stakeholders' collaborative roles, particularly where government agencies and multinational oil companies are involved. Given these barriers and concerns, genuine stakeholder collaboration is found to be limited, and as a result, optimal environmental management practices and policies have not been successfully implemented in the Nigerian oil-producing region. A framework is produced that describes how the practices that characterize collaborative environmental management might be employed to satisfy stakeholders' interests. Based on the findings, the framework recommends critical factors that may guide collaborative environmental management in oil-producing regions. The recommendations are designed to re-define the practices of stakeholders in managing environmental issues in oil-producing regions, not as something wholly new, but as an approach essential for implementing a sustainable environmental policy. This research outcome may clarify areas for future research as well as contribute to industry guidance in the area of collaborative environmental management.
Keywords: collaborative environmental management framework, case studies, document analysis, multinational oil companies, Nigerian oil producing regions, Nigerian government agencies, stakeholders analysis
Procedia PDF Downloads 174
396 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna
Abstract:
Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for biological control of pests, and it is important to determine the optimal conditions to produce the highest amount of its conidiospores. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized using Artificial Neural Networks (ANNs). To gather data on this process, 30 experiments were carried out, varying the culture time (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75, and 80 percent), with the number of conidiospores per gram of dry mass as the response. The experimental results were used in an iterative algorithm to create 1,110 ANNs with different configurations, ranging from one to three hidden layers, each hidden layer containing 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The ANN with the best performance was chosen to simulate the process and maximize conidiospore production. This ANN has 2 inputs, 1 output, and three hidden layers with 3, 10, and 10 neurons, respectively. Its performance shows an R2 value of 0.9900, and its Root Mean Squared Error is 1.2020. This ANN predicted a maximum of 644,175,467 conidiospores per gram of dry mass, obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum because the R2 value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the maximum number of conidiospores per gram of dry mass.
Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network
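The model-then-optimize workflow above can be sketched as follows. This is a minimal illustration, not the authors' implementation: scikit-learn's MLPRegressor with the L-BFGS solver stands in for Levenberg-Marquardt training, the response surface is a synthetic stand-in for the 30 experimental measurements, and only three hidden-layer configurations (rather than 1,110) are searched:

```python
# Sketch: fit several small ANNs to (culture time, humidity) -> yield data,
# keep the best R^2, then grid-search the fitted model for the optimum.
# The data below are synthetic stand-ins for the paper's 30 experiments.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Inputs: culture time (48-136 h) and humidity (70, 75, 80 %)
hours = rng.uniform(48, 136, 30)
humidity = rng.choice([70.0, 75.0, 80.0], 30)
X = np.column_stack([hours, humidity])
# Hypothetical normalized response surface peaking near 117 h, 77 %
y = np.exp(-((hours - 117) / 40) ** 2 - ((humidity - 77) / 6) ** 2)

scaler = StandardScaler().fit(X)

# Small search over hidden-layer configurations, keeping the best R^2
best_r2, best_net = -np.inf, None
for layers in [(3,), (5, 5), (3, 10, 10)]:
    net = MLPRegressor(hidden_layer_sizes=layers, solver="lbfgs",
                       max_iter=5000, random_state=0)
    net.fit(scaler.transform(X), y)
    r2 = net.score(scaler.transform(X), y)
    if r2 > best_r2:
        best_r2, best_net = r2, net

# Locate the input combination the best network predicts as the maximum
grid_h, grid_w = np.meshgrid(np.linspace(48, 136, 89), [70.0, 75.0, 80.0])
G = np.column_stack([grid_h.ravel(), grid_w.ravel()])
pred = best_net.predict(scaler.transform(G))
opt = G[np.argmax(pred)]
print("predicted optimum (hours, humidity):", opt)
```

The final grid scan is the step that turns the fitted surrogate into a recommendation: once the network interpolates the response surface, evaluating it densely over the input ranges is far cheaper than running additional fermentation experiments.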
Procedia PDF Downloads 159
395 Parametric Evaluation for the Optimization of Gastric Emptying Protocols Used in Health Care Institutions
Authors: Yakubu Adamu
Abstract:
The aim of this research was to assess the factors contributing to the need for optimisation of the gastric emptying protocols used in nuclear medicine and molecular imaging procedures. The objective is to determine whether optimisation is possible and to provide supporting evidence regarding the current imaging protocols for the gastric emptying examination used in nuclear medicine. The research used selected patient studies, each with 30 dynamic series, processed with ImageJ to calculate the clearance half-time and the retention fraction for the 60 × 1-minute, 5-minute, and 10-minute protocols, as well as other sampling intervals. Study IDs were classified by gastric emptying clearance half-time into normal, abnormally fast, and abnormally slow categories. The normal category, representing 50% of the processed gastric emptying image IDs, had clearance half-times within the range of 49.5 to 86.6 minutes of the mean counts. The abnormally fast category, representing 30%, had clearance half-times of 21 to 43.3 minutes, and the abnormally slow category, representing 20%, had clearance half-times within the range of 138.6 to 138.6 minutes. The results indicated that the retention fraction values calculated from the 1-, 5-, and 10-minute sampling curves, and the retention fractions measured from the study IDs' sampling curves, showed a normal retention fraction of <60% that decreased exponentially with time, as evidenced by low retention fraction ratios of <10% after 4 hours. Since the reduced sampling did not change the diagnostic categories, these values could feasibly be used instead of acquiring the full set of images. Findings from the study suggest that the current gastric emptying protocol can be optimized by acquiring fewer images.
The study recommended that gastric emptying studies be performed with imaging at a minimum of 0, 1, 2, and 4 hours after meal ingestion.
Keywords: gastric emptying, retention fraction, clearance half-time, optimisation, protocol
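The half-time and retention-fraction quantities above can be sketched numerically. This is a minimal illustration assuming monoexponential gastric clearance, C(t) = C0·exp(−ln(2)·t/T½); the count values are synthetic, not taken from the study's images:

```python
# Sketch: estimate gastric clearance half-time and 4-hour retention
# fraction from time-activity counts, assuming monoexponential emptying.
# The counts below are synthetic, not data from the study.
import numpy as np

t = np.arange(0, 60, 5, dtype=float)          # sampling times (min)
true_half = 65.0                              # hypothetical T1/2 (min)
counts = 1000.0 * np.exp(-np.log(2) * t / true_half)

# A linear fit of log-counts versus time recovers the decay constant k
k = -np.polyfit(t, np.log(counts), 1)[0]
t_half = np.log(2) / k                        # clearance half-time (min)

# Retention fraction at 4 hours (240 min) after meal ingestion
retention_4h = np.exp(-k * 240.0)
print(f"T1/2 = {t_half:.1f} min, retention at 4 h = {retention_4h:.1%}")
```

With a half-time in the normal range, the predicted retention at 4 hours falls below 10%, consistent with the retention-fraction behaviour reported above; this is also why a sparse 0, 1, 2, and 4-hour schedule can reproduce the same diagnostic categories as dense dynamic imaging.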
Procedia PDF Downloads 5