Search results for: robust optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4405

685 Optimization of Samarium Extraction via Nanofluid-Based Emulsion Liquid Membrane Using Cyanex 272 as Mobile Carrier

Authors: Maliheh Raji, Hossein Abolghasemi, Jaber Safdari, Ali Kargari

Abstract:

Samarium, as a rare-earth element, plays an increasingly important role in high technology. Traditional methods for the extraction of rare-earth metals, such as ion exchange and solvent extraction, have the disadvantages of high investment and high energy consumption. Emulsion liquid membrane (ELM), an improved solvent extraction technique, is an effective transport method for the separation of various compounds from aqueous solutions. In this work, the extraction of samarium from aqueous solutions by ELM was investigated using response surface methodology (RSM). The organic membrane phase of the ELM was a nanofluid consisting of multiwalled carbon nanotubes (MWCNT), Span 80 as surfactant, Cyanex 272 as mobile carrier, and kerosene as base fluid. A 1 M nitric acid solution was used as the internal aqueous phase. The effects of the important process parameters on samarium extraction were investigated, and their values were optimized using the central composite design (CCD) of RSM. These parameters were the concentration of MWCNT in the nanofluid, the carrier concentration, and the volume ratio of organic membrane phase to internal phase (Roi). Three-dimensional (3D) response surfaces of samarium extraction efficiency were obtained to visualize the individual and interactive effects of the process variables. A regression model for % extraction was developed, and its adequacy was evaluated. The results show that % extraction improves when MWCNT nanofluid is used in the organic membrane phase, and an extraction efficiency of 98.92% can be achieved under the optimum conditions. In addition, demulsification was performed successfully, and the recycled membrane phase proved to be effective under the optimum conditions.
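The CCD workflow the abstract describes (coded factorial, axial, and center runs, followed by a quadratic regression for % extraction) has a standard structure that can be sketched in Python; the three coded factors mirror those named above, but the placeholder responses are illustrative assumptions, not the authors' data:

```python
import itertools
import numpy as np

def central_composite_design(k, alpha=None):
    """Coded design points for a k-factor central composite design (CCD)."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25                 # rotatable axial distance
    factorial = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((1, k))
    return np.vstack([factorial, axial, center])

def quadratic_terms(X):
    """Design matrix for y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj)."""
    k = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# Three coded factors: MWCNT concentration, carrier concentration, Roi
X = central_composite_design(3)
A = quadratic_terms(X)
y = np.random.default_rng(0).normal(90, 2, len(X))   # placeholder % extraction
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(len(X), "runs,", len(beta), "model coefficients")
```

For three factors this yields the usual 15 runs (8 factorial, 6 axial, 1 center; replicated center points would normally be added) and a 10-coefficient quadratic model whose stationary point gives the optimum conditions.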

Keywords: Cyanex 272, emulsion liquid membrane, MWCNT nanofluid, response surface methodology, samarium

Procedia PDF Downloads 408
684 Comparison of Growth Medium Efficiency into Stevia (Stevia rebaudiana Bertoni) Shoot Biomass and Stevioside Content in Thin-Layer System, TIS RITA® Bioreactor, and Bubble Column Bioreactor

Authors: Nurhayati Br Tarigan, Rizkita Rachmi Esyanti

Abstract:

Stevia (Stevia rebaudiana Bertoni) has great potential as a natural sweetener because it contains steviol glycosides, which are approximately 100-300 times sweeter than sucrose yet low in calories. Vegetative and generative propagation of S. rebaudiana is inefficient for producing stevia biomass and stevioside. One alternative for stevia propagation is in vitro shoot culture. This research was conducted to optimize the best medium for shoot growth and to compare the bioconversion efficiency and stevioside production of S. rebaudiana shoot cultures cultivated in thin-layer culture (TLC), a temporary immersion system (TIS RITA®) bioreactor, and a bubble column bioreactor. The results showed that 1 ppm kinetin produced healthy shoots and the highest number of leaves compared to BAP. Shoots were then cultivated in TLC, the TIS RITA® bioreactor, and the bubble column bioreactor. Growth medium efficiency was determined by yield and productivity. TLC produced the highest growth medium efficiency of S. rebaudiana: the yield was 0.471 ± 0.117 g biomass per g substrate, and the productivity was 0.599 ± 0.122 g biomass per L medium per day. The TIS RITA® bioreactor produced the lowest yield and productivity, 0.182 ± 0.024 g biomass per g substrate and 0.041 ± 0.0002 g biomass per L medium per day, respectively. The yield of the bubble column bioreactor was 0.354 ± 0.204 g biomass per g substrate, and its productivity was 0.099 ± 0.009 g biomass per L medium per day. Stevioside content, from highest to lowest, was obtained from stevia shoots cultivated in TLC, the TIS RITA® bioreactor, and the bubble column bioreactor: 93.44 μg/g, 42.57 μg/g, and 23.03 μg/g, respectively. All three systems can be used to produce stevia shoot biomass, but each requires optimization of nutrient supply and oxygen intake.

Keywords: bubble column, growth medium efficiency, Stevia rebaudiana, stevioside, TIS RITA®, TLC

Procedia PDF Downloads 255
683 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite their great potential in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, SPICE-based simulators are limited when applied to microwave hybrid circuits in which both lumped and distributed elements are present. Usually, the spatial discretization of the Finite-Difference Time-Domain (FDTD) method is chosen according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than the stability of the Yee algorithm alone, because the FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-wide frequencies.
The models of the resistive source, resistor, capacitor, inductor, and diode are evaluated, among the mathematical models for lumped components in the Lumped-Element Finite-Difference Time-Domain (LE-FDTD) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. The aim is to find an ideal cell size so that the FDTD analysis agrees as closely as possible with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the models are implemented computationally in the Matlab® environment. The Mur absorbing boundary condition is used at the boundaries of the FDTD domain. The model is validated by comparing the results obtained by the FDTD method, in terms of electric field values and the currents in the components, with analytical results using circuit parameters.
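The Courant criterion referred to above ties the maximum stable time step to the Yee cell size, which is why shrinking the cells that discretize a lumped component also shrinks the time step. A minimal sketch for a uniform 3-D grid (the cell sizes swept are hypothetical, not from the paper):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def courant_dt(dx, dy, dz, c=C0, safety=0.99):
    """Maximum stable FDTD time step for a 3-D Yee grid, per the
    Courant criterion: dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    return safety / (c * math.sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))

# Sweep hypothetical Yee cell sizes used to discretize a lumped component
for cell in (1e-3, 0.5e-3, 0.1e-3):  # metres
    dt = courant_dt(cell, cell, cell)
    print(f"cell {cell * 1e3:.1f} mm -> dt {dt * 1e12:.3f} ps")
```

The `safety` factor below 1 is a common practical margin; as the sweep shows, halving the cell size halves the allowed time step, doubling the simulation cost for the same physical duration.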

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 136
682 A Wearable Device to Overcome Post–Stroke Learned Non-Use; The Rehabilitation Gaming System for wearables: Methodology, Design and Usability

Authors: Javier De La Torre Costa, Belen Rubio Ballester, Martina Maier, Paul F. M. J. Verschure

Abstract:

After a stroke, a great number of patients experience persistent motor impairments such as hemiparesis, i.e., weakness in one entire side of the body. As a result, the lack of use of the paretic limb might be one of the main contributors to functional loss after clinical discharge. We aim to reverse this cycle by promoting the use of the paretic limb during activities of daily living (ADLs). To do so, we describe the key components of a system composed of a wearable bracelet (i.e., a smartwatch) and a mobile phone, designed to bring a set of neurorehabilitation principles that promote the acquisition, retention, and generalization of skills into the home of the patient. A fundamental question is whether the loss in motor function derived from learned non-use may emerge as a consequence of decision-making processes for motor optimization. Our system is based on well-established rehabilitation strategies that aim to reverse this behaviour by increasing the reward associated with action execution while implicitly reducing the expected cost associated with the use of the paretic limb, following the notion of reinforcement-induced movement therapy (RIMT). Here we validate an accelerometer-based measure of arm use and its capacity to discriminate between activities that require increasing amounts of arm movement. We also show how the system can act as a personalized assistant by providing specific goals and adjusting them depending on the performance of the patient. The usability and acceptance of the device as a rehabilitation tool are tested using a battery of self-reported and objective measurements obtained from acute/subacute patients and healthy controls. We believe that an extension of these technologies will allow the deployment of unsupervised rehabilitation paradigms during and beyond the hospitalization period.
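Accelerometer-based arm-use measures of the kind validated here are commonly built by thresholding the gravity-subtracted acceleration magnitude per epoch. The sketch below is a generic illustration of that idea; the sampling rate, epoch length, threshold, and synthetic signals are all assumptions, not the authors' validated algorithm:

```python
import numpy as np

def arm_use_fraction(acc, fs=50.0, epoch_s=1.0, thresh=0.05):
    """Crude arm-use metric: fraction of epochs whose mean dynamic
    acceleration magnitude exceeds a threshold (units of g, hypothetical)."""
    mag = np.linalg.norm(acc, axis=1)       # |a| per sample
    dyn = np.abs(mag - 1.0)                 # remove the static 1 g of gravity
    n = int(fs * epoch_s)                   # samples per epoch
    epochs = dyn[: len(dyn) // n * n].reshape(-1, n).mean(axis=1)
    return float((epochs > thresh).mean())

# Synthetic wrist data: 10 s at rest (gravity only), then 10 s of movement
rng = np.random.default_rng(1)
rest = np.column_stack([np.zeros(500), np.zeros(500), np.ones(500)])
moving = rest + rng.normal(0, 0.2, (500, 3))
print(arm_use_fraction(rest), arm_use_fraction(np.vstack([rest, moving])))
```

A measure like this supports the goal-setting loop described in the abstract: the phone can compare each day's arm-use fraction against a personalized target and adjust it with performance.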

Keywords: stroke, wearables, learned non-use, hemiparesis, ADLs

Procedia PDF Downloads 196
681 Rapid Degradation of High-Concentration Methylene Blue in the Combined System of Plasma-Enhanced Photocatalysis Using TiO₂-Carbon

Authors: Teguh Endah Saraswati, Kusumandari Kusumandari, Candra Purnawan, Annisa Dinan Ghaisani, Aufara Mahayum

Abstract:

The present study investigates the degradation of methylene blue (MB) using a TiO₂-carbon (TiO₂-C) photocatalyst combined with dielectric barrier discharge (DBD) plasma. The carbon materials used in the photocatalyst were activated carbon and graphite. The thin layer of TiO₂-C photocatalyst was prepared by a ball milling method and then deposited on a plastic sheet. The TiO₂-C thin layer was characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM) with energy-dispersive X-ray (EDX) spectroscopy, and UV-Vis diffuse reflectance spectrophotometry. The XRD patterns of the TiO₂-G thin layer at weight compositions of 50:1, 50:3, and 50:5 show 2θ peaks around 25° and 27°, the main characteristics of TiO₂ and carbon. SEM analysis shows a spherical and regular morphology of the photocatalyst. UV-Vis diffuse reflectance analysis shows that TiO₂-C has a narrower band gap energy. The DBD plasma reactor was generated using two electrodes of Cu tape, connected to a stainless steel mesh and an Fe wire, separated by a glass dielectric insulator, and supplied with a high voltage of 5 kV at an air flow rate of 1 L/min. The weight composition of the TiO₂-C thin layer was optimized on the basis of the highest reduction of MB concentration achieved, examined by UV-Vis spectrophotometry. Changes in the pH value and color of the MB solution indicated successful MB degradation. Moreover, the degradation efficiency of MB was also studied at higher concentrations of 50, 100, 200, and 300 ppm, treated for 0, 2, 4, 6, 8, and 10 min. The degradation efficiency of MB treated in the combined photocatalysis-DBD plasma system reached more than 99% within 6 min; the greater the concentration of methylene blue dye, the lower the degradation rate achieved.
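The degradation efficiency reported above is simply the fractional drop in dye concentration; such data are also often summarized with a pseudo-first-order rate constant. The numbers below are illustrative, and the first-order kinetics are an assumption for the sketch, not a claim of the study:

```python
import math

def degradation_efficiency(c0, ct):
    """Percent dye removed, from initial and treated concentrations (ppm)."""
    return 100.0 * (c0 - ct) / c0

def rate_constant(c0, ct, t_min):
    """Pseudo-first-order rate constant k (1/min), from ln(C0/Ct) = k*t."""
    return math.log(c0 / ct) / t_min

# e.g. a 50 ppm solution reduced to 0.5 ppm (>99 % removal) within 6 min
print(degradation_efficiency(50, 0.5))          # -> 99.0
print(round(rate_constant(50, 0.5, 6), 3))      # -> 0.768
```

Comparing k across the 50-300 ppm series would quantify the trend noted in the abstract: higher initial dye concentration gives a lower apparent degradation rate.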

Keywords: activated carbon, DBD plasma, graphite, methylene blue, photocatalysis

Procedia PDF Downloads 106
680 Category-Base Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

We present the mathematical basis of a new algorithmic approach that treats a historical source of persistent discrimination in the world, together with its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator filter bank in which the matrix operator analysis filter bank H and the matrix operator sampling filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix operator synthesis filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator error signals E(ω) = F(ω) − Y(ω) between the matrix operator input signals F(ω) and the matrix operator output signals Y(ω) of the filter bank. Feedback is then introduced into this approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the mathematical concept of a category is applied to the above optimum signal approximation, and it is shown that the category-based approximation theory applies to a set-theoretic account of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation displays an apparent advantage in the short term, and why such narrow thinking often becomes bound up with discriminatory action within a human group. Throughout these considerations, we argue that, in order to abolish easy and habitual discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization

Procedia PDF Downloads 141
679 Computational Feasibility Study of a Torsional Wave Transducer for Tissue Stiffness Monitoring

Authors: Rafael Muñoz, Juan Melchor, Alicia Valera, Laura Peralta, Guillermo Rus

Abstract:

A torsional piezoelectric ultrasonic transducer design is proposed to measure shear moduli in soft tissue with direct access availability, using the shear wave elastography technique. The measurement of shear moduli of tissues is a challenging problem, mainly because of a) the difficulty of isolating a pure shear wave, given the interference of multiple waves of different types (P, S, even guided) emitted by the transducers and reflected at geometric boundaries, and b) the highly attenuating nature of soft tissue. An immediate application that overcomes these drawbacks is the measurement of changes in cervix stiffness to estimate the gestational age at delivery. The design has been optimized using a finite element model (FEM) and a semi-analytical estimator of the probability of detection (POD) to determine a suitable geometry, materials, and generated waves. The technique is based on measuring the time of flight between emitter and receiver to infer the shear wave velocity. Current research is centered on prototype testing and validation. The geometric optimization of the transducer was able to annihilate the compressional wave emission, generating a nearly pure torsional shear wave. Mechanical and electromagnetic coupling between emitter and receiver signals is the current research focus. Conclusions: the design overcomes the problems described above. The nearly pure torsional shear wave, together with the short time of flight, avoids the possibility of multiple wave interference. The short propagation distance reduces the effect of attenuation and allows the emission of very low energies, ensuring good biological safety for human use.
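The time-of-flight principle described above reduces to two short formulas: the shear wave speed v = d/t, and the shear modulus G = ρv² for a (locally) elastic, isotropic medium. A sketch with hypothetical numbers, not measurements from the study:

```python
def shear_modulus(distance_m, tof_s, density=1000.0):
    """Shear modulus (Pa) from emitter-receiver distance and time of flight:
    v = d / t, then G = rho * v**2 (density in kg/m^3, ~1000 for soft tissue)."""
    v = distance_m / tof_s
    return density * v ** 2

# Hypothetical values: 10 mm emitter-receiver gap, 2 ms time of flight
# gives v = 5 m/s, in the range typical of soft tissue
print(shear_modulus(0.010, 0.002))  # -> 25000.0 Pa, i.e. 25 kPa
```

This also makes the sensitivity explicit: because G scales with v², a small timing error in the short time of flight translates into roughly twice that relative error in the inferred modulus.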

Keywords: cervix ripening, preterm birth, shear modulus, shear wave elastography, soft tissue, torsional wave

Procedia PDF Downloads 335
678 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme

Authors: Eleanor Nel

Abstract:

Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often outpaces the development of robust, widely accepted tools for additionally measuring research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review had been conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publication does not happen immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period.

To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from one faculty as a convenience sample. The purpose of the research was to collect information for developing a comprehensive impact evaluation framework that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve its impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results of publication writing programmes more effectively to institutional strategic objectives to improve research performance and quality, as well as on what should be included in a comprehensive evaluation framework.

Keywords: evaluation, framework, impact, research output

Procedia PDF Downloads 62
677 Project Production Control (PPC) Implementation for an Offshore Facilities Construction Project

Authors: Muhammad Hakim Bin Mat Tasir, Erwan Shahfizad Hasidan, Hamidah Makmor Bakry, M. Hafiz B. Izhar

Abstract:

Every key performance indicator used to monitor a project's construction progress emphasizes trade productivity or specific commodity run-down curves. Examples include welding productivity measured as the number of joints completed per day, the quantity of NDT (non-destructive testing) inspections per day, etc. This perspective is based on progress and productivity; however, it does not enable a system-level view of how we produce. This paper adopts a project production system perspective, in which projects are a collection of production systems comprising the interconnected network of processes and operations that represent all the work activities required to execute a project from start to finish. It also uses the five levels of production system optimization as a framework. The goal of the paper is to describe the application of Project Production Control (PPC) to control and improve the performance of several production processes associated with the fabrication and assembly of a Central Processing Platform (CPP) jacket, part of an offshore mega-project; more specifically, the fabrication and assembly of the buoyancy tanks, which were identified as part of the critical path and placed the highest demand on capacity. In total, seven buoyancy tanks were built, with a total estimated weight of 2,200 metric tons. These huge buoyancy tanks were designed to enable reverse launching and self-upending of the jacket, to be easily retractable, and to be reusable for the next project, ensuring sustainability. Results showed that an effective application of PPC not only positively impacted construction progress and productivity but also exposed sources of detrimental variability as targets for continuous improvement practices. This approach augmented conventional project management practices, and the results had a high impact on construction scheduling, planning, and control.

Keywords: offshore, construction, project management, sustainability

Procedia PDF Downloads 42
676 Optimization for Guide RNA and CRISPR/Cas9 System Nanoparticle Mediated Delivery into Plant Cell for Genome Editing

Authors: Andrey V. Khromov, Antonida V. Makhotenko, Ekaterina A. Snigir, Svetlana S. Makarova, Natalia O. Kalinina, Valentin V. Makarov, Mikhail E. Taliansky

Abstract:

Due to its simplicity, CRISPR/Cas9 has become widely used and is capable of inducing mutations in the genes of organisms across various kingdoms. The aim of this work was to develop applications for the efficient modification of the DNA coding sequences of the phytoene desaturase (PDS), coilin, and vacuolar invertase genes of Solanum tuberosum, and to develop a new, efficient nanoparticle-carrier technology to deliver the CRISPR/Cas9 system for plant genome editing. For each of the genes (coilin, PDS, and vacuolar invertase), five single guide RNAs (sgRNAs) were synthesized. To determine the most suitable nanoplatform, two types of NP platforms were tested: magnetic NPs (MNPs) and gold NPs (AuNPs). To test penetration efficiency, they were functionalized with fluorescent agents (BSA*FITC and GFP) as well as with Cy3-labeled small RNA, and efficiency was measured by fluorescence and confocal microscopy. AuNPs proved to be the better option, both for proteins and for RNA. The next step was to verify the delivery of the components of the CRISPR/Cas9 system into plant cells for editing the target genes. AuNPs were functionalized with a ribonucleoprotein complex consisting of Cas9 and the sgRNAs corresponding to the target genes, and delivered biolistically to axillary buds and apical meristems of potato plants. After treatment with the best NP carrier, potato meristems were grown into adult plants. DNA isolated from these plants was subjected to preliminary fragment analysis to screen out the non-transformed samples, and then to NGS. The present work was carried out with financial support from the Russian Science Foundation (grant No. 16-16-04019).
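Selecting sgRNA protospacers of the kind synthesized here typically means scanning the target coding sequence for 20-nt stretches immediately 5' of an NGG PAM (for SpCas9). A minimal sketch on a made-up sequence; this is a generic illustration, not the authors' design pipeline:

```python
import re

def find_sgrna_sites(seq, guide_len=20):
    """Candidate SpCas9 guide sequences: the guide_len nucleotides
    immediately 5' of an NGG PAM on the given strand."""
    seq = seq.upper()
    sites = []
    # Lookahead so overlapping PAMs (e.g. AGGG) are all found
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        start = m.start() - guide_len
        if start >= 0:
            sites.append(seq[start:m.start()])
    return sites

demo = "ATGCATGCATGCATGCATGCATGCCGGTTT"   # invented sequence with one CGG PAM
print(find_sgrna_sites(demo))            # -> ['ATGCATGCATGCATGCATGC']
```

A real pipeline would also scan the reverse complement, filter on GC content and off-target hits, and rank the candidates; picking five guides per gene, as above, hedges against variable per-site editing efficiency.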

Keywords: biobombardment, coilin, CRISPR/Cas9, nanoparticles, NPs, PDS, sgRNA, vacuolar invertase

Procedia PDF Downloads 296
675 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis

Authors: Kevin Potoczny, Katsuichiro Goda

Abstract:

The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard uses 1 arc second topographical data. A qualitative method known as Hazus is used to estimate susceptibility by checking various criteria for a location and determining a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as the St. Jude site; in addition, the data do not allow failure planes to be analyzed accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows identification of local sites that may previously have been classified as low susceptibility, which in turn provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which is dangerous because Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, the slope differences are significant: the 1 arc second data show a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data show a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must transition smoothly into a more robust local analysis. It is expected that slopes within the region previously assessed with low susceptibility scores contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis, which has not been possible before. Due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous; high-resolution regional analysis allows a more precise determination of hazard sites.
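The resolution effect described above can be reproduced on a synthetic DEM: the same short, steep bank sampled at roughly 30 m (about 1 arc second) versus 1 m spacing yields very different maximum slopes. The terrain below is invented purely for illustration:

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Slope (degrees) of a DEM grid via central-difference gradients."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# The same synthetic bank sampled at ~30 m (1 arc second) and 1 m resolution:
# flat terrain, then a 20 m rise at 45 degrees between x = 50 m and x = 70 m
def ramp(x):
    return np.clip(x - 50.0, 0.0, 20.0)

x30 = np.arange(0, 120, 30.0)
x1 = np.arange(0, 120, 1.0)
dem30 = np.tile(ramp(x30), (4, 1))   # coarse DEM (4 identical rows)
dem1 = np.tile(ramp(x1), (4, 1))     # fine DEM

print(slope_degrees(dem30, 30).max())  # coarse grid smears the bank
print(slope_degrees(dem1, 1).max())    # fine grid recovers ~45 degrees
```

On the fine grid the 45-degree bank is recovered almost exactly, while the coarse grid reports under 20 degrees, the same kind of underestimation reported for the St. Jude site.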

Keywords: Hazus, high-resolution DEM, Leda clay, regional analysis, susceptibility

Procedia PDF Downloads 56
674 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of machine learning approaches were evaluated. Overall, the random forest classifier achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than features from any reduced set of sensors, and that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze.

We have shown that we can accurately predict the level of engagement of students with learning disabilities in real time, in a way that is not subject to inter-rater reliability, does not require human observation, and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each student's individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
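The leave-one-out evaluation of a random forest described above can be sketched with scikit-learn; the synthetic 40 x 9 feature matrix below merely stands in for the eye-gaze/EEG/pose/interaction features and is not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in: 40 session windows x 9 extracted features
rng = np.random.default_rng(7)
X = rng.normal(size=(40, 9))
# Hypothetical engaged/disengaged label driven by two of the features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Leave-one-out: train on n-1 samples, test on the held-out one, n times
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")
```

Leave-one-out is a sensible choice for a study of this size: with only four participants and 59 sessions, it uses nearly all data for training in each fold while still testing every sample exactly once.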

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 76
673 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal effectively with gene expression data. Existing algorithms such as the Support Vector Machine (SVM), the k-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and their performance is evaluated to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
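As a concrete instance of the clustering step discussed above, here is a plain k-means pass over a toy expression matrix (genes as rows, conditions as columns). The data are synthetic, and k-means is only one of the algorithms the paper surveys:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means: returns (centroids, labels) for matrix X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each row to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep old one if a cluster empties out
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Toy "expression matrix": two groups of 20 genes with distinct profiles
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 5)), rng.normal(3, 0.3, (20, 5))])
centroids, labels = kmeans(X, k=2)
print(np.bincount(labels))   # cluster sizes
```

Real expression matrices are far larger and noisier, which is exactly where the accuracy, convergence behavior, and processing time criteria named in the abstract start to separate the candidate algorithms.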

Keywords: microarray technology, gene expression data, clustering, gene Selection

Procedia PDF Downloads 308
672 Localized Detection of ᴅ-Serine by Using an Enzymatic Amperometric Biosensor and Scanning Electrochemical Microscopy

Authors: David Polcari, Samuel C. Perry, Loredano Pollegioni, Matthias Geissler, Janine Mauzeroll

Abstract:

ᴅ-serine acts as an endogenous co-agonist for N-methyl-ᴅ-aspartate receptors in neuronal synapses. This makes it a key component in the development and function of a healthy brain, especially given its role in several neurodegenerative diseases such as Alzheimer’s disease and dementia. Despite such clear research motivations, the primary site and mechanism of ᴅ-serine release are still unclear. For this reason, we are developing a biosensor for the detection of ᴅ-serine utilizing a microelectrode in combination with a ᴅ-amino acid oxidase enzyme, which produces stoichiometric quantities of hydrogen peroxide in response to ᴅ-serine. To fabricate a biosensor with good selectivity, we use a permselective poly(meta-phenylenediamine) film to ensure that only the target molecule reacts, according to the size-exclusion principle. In this work, we investigated the effect of the electrodeposition conditions on the biosensor’s response time and selectivity. Careful optimization of the fabrication process enhanced the biosensor's response time, allowing real-time sensing of ᴅ-serine in bulk solution and also providing a means to map the efflux of ᴅ-serine in real time. This was done using scanning electrochemical microscopy (SECM) with the optimized biosensor to measure the localized release of ᴅ-serine from an agar-filled glass capillary sealed in an epoxy puck, which acted as a model system. The SECM area scan simultaneously provided information regarding the rate of ᴅ-serine flux from the model substrate, as well as the size of the substrate itself. This SECM methodology, which provides high spatial and temporal resolution, could be useful for investigating the primary site and mechanism of ᴅ-serine release in other biological samples.

Keywords: ᴅ-serine, enzymatic biosensor, microelectrode, scanning electrochemical microscopy

Procedia PDF Downloads 215
671 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by the financial sector, with an estimated 708.5 billion global non-cash transactions. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos and centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. Therefore, there is a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure decentralised marketplace for 1.) data providers to list their transactional data, 2.) data consumers to find and access that data, and 3.) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transactional-related data from merchants, enriching the data product available to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers.
This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform’s key features is enabling individuals to participate in and manage the personal data that is generated about them. A proof-of-concept of this framework was developed on an Ethereum blockchain base, where an individual can securely manage access to their own personal data and to that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 61
670 Isosorbide Bis-Methyl Carbonate: Opportunities for an Industrial Model Based on Biomass

Authors: Olga Gomez De Miranda, Jose R. Ochoa-Gomez, Stefaan De Wildeman, Luciano Monsegue, Soraya Prieto, Leire Lorenzo, Cristina Dineiro

Abstract:

The chemical industry is facing a new revolution. While processes based on the exploitation of fossil resources emerged with force in the nineteenth century, society currently demands a radical change that will lead to the complete and irreversible implementation of a circular, sustainable economic model. The implementation of biorefineries will be essential for this. There, renewable raw materials such as sugars and other biomass resources are exploited for the development of new materials that will partially replace their petroleum-derived homologs in a safer and environmentally more benign approach. Isosorbide (1,4:3,6-dianhydro-d-glucidol) is a primary bio-based derivative obtained from plant (poly)saccharides and a very interesting example of a useful chemical produced in biorefineries. It can, in turn, be converted into secondary monomers such as isosorbide bis-methyl carbonate (IBMC), whose main field of application is as a key biodegradable intermediate: a substitute for bisphenol-A in the manufacture of polycarbonates, or an alternative to toxic isocyanates in the synthesis of new (non-isocyanate) polyurethanes, both with a huge application market. The new products will present advantageous mechanical or optical properties, as well as improved non-toxicity and biodegradability, in comparison with their petro-derived alternatives. A robust production process for IBMC, a biomass-derived chemical, is presented here. It can operate with different raw-material qualities using dimethyl carbonate (DMC) as both co-reactant and solvent. It consists of the transesterification of isosorbide with DMC under mild operating conditions, using different basic catalysts that remain active across the range of isosorbide characteristics and purities. Appropriate isolation processes have also been developed to obtain crude IBMC yields higher than 90%, with oligomer production lower than 10%, independently of the quality of the isosorbide considered.
All of these products are suitable for use in polycondensation reactions to obtain polymers. If higher IBMC qualities are needed, a purification treatment based on nanofiltration membranes has also been developed. The IBMC reaction-isolation conditions established in the laboratory have been successfully modeled using appropriate software and transferred to pilot scale (production of 100 kg of IBMC). It has been demonstrated that the result is a highly efficient IBMC production process that can be scaled up under suitable market conditions. The operating conditions involved in the production of IBMC entail mild temperatures and low energy needs, no additional solvents, and high operational efficiency, all in accordance with green manufacturing rules.

Keywords: biomass, catalyst, isosorbide bis-methyl carbonate, polycarbonate, polyurethane, transesterification

Procedia PDF Downloads 117
669 Design and Development of an 'Optimisation Controller' and a SCADA Based Monitoring System for Renewable Energy Management in Telecom Towers

Authors: M. Sundaram, H. R. Sanath Kumar, A. Ramprakash

Abstract:

Energy saving is a key sustainability focus area for the Indian telecom industry today. This is especially true in rural India, where energy consumption contributes 70% of the total network operating cost. In urban areas, the energy cost for network operation ranges between 15-30%. This expenditure on energy, a result of the lack of grid power availability, highlights a potential barrier to telecom industry growth. As a consequence, telecom tower companies switch to diesel generators, making them the second largest consumer of diesel in India, consuming over 2.5 billion litres per annum. The growing cost of energy due to increasing diesel prices, and concerns over rising greenhouse emissions, have caused these companies to look at other, renewable energy options. The TRAI (Telecom Regulatory Authority of India) has issued a number of guidelines to implement Renewable Energy Technologies (RETs) in telecom towers as part of its ‘Implementation of Green Technologies in Telecom Sector’ initiative. Our proposal suggests the implementation of a Programmable Logic Controller (PLC) based ‘optimisation controller’ that can not only efficiently utilize the energy from RETs but also help to conserve the power used in the telecom towers. When there are multiple RETs available to supply energy, this controller will pick the optimum amount of energy from each RET based on availability and feasibility at that point in time, reducing the dependence on diesel generators. For effective maintenance of the towers, we are planning to implement a SCADA-based monitoring system along with the ‘optimisation controller’.
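The core dispatch logic the abstract describes, drawing the optimum amount from each available RET before falling back on diesel, can be sketched as a cost-ordered greedy allocation. Source names, availabilities, and per-kWh costs below are hypothetical, and a real PLC would of course run ladder logic or structured text rather than Python.

```python
def dispatch(load_kw, sources):
    """Meet the tower load from the cheapest available renewable sources
    first; fall back to the diesel generator only for the remainder.
    sources: list of (name, available_kw, cost_per_kwh)."""
    plan, remaining = [], load_kw
    for name, avail, cost in sorted(sources, key=lambda s: s[2]):
        take = min(avail, remaining)
        if take > 0:
            plan.append((name, take))
            remaining -= take
    if remaining > 0:
        plan.append(("diesel", remaining))  # last-resort backup
    return plan

# Example: a 5 kW tower load, with solar and wind only partially available.
plan = dispatch(5.0, [("solar", 3.0, 0.04), ("wind", 1.5, 0.05)])
print(plan)  # → [('solar', 3.0), ('wind', 1.5), ('diesel', 0.5)]
```

The same loop generalizes to any number of RETs; the SCADA layer would supply the live `available_kw` figures at each control interval.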

Keywords: operation costs, consumption of fuel and carbon footprint, implementation of a programmable logic controller (PLC) based ‘optimisation controller’, efficient SCADA based monitoring system

Procedia PDF Downloads 406
668 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds have long been esteemed as stand-alone entities in the U.S. However, the prevalence of fund families’ affiliation with financial conglomerates is eroding this striking feature. Mutual fund families' affiliation with financial conglomerates can potentially be an important source of superior performance, or of cost, to the affiliated mutual fund investors. On the one hand, financial conglomerate affiliation offers mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflicts of interest are bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information advantage possessed by the investment banks, or whether it costs affiliated mutual fund shareholders due to the conflict of interest. Robust to alternative risk adjustments and cross-section regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that the investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are not more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds attract superior stock-picking talent.
Overall, the paper's findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as private material information sharing, that benefit mutual fund investors due to affiliation with a financial conglomerate. However, this research also has a normative dimension: such incestuous behavior, amounting to insider trading and the exploitation of superior information, not only negatively affects unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 170
667 Optimal Design of Tuned Inerter Damper-Based System for the Control of Wind-Induced Vibration in Tall Buildings through Cultural Algorithm

Authors: Luis Lara-Valencia, Mateo Ramirez-Acevedo, Daniel Caicedo, Jose Brito, Yosef Farbiarz

Abstract:

Controlling wind-induced vibrations, as well as aerodynamic forces, is an essential part of the structural design of tall buildings in order to guarantee the serviceability limit state of the structure. This paper presents a numerical investigation of the optimal design parameters of a Tuned Inerter Damper (TID) based system for the control of wind-induced vibration in tall buildings. The control system is based on the conventional TID, with the main difference that its location is changed from the ground level to the last two story levels of the structural system. The TID tuning procedure is based on an evolutionary cultural algorithm in which the optimum design variables, defined as the frequency and damping ratios, were searched according to the optimization criterion of minimizing the root mean square (RMS) response of displacements at the nth story of the structure. A Monte Carlo simulation was used to represent the dynamic action of the wind in the time domain, in which a time series derived from the Davenport spectrum using eleven harmonic functions with randomly chosen phase angles was reproduced. The above-mentioned methodology was applied to a case study derived from a 37-story prestressed concrete building of 144 m height, in which the wind action overcomes the seismic action. The results showed that the optimally tuned TID is effective in reducing the RMS response of displacements by up to 25%, which demonstrates the feasibility of the system for the control of wind-induced vibrations in tall buildings.
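The wind-load generation described above, a time series synthesized as a superposition of eleven harmonics with random phase angles, can be sketched as follows. The frequency grid and the decaying amplitudes here are simplified stand-ins for values that would be derived from the Davenport spectrum; all numbers are illustrative.

```python
import math
import random

def wind_series(n_steps, dt, freqs, amps, seed=1):
    """Synthesize a fluctuating wind record as a sum of harmonics with
    randomly chosen phase angles (Monte Carlo spectral representation)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    series = []
    for i in range(n_steps):
        t = i * dt
        # Superpose the eleven harmonic components at time t.
        v = sum(a * math.cos(2.0 * math.pi * f * t + p)
                for f, a, p in zip(freqs, amps, phases))
        series.append(v)
    return series

freqs = [0.05 * (k + 1) for k in range(11)]  # Hz, eleven harmonics
amps = [1.0 / (k + 1) for k in range(11)]    # decaying amplitudes (illustrative)
u = wind_series(n_steps=600, dt=0.5, freqs=freqs, amps=amps)

# RMS of the fluctuating record -- the response measure minimized when
# tuning the TID's frequency and damping ratios.
rms = math.sqrt(sum(x * x for x in u) / len(u))
print(round(rms, 3))
```

In the paper's procedure, a record like `u` would drive a nonlinear time-history analysis of the 37-story model, and the cultural algorithm would search the TID parameters minimizing the resulting RMS displacement.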

Keywords: evolutionary cultural algorithm, Monte Carlo simulation, tuned inerter damper, wind-induced vibrations

Procedia PDF Downloads 123
666 Online Allocation and Routing for Blood Delivery in Conditions of Variable and Insufficient Supply: A Case Study in Thailand

Authors: Pornpimol Chaiwuttisak, Honora Smith, Yue Wu

Abstract:

Blood is a perishable product which suffers physical deterioration and has a specific fixed shelf life. Although its value during the shelf life is constant, fresh blood is preferred for treatment. However, transportation costs are a major factor to be considered by administrators of Regional Blood Centres (RBCs), which act as blood collection and distribution centres. A trade-off must therefore be reached between transportation costs and short-term holding costs. In this paper, we propose a number of algorithms for online allocation and routing of blood supplies, for use in conditions of variable and insufficient blood supply. A case study in northern Thailand provides an application of the allocation and routing policies tested. The plan proposed for daily allocation and distribution of blood supplies consists of two components. Firstly, fixed routes are determined for the supply of hospitals which are far from an RBC. Over the planning period of one week, each hospital on the fixed routes is visited once. A robust allocation of blood is made to hospitals on the fixed routes that can be guaranteed on a suitably high percentage of days, despite variable supplies. Secondly, a variable daily route is employed for close-by hospitals, for which more than one visit per week may be needed to fulfil targets. The variable routing takes into account the amount of blood available for each day’s deliveries, which is only known on the morning of delivery. For hospitals on the variable routes, the days and amounts of deliveries cannot be guaranteed but are designed to attain targets over the six-day planning horizon. In the conditions of blood shortage encountered in Thailand, and commonly in other developing countries, it is often the case that hospitals request more blood than is needed, in the knowledge that only a proportion of all requests will be met.
Our proposal is for blood supplies to be allocated and distributed to each hospital according to equitable targets based on historical demand data, calculated with regard to expected daily blood supplies. We suggest several policies that could be chosen by the decision makers for the daily distribution of blood. The different policies provide different trade-offs between transportation and holding costs. Variations in the costs of transportation, such as the price of petrol, could make different policies the most beneficial at different times. We present an application of the policies to a realistic case study at the RBC in Chiang Mai province, located in the northern region of Thailand. The analysis includes a total of more than 110 hospitals, with 29 hospitals considered on the variable route. The study is expected to be a pilot for other regions of Thailand. Computational experiments are presented. Concluding remarks include the benefits gained by the online methods and future recommendations.
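One simple reading of the equitable-target idea above is proportional rationing: when the day's supply falls short, each hospital receives blood in proportion to its historical demand. The sketch below illustrates that interpretation only; hospital names and figures are hypothetical, and the paper's actual policies weigh transportation and holding costs as well.

```python
def allocate(supply, demand_history):
    """Proportional rationing of a scarce daily supply.
    demand_history: {hospital: mean daily demand, in whole units}."""
    total = sum(demand_history.values())
    if supply >= total:
        return dict(demand_history)  # every target can be met in full
    # Proportional shares, rounded down to whole units of blood.
    alloc = {h: int(supply * d / total) for h, d in demand_history.items()}
    # Hand out the units lost to rounding, largest demand first.
    left = supply - sum(alloc.values())
    for h in sorted(demand_history, key=demand_history.get, reverse=True):
        if left == 0:
            break
        alloc[h] += 1
        left -= 1
    return alloc

hist = {"A": 40, "B": 25, "C": 15}  # mean daily demand (units), hypothetical
plan = allocate(60, hist)           # only 60 units available this morning
print(plan)  # → {'A': 31, 'B': 18, 'C': 11}
```

Because the available supply is only known on the morning of delivery, such a rule can be recomputed online each day, which is the setting the paper's algorithms address.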

Keywords: online algorithm, blood distribution, developing country, insufficient blood supply

Procedia PDF Downloads 320
665 IOT Based Automated Production and Control System for Clean Water Filtration Through Solar Energy Operated by Submersible Water Pump

Authors: Musse Mohamud Ahmed, Tina Linda Achilles, Mohammad Kamrul Hasan

Abstract:

Deterioration of mother nature is evident these days, with the clear danger of human catastrophe emanating from greenhouse gases (GHG) as CO2 emissions to the environment increase. PV technology can help to reduce dependency on fossil fuel, decreasing air pollution and slowing down the rate of global warming. The objective of this paper is to propose, develop and design the production of a clean water supply for rural communities using an appropriate technology, such as the Internet of Things (IOT), that does not create any CO2 emissions. Additionally, maximizing solar energy power output, and reciprocally minimizing the effects of the natural intermittency of solar sources when less sunlight is available, is another goal of this work. The paper presents the development of a critical automated control system for solar energy power output optimization using several new techniques. A water pumping system is developed to supply clean water through the application of IOT and renewable energy. This system is effective in providing clean water to remote and off-grid areas using photovoltaic (PV) technology, which collects energy generated from sunlight. The focus of this work is to design and develop a submersible solar water pumping system that applies an IOT implementation. Thus, this system has been executed and programmed using the Arduino Software (IDE), Proteus, MATLAB and the C++ programming language. The mechanism of this system is that it pumps water from a water reservoir powered by solar energy, and clean water production is incorporated using a filtration system within the submersible solar water pumping system. The filtering system is an additional application platform intended to provide a clean water supply to households in Sarawak State, Malaysia.

Keywords: IOT, automated production and control system, water filtration, automated submersible water pump, solar energy

Procedia PDF Downloads 71
664 A Methodology for Seismic Performance Enhancement of RC Structures Equipped with Friction Energy Dissipation Devices

Authors: Neda Nabid

Abstract:

Friction-based supplemental devices have been extensively used for the seismic protection and strengthening of structures; however, the conventional use of these dampers may not necessarily lead to an efficient structural performance. Conventionally designed friction dampers follow a uniform height-wise distribution pattern of slip-load values for practical simplicity. This can localize structural damage in certain story levels, while the other stories accommodate a negligible amount of relative displacement demand. A practical performance-based optimization methodology is developed to tackle the localization of structural damage in RC frame buildings with friction energy dissipation devices under severe earthquakes. The proposed methodology is based on the concept of uniform damage distribution. According to this theory, the slip-load values of the friction dampers are redistributed, shifting from stories with lower relative displacement demand to stories with higher inter-story drifts, to narrow the discrepancy between the structural damage levels in different stories. In this study, the efficacy of the proposed design methodology is evaluated through the seismic performance of five different low- to high-rise RC frames equipped with friction wall dampers under six real spectrum-compatible design earthquakes. The results indicate that, compared to the conventional design, using the suggested methodology to design friction wall systems can lead to, on average, up to a 40% reduction of the maximum inter-story drift, and to a considerably more uniform height-wise distribution of relative displacement demands under the design earthquakes.
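The redistribution idea above can be sketched as a single update step: scale each story's slip load by its drift relative to the mean drift, while keeping the total slip-load capacity constant. This is an illustrative simplification; in the actual methodology the drifts would come from nonlinear dynamic analyses and the step would be iterated, and all numbers below (including the step size `alpha`) are hypothetical.

```python
def redistribute(slip, drifts, alpha=0.5):
    """One redistribution step of the uniform-damage-distribution idea:
    stories with above-average drift gain slip-load capacity, stories
    with below-average drift lose it; total capacity is preserved."""
    mean = sum(drifts) / len(drifts)
    new = [s * (1.0 + alpha * (d - mean) / mean) for s, d in zip(slip, drifts)]
    scale = sum(slip) / sum(new)  # renormalize to the original total
    return [s * scale for s in new]

slip = [100.0, 100.0, 100.0, 100.0]  # uniform initial slip loads (kN)
drifts = [1.8, 1.2, 0.8, 0.6]        # inter-story drifts (%), soft first story
slip = redistribute(slip, drifts)
print([round(s, 1) for s in slip])   # more capacity moves to the drifting story
```

Repeating the analyze-redistribute cycle until the drift profile flattens is what narrows the damage discrepancy between stories in the abstract's terms.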

Keywords: friction damper, nonlinear dynamic analysis, RC structures, seismic performance, structural damage

Procedia PDF Downloads 213
663 Hybrid Energy System for the German Mining Industry: An Optimized Model

Authors: Kateryna Zharan, Jan C. Bongaerts

Abstract:

In recent years, the economic attractiveness of renewable energy (RE) for the mining industry, especially for off-grid mines, and the negative environmental impact of fossil energy have been stimulating the use of RE for mining needs. Since remote-area mines have higher energy expenses than mines connected to a grid, the integration of RE may give a mine economic benefits. As the literature review shows, there is a lack of business models for the adoption of RE at mines. The main aim of this paper is to develop an optimized model of RE integration into the German mining industry (GMI). With around 800 million tonnes of resources extracted annually, Germany is included in the list of the 15 major mining countries in the world. Accordingly, the mining potential of Germany is evaluated in this paper as a prospective market for RE implementation. The GMI has been classified in order to find out the location of resources, the quantity and types of mines, the amount of extracted resources, and the access of the mines to energy resources. Additionally, weather conditions have been analyzed in order to determine where wind and solar generation technologies can be integrated into a mine with the highest efficiency. Although the electricity demand of the GMI is almost completely covered by grid connection, a hybrid energy system (HES) based on a mix of RE and fossil energy is developed in order to show environmental and economic benefits. The HES for the GMI combines wind turbine, solar PV, battery and diesel generation. The model has been calculated using the HOMER software. Furthermore, the demonstrated HES contains a forecasting model that predicts solar and wind generation in advance. The main result from the HES, the CO2 emission reduction, is estimated in order to make mining processing more environmentally friendly.

Keywords: diesel generation, German mining industry, hybrid energy system, hybrid optimization model for electric renewables, optimized model, renewable energy

Procedia PDF Downloads 329
662 Analysing the Interactive Effects of Factors Influencing Sand Production on Drawdown Time in High Viscosity Reservoirs

Authors: Gerald Gwamba, Bo Zhou, Yajun Song, Dong Changyin

Abstract:

The challenges that sand production presents to the oil and gas industry, particularly when working in poorly consolidated reservoirs, cannot be overstated. From restricting production to blocking production tubing, sand production increases the costs associated with production, as it elevates the cost of servicing production equipment over time. Production in reservoirs that present high viscosities, flow rates, cementation, clay content and fine sand content is even more complex and challenging. As opposed to one-factor-at-a-time testing, investigating the interactive effects arising from a combination of several factors offers increased reliability of results as well as better representation of actual field conditions. It is thus paramount to investigate the conditions leading to the onset of sanding during production to ensure the future sustainability of hydrocarbon production operations under viscous conditions. We adopt the Design of Experiments (DOE) approach to analyse, using Taguchi factorial designs, the most significant interactive effects on sanding. We propose an optimized regression model to predict the drawdown time at sand production. The results obtained underscore that reservoirs characterized by varying (high and low) levels of viscosity, flow rate, cementation, clay, and fine sand content have a resulting impact on sand production. The only significant interactive effect recorded arises from the interaction BD (fine sand content and flow rate), while the main effects included fluid viscosity and cementation, with percentage significances of 31.3%, 37.76%, and 30.94%, respectively. The drawdown time model presented could be useful for predicting the time to reach the maximum drawdown pressure under viscous conditions at the onset of sand production.
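To make the main-effect/interaction terminology above concrete, the following sketch extracts effects from a small two-level factorial design. The factor labels B and D follow the abstract (fine sand content and flow rate), but the design size and all response values are hypothetical, not the paper's data.

```python
def main_effect(levels, response):
    """Effect of a coded (+1/-1) factor column: mean response at the
    high level minus mean response at the low level."""
    hi = [r for l, r in zip(levels, response) if l == +1]
    lo = [r for l, r in zip(levels, response) if l == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# A 2^2 design in factors B (fine sand content) and D (flow rate);
# response y = drawdown time at sand production (illustrative units).
B = [-1, +1, -1, +1]
D = [-1, -1, +1, +1]
BD = [b * d for b, d in zip(B, D)]  # interaction column = product of B and D
y = [12.0, 9.0, 10.0, 4.0]

print(main_effect(B, y))   # → -4.5  (effect of fine sand content)
print(main_effect(D, y))   # → -3.5  (effect of flow rate)
print(main_effect(BD, y))  # → -1.5  (B×D interaction effect)
```

A Taguchi analysis would then rank these effects (e.g. via percentage contributions, as the abstract reports) and keep the significant terms in the drawdown-time regression model.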

Keywords: factorial designs, DOE optimization, sand production prediction, drawdown time, regression model

Procedia PDF Downloads 136
661 Simulation of Bird Strike on Airplane Wings by Using SPH Methodology

Authors: Tuğçe Kiper Elibol, İbrahim Uslan, Mehmet Ali Guler, Murat Buyuk, Uğur Yolum

Abstract:

According to an FAA report, 142,603 bird strikes were reported over a period of 24 years, between 1990 and 2013. Bird strikes on aerospace structures not only threaten flight security but also cause financial loss and put lives in danger. The statistics show that most bird strikes happen on the nose and the leading edge of the wings. In addition, a substantial number of bird strikes are absorbed by the jet engines, causing damage to the blades and engine body. Crash-proof designs are required to overcome the possibility of catastrophic failure of the airplane. Using computational methods for bird strike analysis during the product development phase has considerable importance in terms of cost saving. Clearly, using simulation techniques to reduce the number of reference tests can dramatically affect the total cost of an aircraft, since for bird strike, full-scale tests are often required. Therefore, the development of validated numerical models that can replace preliminary tests and accelerate the design cycle is required. In this study, to verify the simulation parameters for a bird strike analysis, several different numerical options are studied for an impact case against a primitive structure. Then, a representative bird model is generated with the verified parameters and collided against the leading edge of a training aircraft wing, where each structural member of the wing is explicitly modeled. A nonlinear explicit dynamics finite element code, LS-DYNA, was used for the bird impact simulations. SPH methodology was used to model the behavior of the bird. The dynamic behavior of the wing superstructure was observed and will be used for further design optimization purposes.

Keywords: bird impact, bird strike, finite element modeling, smoothed particle hydrodynamics

Procedia PDF Downloads 308
660 Metabolic Manipulation as a Strategy for Optimization of Biomass Productivity and Oil Content in the Microalgae Desmodesmus Sp.

Authors: Ivan A. Sandoval Salazar, Silvia F. Valderrama

Abstract:

Microalgal oil is emerging as a promising source of raw material for many industrial applications. Thus, this study focused on the cultivation of the microalga Desmodesmus sp. at laboratory scale with a view to maximizing biomass production and the triglyceride content of the lipid fraction. Initially, culture conditions were selected to optimize biomass production; the culture was subsequently subjected to nutritional stress by varying nitrate and phosphate concentrations in order to increase the content and productivity of fatty acids. The culture medium BOLD 3N, nitrate and phosphate concentrations, light intensities of 250, 500 and 1000 μmol photons·m⁻²·s⁻¹, and a 12:12 photoperiod were evaluated. Under the best test conditions, a maximum cell division rate of 1.13 div·day⁻¹ was obtained on the sixth day of culture, at the beginning of the exponential phase, and a maximum concentration of 8.42×10⁷ cells·mL⁻¹ and a dry biomass of 3.49 g·L⁻¹ on the 20th day, in the stationary phase. The lipid content in the first stage of culture was approximately 8% after 12 days and, at the end of the culture in the stationary phase (20 days), ranged from 12% to 16%. In the microalgae grown at 250 μmol photons·m⁻²·s⁻¹, the fatty acid profile was mostly polyunsaturated (52%). The total unsaturated fatty acids identified in this microalgal species reached values between 70 and 75%, qualifying the oil for use in the food and pharmaceutical industries. In addition, this study showed that the cultivation conditions influenced mainly the production of polyunsaturated fatty acids, with a predominance of γ-linolenic acid. However, in the cultures submitted to the highest light intensity (1000 μmol photons·m⁻²·s⁻¹) and low concentrations of nitrate and phosphate, mainly saturated and monounsaturated fatty acids (60 to 70%), which present greater oxidative stability, were identified, qualifying the oil for the production of biodiesel and for oleochemistry.

Keywords: microalgae, Desmodesmus sp, fatty acids, biodiesel

Procedia PDF Downloads 131
659 Structures and Analytical Crucibles in Nigerian Indigenous Art Music

Authors: Albert Oluwole Uzodimma Authority

Abstract:

Nigeria is a diverse nation with a rich cultural heritage that has produced numerous art musicians and a vast range of art songs. The compositional styles, tonal rhythm, text rhythm, word painting, and text-tone relationship vary extensively from one dialect to another, indicating the need for standardized tools for the structural and analytical deconstruction of Nigerian indigenous art music. The purpose of this research is to examine the structures of Nigerian indigenous art music and outline some crucibles for analyzing it, by investigating how dialectical inflection influences the choice of text tone, scale mode, tonal rhythm, and the general ambiance of Nigerian art music. The research used a structured questionnaire to collect data from 50 musicologists, of whom 41 responded. The study focused on the works of two prominent twentieth-century composers, Stephen Olusoji and Nwamara Alvan-Ikoku, titled 'Oyigiyigi' and 'O Chineke, Inozikwa omee,' respectively. The data collected were presented in percentages using pie charts and tables. The study shows that several aspects must be considered for a proper analysis of Nigerian indigenous music, such as linguistic sensitivity and the ways in which dialectical inflection influences the text-tone relationship, text rhythm and tonal rhythm, all of which help to convey the proper meanings of the messages in songs. It also highlights the lack of standardized rubrics for analysis, which necessitated the proposal of robust criteria for analyzing African music, known as Neo-Eclectic-Crucibles. Hinging on an eclectic approach, this research makes significant contributions to music scholarship by addressing the need for standardized tools and crucibles for the structural and analytical deconstruction of Nigerian indigenous art music. It provides a template for further studies leading to standardized rubrics for analyzing African music.
The questionnaire responses, presented in pie charts and tables, were combined with structural analyses of the two indigenous compositions by Olusoji and Nwamara. Together, these answer the research questions on the structures and analytical crucibles used in Nigerian indigenous art music and on how dialectical inflection influences the text-tone relationship, scale mode, tonal rhythm, and the general ambiance of Nigerian art music, and they substantiate the proposed Neo-Eclectic-Crucibles criteria for analyzing African music.

Keywords: art-music, crucibles, dialectical inflections, indigenous, text-tone, tonal rhythm, word-painting

Procedia PDF Downloads 76
658 Small Scale Waste to Energy Systems: Optimization of Feedstock Composition for Improved Control of Ash Sintering and Quality of Generated Syngas

Authors: Mateusz Szul, Tomasz Iluk, Aleksander Sobolewski

Abstract:

Small-scale, distributed energy systems enabling cogeneration of heat and power based on gasification of sewage sludge are considered among the most efficient and environmentally friendly ways of treating it. However, the economics of such an investment are demanding; for a small-scale sewage sludge gasification installation to be profitable, it needs to be efficient and simple at the same time. The article presents results of research on air gasification of sewage sludge in the fixed-bed GazEla reactor. The two most important aspects of the research were the influence of the composition of sewage sludge blends with other feedstocks on the properties of the generated syngas, and the ash sintering problems occurring in the fixed bed. Different means of fuel pretreatment and blending were proposed as ways of dealing with these undesired characteristics, and the influence of RDF (Refuse Derived Fuel) and biomasses in the fuel blends was evaluated. Ash properties were assessed based on proximate, ultimate, and ash composition analyses of the feedstock. The blends were specified based on complementary characteristics of criteria such as C content, moisture, volatile matter, Si, Al, and Mg, and the content of basic metals in the ash was analyzed. The obtained results were assessed using experimental gasification tests and the laboratory ISO procedure for determining characteristic ash melting temperatures. Optimal gasification process conditions were determined by the energetic parameters of the generated syngas, its tar content, and the absence of ash sinters within the reactor bed. The best results were obtained for co-gasification of herbaceous biomasses with sewage sludge, where the LHV (Lower Heating Value) of the obtained syngas reached a stable value of 4.0 MJ/Nm³ for air/steam gasification.
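As a rough illustration of the syngas quality metric used above, the LHV of a gas mixture can be estimated as the volume-fraction-weighted sum of its combustible components' heating values. The sketch below uses typical literature heating values and a hypothetical gas composition, not data from the paper:

```python
# Indicative lower heating values of syngas combustibles, MJ/Nm^3
# (typical literature values, NOT taken from the study):
LHV = {"CO": 12.63, "H2": 10.78, "CH4": 35.88}

def syngas_lhv(vol_frac):
    """LHV of a gas mixture as the volume-fraction-weighted sum of the
    component heating values; inert species (N2, CO2) contribute zero."""
    return sum(vol_frac.get(gas, 0.0) * heat for gas, heat in LHV.items())

# Hypothetical air-gasification syngas composition (volume fractions):
gas = {"CO": 0.18, "H2": 0.14, "CH4": 0.02, "N2": 0.48, "CO2": 0.18}
print(round(syngas_lhv(gas), 2))  # -> 4.5 (MJ/Nm^3)
```

With this composition the mixture lands near the 4.0 MJ/Nm³ range reported for the air/steam co-gasification runs, showing why a modest CH₄ fraction matters so much to the heating value.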

Keywords: ash fusibility, gasification, piston engine, sewage sludge

Procedia PDF Downloads 180
657 Clinical Validation of C-PDR Methodology for Accurate Non-Invasive Detection of Helicobacter pylori Infection

Authors: Suman Som, Abhijit Maity, Sunil B. Daschakraborty, Sujit Chaudhuri, Manik Pradhan

Abstract:

Background: Helicobacter pylori is a common and important human pathogen and the primary cause of peptic ulcer disease and gastric cancer. Currently, H. pylori infection is detected both invasively and non-invasively, but the diagnostic accuracy of existing tests is not up to the mark. Aim: To establish an optimal diagnostic cut-off value for the 13C-Urea Breath Test (13C-UBT) to detect H. pylori infection, and to evaluate a novel c-PDR methodology to overcome the inconclusive grey zone. Materials and Methods: All 83 subjects first underwent upper-gastrointestinal endoscopy followed by a rapid urease test and histopathology; based on these results, 49 subjects were classified as H. pylori positive and 34 as negative. After an overnight fast, patients took 4 g of citric acid in 200 ml of water, and 10 minutes after ingestion of this test meal a baseline exhaled breath sample was collected. Thereafter, an oral dose of 75 mg of 13C-urea dissolved in 50 ml of water was given, and breath samples were collected at 15-minute intervals up to 90 minutes and analysed by laser-based, high-precision cavity-enhanced spectroscopy. Results: We studied the excretion kinetics of the 13C isotope enrichment (expressed as δDOB13C ‰) of the exhaled breath samples and found maximum enrichment around 30 minutes in H. pylori positive patients; this is due to acid-mediated stimulation of urease enzyme activity, with maximum acidification occurring within 30 minutes. No such significant isotopic enrichment was observed in H. pylori negative individuals. Using a Receiver Operating Characteristic (ROC) curve, an optimal diagnostic cut-off value of δDOB13C ‰ = 3.14 at 30 minutes was determined, exhibiting 89.16% accuracy.
To overcome the grey-zone problem, we explored the percentage dose of 13C recovered per hour, i.e. 13C-PDR (%/hr), and the cumulative percentage dose of 13C recovered, i.e. c-PDR (%), in the exhaled breath samples for the present 13C-UBT. We further explored the diagnostic accuracy of the 13C-UBT by constructing a ROC curve using c-PDR (%) values; an optimal cut-off value of c-PDR = 1.47 (%) at 60 minutes was estimated, exhibiting 100% diagnostic sensitivity, 100% specificity, and 100% accuracy of the 13C-UBT for detection of H. pylori infection. We also elucidated the gastric emptying process of the present 13C-UBT for H. pylori positive patients: the maximal emptying rate was found at 36 minutes, and the half-emptying time was found to be 45 minutes. Conclusions: The present study demonstrates the importance of the c-PDR methodology in overcoming the grey-zone problem of the 13C-UBT, allowing accurate determination of infection without risk of diagnostic error and making it a sufficiently robust and novel method for accurate, fast, non-invasive diagnosis of H. pylori infection for large-scale screening purposes.
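A common way to read an optimal cut-off from a ROC curve, as described above, is to scan candidate thresholds and pick the one maximising Youden's J statistic (sensitivity + specificity − 1). The paper does not state which criterion was used, so this is a minimal sketch on synthetic enrichment values, not the study's data:

```python
def best_cutoff(positives, negatives):
    """Scan candidate cut-offs and return the one maximising Youden's J
    (sensitivity + specificity - 1), as read off a ROC curve."""
    candidates = sorted(set(positives) | set(negatives))
    best_c, best_j = None, -1.0
    for c in candidates:
        sens = sum(v >= c for v in positives) / len(positives)
        spec = sum(v < c for v in negatives) / len(negatives)
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# Illustrative (synthetic) delta-over-baseline values:
pos = [3.5, 4.2, 5.0, 6.1, 7.3]   # hypothetical H. pylori positive
neg = [0.4, 0.9, 1.2, 1.8, 2.6]   # hypothetical H. pylori negative
cutoff, j = best_cutoff(pos, neg)
print(cutoff, j)  # -> 3.5 1.0 (perfectly separable data gives J = 1)
```

A J of 1.0 corresponds to the 100% sensitivity/specificity case reported for the c-PDR-based ROC; the overlapping distributions of the plain δDOB values are what produce the grey zone at 30 minutes.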

Keywords: 13C-Urea breath test, c-PDR methodology, grey zone, Helicobacter pylori

Procedia PDF Downloads 289
656 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing

Authors: Jonathan Martino, Kristof Harri

Abstract:

In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time, or even faster-than-real-time, test analysis, offering a real tool for test prediction and design process optimization. Vibration tests are no exception to this trend. So-called 'Virtual Vibration Testing' offers solutions to, among other things, study the influence of specific loads, better anticipate the boundary conditions between the exciter and the structure under test, and study the influence of small changes in the structure under test. This article first presents a virtual vibration test model, with a main focus on the shaker model, and afterwards presents the experimental determination of its parameters. The classical way of modeling a shaker is to consider it as a simple mechanical structure augmented by an electrical circuit that makes the shaker move. The shaker is modeled as a two- or three-degree-of-freedom lumped-parameter model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities are reduced to global parameters, which are estimated through experiments. Different experiments are carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis is also carried out to extract the modal parameters of the shaker and combine them with the electrical measurements. Finally, the article concludes with an experimental validation of the model.
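The mechanical-electrical coupling described above can be sketched for a minimal one-degree-of-freedom case: the coil force Bl·i drives the armature, while the armature velocity feeds a back-EMF term Bl·ẋ into the coil circuit. All parameter values below are hypothetical placeholders, not identified shaker parameters:

```python
def shaker_step(state, V, p, dt):
    """One explicit-Euler step of a minimal 1-DOF electrodynamic shaker:
        m*x'' + c*x' + k*x = Bl*i      (mechanical side, coil force)
        L*i'  + R*i + Bl*x' = V        (electrical side, back-EMF coupling)
    State is (x, v, i): displacement, velocity, coil current."""
    x, v, i = state
    m, c, k, Bl, L, R = p
    a = (Bl * i - c * v - k * x) / m       # armature acceleration
    di = (V - R * i - Bl * v) / L          # coil current derivative
    return (x + dt * v, v + dt * a, i + dt * di)

# Hypothetical global parameters: m [kg], c [Ns/m], k [N/m],
# Bl [N/A], L [H], R [ohm]
p = (0.5, 20.0, 4.0e4, 15.0, 1.0e-3, 2.0)
state = (1e-3, 0.0, 0.0)   # released from 1 mm offset, no drive voltage
for _ in range(20000):     # 0.2 s of free response at dt = 10 us
    state = shaker_step(state, 0.0, p, 1e-5)
print(state)  # free response decays toward rest
```

Note how the back-EMF term adds electrical damping on top of the mechanical damping c; this coupling is exactly why the global parameters must be identified jointly from mechanical and electrical measurements, as the article proposes.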

Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration

Procedia PDF Downloads 255