Search results for: error matrices
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2208

1428 Amino Acid Based Biodegradable Poly (Ester-Amide)s and Their Potential Biomedical Applications as Drug Delivery Containers and Antibacterial

Authors: Nino Kupatadze, Tamar Memanishvili, Natia Ochkhikidze, David Tugushi, Zaal Kokaia, Ramaz Katsarava

Abstract:

Amino acid-based biodegradable poly(ester-amide)s (PEAs) have gained considerable interest as promising materials for numerous biomedical applications. These polymers exhibit high biocompatibility and readily form small particles suitable for the delivery of various biologicals, as well as elastic bio-erodible films that serve as matrices for constructing antibacterial coatings. In the present work, we demonstrate the potential of PEAs for two applications: 1. cell therapy for stroke, as vehicles for the delivery and sustained release of growth factors; 2. bactericidal coatings for biofilm prevention, applicable in infected wound management. Stroke remains the main cause of adult disability, with limited treatment options. Although stem cell therapy is a promising strategy, it still requires improvements in cell survival, differentiation, and tissue modulation. Recently, microspheres (MPs) made of biodegradable polymers have gained significant attention for providing the necessary support for transplanted cells. To investigate this strategy in the cell therapy of stroke, MPs loaded with the transcription factors Wnt3A/BMP4 were prepared. These proteins have been shown to mediate the maturation of cortical neurons. We suggest that implantation of these materials could create a suitable microenvironment for implanted cells. Particles with a spherical shape, porous surface, and 5-40 μm size (monitored by scanning electron microscopy) were made on the basis of an original PEA composed of adipic acid, L-phenylalanine, and 1,4-butanediol. Four months after transplantation of the MPs into rodent brain, no inflammation was observed. Additionally, the factors were successfully released from the MPs and affected neuronal cell differentiation in vitro. An in vivo study using loaded MPs is in progress. Another severe problem in biomedicine is the protection of surgical devices from biofilm formation.
Antimicrobial polymeric coatings are the most effective “shields” to protect surfaces/devices from biofilm formation. Among the matrices for constructing such coatings, preference should be given to bio-erodible polymers. Coatings of this type play the role of an “unstable seating” that does not allow bacteria to occupy the surface. In other words, a bio-erodible coating is an uncomfortable shelter for bacteria that, along with releasing bactericidal agents, should prevent the formation of biofilm. For this purpose, we selected an original biodegradable PEA composed of L-leucine, 1,6-hexanediol, and sebacic acid as a bio-erodible matrix, and nanosilver (AgNPs) as a bactericidal agent. Such a nanocomposite material is also promising for the treatment of superficial wounds and ulcers. The solubility of the PEA in ethanol allows AgNO3 to be reduced to NPs directly in solution, where the solvent serves as the reducing agent and the PEA as the NP stabilizer. Photochemical reduction was selected as the basic method to form the NPs. The obtained AgNPs were characterized by UV spectroscopy, transmission electron microscopy (TEM), and dynamic light scattering (DLS). According to the UV and TEM data, the photochemical reduction resulted in spherical AgNPs with a wide particle size distribution and a high contribution of particles below 10 nm, which are known to be responsible for the bactericidal activity of AgNPs. The DLS study showed that the average size of the nanoparticles formed after photo-reduction in ethanol solution was around 50 nm.

Keywords: biodegradable polymers, microparticles, nanocomposites, stem cell therapy, stroke

Procedia PDF Downloads 391
1427 Automatic Vertical Wicking Tester Based on Optoelectronic Techniques

Authors: Chi-Wai Kan, Kam-Hong Chau, Ho-Shing Law

Abstract:

Wicking property is important for textile finishing and wear comfort. Good wicking properties ensure uniformity and efficiency of textile treatment. In terms of wear comfort, quick-wicking fabrics facilitate the evaporation of sweat; the wetness sensation on the skin is therefore minimised to prevent discomfort. The testing method for vertical wicking was standardised by the American Association of Textile Chemists and Colorists (AATCC) in 2011. The traditional vertical wicking test is prone to human error when observing fast-changing and/or unclear wicking heights. This study introduces optoelectronic devices to achieve an automatic Vertical Wicking Tester (VWT) and reduce human error. The VWT records the wicking time and wicking height of samples. By reducing the difficulty of manual judgment, the reliability of the vertical wicking experiment is greatly increased; moreover, labour requirements are greatly reduced. The automatic measurement of the VWT uses optoelectronic devices to trace the liquid wicking with a simple operating procedure. The optoelectronic devices detect the colour difference between dry and wet samples, which allows high sensitivity to irradiance differences down to 10 μW/cm²; the VWT is therefore capable of testing dark fabric. The VWT gives a wicking distance (wicking height) at 1 mm resolution and a wicking time at one-second resolution. Acknowledgment: This is a research project of HKRITA funded by the Innovation and Technology Fund (ITF) with the title “Development of an Automatic Measuring System for Vertical Wicking” (ITP/055/20TP). The authors would like to thank the ITF for its financial support.
Any opinions, findings, conclusions or recommendations expressed in this material/event (or by members of the project team) do not reflect the views of the Government of the Hong Kong Special Administrative Region, the Innovation and Technology Commission or the Panel of Assessors for the Innovation and Technology Support Programme of the Innovation and Technology Fund and the Hong Kong Research Institute of Textiles and Apparel. Also, we would like to thank the support and sponsorship from Lai Tak Enterprises Limited, Kingis Development Limited and Wing Yue Textile Company Limited.

Keywords: AATCC method, comfort, textile measurement, wetness sensation

Procedia PDF Downloads 92
1426 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept

Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua

Abstract:

Rivers, our main source of water, are a form of open channel flow, and flow in open channels presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of sediments and specific ingredients that are essential for human beings. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments. The pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae are suitably modified, and the Energy Concept Method (ECM) is applied for the prediction of discharges at the junction of a two-flow braided compound channel. The Energy Concept Method has not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on mechanical analysis. The cross-section of the channel is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). From the error analysis of this method, it is observed that the relative error is smaller for data-sets with smooth floodplains than for rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
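The Divided Channel Method used above for comparison can be sketched numerically: each subsection's discharge follows Manning's equation, Q = (1/n) A R^(2/3) S^(1/2), and the total discharge is the sum over subsections split at the bank-full level. The geometry, roughness, and slope values below are hypothetical illustration values, not from the experiments reported here.

```python
import math

def manning_discharge(area, wetted_perimeter, n, slope):
    """Manning's equation for one subsection: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    r = area / wetted_perimeter          # hydraulic radius
    return area * r ** (2.0 / 3.0) * math.sqrt(slope) / n

# Divided Channel Method (DCM): split the compound section at the bank-full
# level and sum subsection discharges (hypothetical geometry and roughness).
main = manning_discharge(area=6.0, wetted_perimeter=8.0, n=0.015, slope=1e-3)
flood = manning_discharge(area=4.0, wetted_perimeter=10.0, n=0.030, slope=1e-3)
total = main + flood

assert total > max(main, flood)          # both subsections contribute
```

The ECM replaces this simple area split with an energy-loss balance across the junction; the split itself is the part the two methods share.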

Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel

Procedia PDF Downloads 122
1425 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Opinion dynamics models use an agent-based modeling approach to model people’s opinions. A model's properties are usually explored by testing its two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. It can be used to turn one model into another, or to change a model’s output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) can be turned into a number, and specifically whether, by choosing a different opinion-to-number transformation, the model’s dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In that field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into another, in the same way as we convert meters to feet. Thus, in our work, we analyze how such scale transformations may affect opinion dynamics models. We perform our analysis both using mathematical modeling and validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model’s dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way data are pre-processed. Indeed, we quantify that this effect may alter the model’s output by 100%.
By using two models from the standard literature, we show that a scale transformation can transform one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
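The effect described above can be illustrated with a toy sketch (not the authors' implementation): in a Deffuant-style bounded-confidence model, whether two agents interact at all depends on whether their opinions differ by less than a confidence bound, and a monotone rescaling of the opinion scale can change exactly that.

```python
import numpy as np

def deffuant_step(x, eps=0.2, mu=0.5, rng=None):
    """One sweep of a Deffuant-style bounded-confidence model: a random
    pair of agents moves together by factor mu only when their opinions
    differ by less than the confidence bound eps."""
    rng = rng or np.random.default_rng(0)
    x = x.copy()
    for _ in range(len(x)):
        i, j = rng.integers(0, len(x), size=2)
        if abs(x[i] - x[j]) < eps:
            shift = mu * (x[j] - x[i])
            x[i], x[j] = x[i] + shift, x[j] - shift
    return x

raw = np.array([0.50, 0.75])
# On the raw scale the pair sits outside the confidence bound: no interaction.
assert abs(raw[0] - raw[1]) >= 0.2
assert np.allclose(deffuant_step(raw), raw)

# A monotone rescaling (sqrt) moves the same pair inside the bound,
# so the dynamics change qualitatively.
rescaled = np.sqrt(raw)
assert abs(rescaled[0] - rescaled[1]) < 0.2
```

The rescaling is order-preserving, so both encodings are equally legitimate scales for the same underlying opinions, yet one produces consensus where the other produces none.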

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 150
1424 Establishing Control Chart Limits for Rounded Measurements

Authors: Ran Etgar

Abstract:

The process of rounding off measurements of continuous variables is commonly encountered. Although it usually has minor effects, it can sometimes lead to poor outcomes in statistical process control using the X̄ chart, and the traditional control limits can lead to incorrect conclusions if applied carelessly. This study looks into the limitations of the classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter ȳ is presented, resulting in a more precise method for establishing the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate, and only requires the use of two straightforward tables.
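The problem itself is easy to demonstrate with a generic sketch (this is not the method proposed in the abstract): when the rounding step is comparable to the process standard deviation, naive Shewhart X̄ limits computed from rounded data differ from those computed from the exact data.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 10.0, 0.05, 5                   # process parameters, subgroup size
samples = rng.normal(mu, sigma, size=(10_000, n))

def xbar_limits(data, n):
    """Classical Shewhart X-bar limits: grand mean +/- 3 * sigma_hat / sqrt(n)."""
    m, s = data.mean(), data.std(ddof=1)
    half = 3.0 * s / np.sqrt(n)
    return m - half, m + half

lo, hi = xbar_limits(samples, n)
lo_r, hi_r = xbar_limits(samples.round(1), n)  # measurements rounded to 0.1

# Rounding to a step of 2*sigma inflates the estimated spread, so naive
# limits computed from rounded data are wider than the exact ones.
assert (hi_r - lo_r) > (hi - lo)
```

The abstract's contribution is to replace these naive limits with ones derived from the actual (asymmetric) distribution of the rounded statistic.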

Keywords: SPC, round-off data, control limit, rounding error

Procedia PDF Downloads 67
1423 Effect of Naameh Landfill (Lebanon) on Groundwater Quality of the Surrounding Area

Authors: Rana Sawaya, Jalal Halwani, Isam Bashour, Nada Nehme

Abstract:

Mismanagement of municipal solid waste in Lebanon might lead to serious environmental problems, especially since a large portion of mixed waste, including putrescibles, is transferred to the Naameh landfill. One consequence of municipal solid waste deposition is the production of landfill leachate, which, if improperly treated, threatens crucial matrices such as soil, water, and air. The main aim of this one-of-a-kind study is to assess the risk posed to groundwater by leachate infiltration into off-site wells, especially after the stoppage of the Naameh landfill's operation at the end of 2016 and the initiation of the capping process, which is still ongoing and will be finalized in December 2019. For this purpose, nine representative points around the landfill were selected to undergo physicochemical and microbial analysis on a seasonal basis (every three months). The study extended from 2014 until the end of 2016 (closure of the Naameh landfill). The preliminary data were statistically analyzed using the Statistical Package for the Social Sciences (SPSS) and were found to be in conformity with international and Lebanese norms. The study will therefore be extended an additional year, especially after the finalization of capping, and the results obtained will enable us to propose new techniques and tools (treatment systems) for water resources management, depending on the direction of usage (domestic, irrigation, drinking).

Keywords: contamination, groundwater, leachate, Lebanon, solid waste

Procedia PDF Downloads 127
1422 RAFU Functions in Robotics and Automation

Authors: Alicia C. Sanchez

Abstract:

This paper investigates the implementation of RAFU functions (radical functions) in robotics and automation. Specifically, the main goal is to show how these functions may be useful in lane-keeping control and the lateral control of autonomous machines, vehicles, robots, or the like. From the knowledge of several points of a certain route, the RAFU functions are used to achieve the lateral control purpose and maintain the lane-keeping errors within fixed limits. The stability that these functions provide, their ability to approximate any continuous trajectory, and the control of the approximation error may be useful in practice.

Keywords: automatic navigation control, lateral control, lane-keeping control, RAFU approximation

Procedia PDF Downloads 291
1421 Reliability-Based Life-Cycle Cost Model for Engineering Systems

Authors: Reza Lotfalian, Sudarshan Martins, Peter Radziszewski

Abstract:

The effect of reliability on life-cycle cost, including the initial and maintenance cost of a system, is studied. The failure probability of a component is used to calculate the average maintenance cost during the operating cycle of the component. The standard deviation of the life-cycle cost is also calculated as an error measure for the average life-cycle cost. As a numerical example, the model is used to study the average life-cycle cost of an electric motor.
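The decomposition above can be sketched under a simplifying assumption the abstract does not state (a constant failure rate, so failure counts are Poisson and the variance of the maintenance cost equals the repair cost squared times the expected failure count); the motor figures are hypothetical.

```python
import math

def expected_lifecycle_cost(c_init, c_repair, failure_rate, horizon):
    """Expected life-cycle cost: initial cost plus repair cost times the
    expected number of failures over the operating horizon."""
    expected_failures = failure_rate * horizon   # Poisson mean
    return c_init + c_repair * expected_failures

def lifecycle_cost_std(c_repair, failure_rate, horizon):
    """Std of the maintenance cost, used as an error measure on the mean
    (for a Poisson failure count, variance = mean)."""
    return c_repair * math.sqrt(failure_rate * horizon)

# Hypothetical electric motor: $500 initial, $120 per repair,
# 0.3 failures/year over a 10-year operating cycle.
mean_cost = expected_lifecycle_cost(500.0, 120.0, 0.3, 10.0)
std_cost = lifecycle_cost_std(120.0, 0.3, 10.0)

assert mean_cost == 500.0 + 120.0 * 3.0      # initial + expected maintenance
```

The standard deviation then quantifies how far an individual motor's realized cost may stray from that average.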

Keywords: initial cost, life-cycle cost, maintenance cost, reliability

Procedia PDF Downloads 593
1420 Innovative Activity and Development: Analysing Firm Data from Eurozone Country-Members

Authors: Ilias A. Makris

Abstract:

In this work, we attempt to associate firm characteristics with innovative activity. We collect microdata from listed firms of selected Eurozone country-members after the beginning of the 2007 financial crisis. Following the literature, several indicators of growth and performance were selected and tested for their ability to interpret innovative activity. The main scope is to examine possible differences in performance and growth between innovative and non-innovative firms during a severe recession. In addition, a special focus is placed on whether macroeconomic performance and the national innovation system determine the extent of innovators' performance. Preliminary findings, through correlation matrices and non-parametric tests, strongly indicate a positive relation between innovative activity and most of the measures used (profitability, size, employment), confirming that even during a recessionary period, innovative firms not only survive but also seem to achieve better economic results in almost all indexes relative to non-innovative firms. However, even though innovators seem to perform better in all the economies examined, the extent of that performance seems to be strongly affected by the supportive mechanisms (financial and structural) that their country provides. Thus, it is clear that the technological 'gap' between the European South and North became chaotic during the economic crisis, due to the harsh austerity measures and reduced budgets in the southern countries, even in sectors with high potential for economic activity and employment, aggravating the effects of the crisis and reinforcing the vicious circle of recession.
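The kind of non-parametric comparison mentioned above can be sketched with a generic Mann-Whitney U statistic (this is not the authors' analysis, and the profitability figures are invented for illustration):

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic: number of (x_i, y_j) pairs with x_i > y_j, counting
    ties as one half. Large U means x tends to rank above y."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return greater + 0.5 * ties

# Hypothetical return-on-assets samples for innovative vs non-innovative firms.
innovative = [4.1, 5.3, 6.0, 3.8, 7.2, 5.5, 4.9, 6.4]
non_innovative = [1.2, 2.8, 3.0, 0.9, 2.1, 3.3, 1.7, 2.5]

u = mann_whitney_u(innovative, non_innovative)
# Complete separation: every innovative firm outranks every non-innovative one.
assert u == len(innovative) * len(non_innovative)
```

Rank-based tests like this avoid the normality assumptions that firm-level financial data typically violate.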

Keywords: eurozone, innovative activity, development, firm performance, non-parametric tests

Procedia PDF Downloads 432
1419 Improvement of the Numerical Integration's Quality in Meshless Methods

Authors: Ahlem Mougaida, Hedi Bel Hadj Salah

Abstract:

Several methods are suggested to improve the numerical integration in the Galerkin weak form for meshless methods. In fact, integrating without taking into account the characteristics of the shape functions reproduced by meshless methods (rational functions, with compact support, etc.) causes a large integration error that influences the PDE’s approximate solution. Different methods of numerical integration for rational functions are discussed and compared. The algorithms are implemented in Matlab. Finally, numerical results are presented to prove the efficiency of our algorithms in improving results.
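A minimal sketch of why fixed-order quadrature struggles on rational integrands, and how an adaptive rule helps. The abstract's algorithms are in Matlab; this illustrative Python version uses a sharply peaked rational function as a stand-in for a meshless shape-function term.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss(f, a, b, order):
    """Fixed-order Gauss-Legendre quadrature on [a, b]."""
    x, w = leggauss(order)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * np.sum(w * f(mid + half * x))

def adaptive(f, a, b, order=5, tol=1e-8):
    """Recursive bisection: accept a panel once refining it barely changes
    the estimate, so effort concentrates where the integrand is sharp."""
    whole = gauss(f, a, b, order)
    m = 0.5 * (a + b)
    halves = gauss(f, a, m, order) + gauss(f, m, b, order)
    if abs(whole - halves) < tol * max(1.0, abs(halves)):
        return halves
    return adaptive(f, a, m, order, tol) + adaptive(f, m, b, order, tol)

# A sharply peaked rational integrand (compact, rational form like an
# MLS-type shape-function term):
eps = 1e-2
f = lambda x: 1.0 / (x**2 + eps**2)
exact = 2.0 * np.arctan(1.0 / eps) / eps          # integral over [-1, 1]

err_fixed = abs(gauss(f, -1.0, 1.0, 10) - exact)  # one 10-point rule: poor
err_adapt = abs(adaptive(f, -1.0, 1.0) - exact)   # adaptivity resolves the peak
assert err_adapt < 1e-4 < err_fixed
```

Gaussian rules are exact for polynomials, so a single rule across a rational peak misses most of the mass; subdividing until each panel is locally smooth restores accuracy.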

Keywords: adaptive methods, meshless, numerical integration, rational quadrature

Procedia PDF Downloads 354
1418 Protein Stabilized Foam Structures as Protective Carrier Systems during Microwave Drying of Probiotics

Authors: Jannika Dombrowski, Sabine Ambros, Ulrich Kulozik

Abstract:

Due to the increasing popularity of healthy products, probiotics are of rising importance in food manufacturing. With the aim of extending the field of probiotic application to non-chilled products, the cultures have to be preserved by drying. Microwave drying has proved to be a suitable technique for achieving relatively high survival rates, resulting among other things from drying at gentle temperatures. However, diffusion limitation due to compaction of the cell suspension during drying can prolong drying times as well as deteriorate product properties (grindability, rehydration performance). Therefore, we aimed to embed probiotics in an aerated matrix of whey proteins (surfactants) and di-/polysaccharides (foam stabilization, probiotic protection) during drying. As a result of the manifold increase in the inner surface of the cell suspension, drying performance was enhanced significantly compared to non-foamed suspensions. This work comprises investigations of suitable foam matrices that are stable under vacuum (variation of protein concentration, type and concentration of di-/polysaccharide), as well as the development of an applicable microwave drying process in terms of microwave power, chamber pressure, and maximum product temperatures. The performed analyses included foam characteristics (overrun, drainage, firmness, bubble sizes) and properties of the dried cultures (survival, activity). In addition, the efficiency of the drying process was evaluated.

Keywords: foam structure, microwave drying, polysaccharides, probiotics

Procedia PDF Downloads 254
1417 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications

Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain

Abstract:

In this paper, we present a hardware module dedicated to understanding the fail reason of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEE) on processor-based SoCs is a concern that has increased in the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. The SEE created by the impact of a particle on an SoC may have consequences that can lead to instability or crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable. The SoC is then qualified using cosmic ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission profiles. Understanding where errors occur is another challenge because of the complexity of the operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEE, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we have carried out a fault injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses. They highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor agnostic, it has been interfaced to a RISC-V core by probing some of the processor registers. Probes are then recorded in a ring buffer.
Associated hardware watchpoints allow some control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; a typical usage is operating system context-switch recording. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without being intrusive to the SoC. Moreover, in case of CPU unresponsiveness or a system-bus stall, the recorded information can still be recovered, providing the fail reason. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domain is automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on fault-injection results as well as irradiation results.
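The "bad address" failure mode described above can be modeled with a toy single-event-upset sketch (the address and bit position are purely illustrative, not values from the campaign):

```python
def flip_bit(word, bit, width=32):
    """Model a single-event upset: flip one bit of a register value,
    masked to the register width."""
    return (word ^ (1 << bit)) & ((1 << width) - 1)

# A bit-flip in the upper bits of an address register produces a "bad
# address" failure mode like those observed during fault injection.
addr = 0x2000_0040
faulty = flip_bit(addr, 31)

assert faulty != addr
assert flip_bit(faulty, 31) == addr   # an XOR flip is its own inverse
```

Injecting such flips into the RTL model's registers and watching the probes is what lets the campaign map each corrupted resource to its downstream failure mode.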

Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application

Procedia PDF Downloads 226
1416 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, including initial loss, reduction factor, time of concentration, and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments.
The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be obtained; in contrast, MIKE URBAN just provides a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
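The ABC rejection idea can be sketched end-to-end. The time-area model itself is not reproduced here; a linear-reservoir toy simulator stands in for it, and the priors, rainfall series, and "true" parameters are all invented for illustration.

```python
import numpy as np

def simulate_runoff(theta, rain):
    """Toy hydrologic simulator (hypothetical stand-in for the time-area
    model): subtract an initial loss, then route through a linear
    reservoir with time constant k."""
    init_loss, k = theta
    effective = np.maximum(rain - init_loss, 0.0)
    q = np.zeros_like(effective)
    for t in range(1, len(q)):
        q[t] = q[t - 1] * np.exp(-1.0 / k) + effective[t]
    return q

def abc_rejection(observed, rain, n_draws=5000, quantile=0.01, seed=1):
    """ABC rejection: draw parameters from the prior, keep the draws whose
    simulated runoff is closest to the observations (RMSE distance)."""
    rng = np.random.default_rng(seed)
    thetas = np.column_stack([rng.uniform(0.0, 2.0, n_draws),    # initial loss
                              rng.uniform(1.0, 20.0, n_draws)])  # time constant
    dist = np.array([np.sqrt(np.mean((simulate_runoff(t, rain) - observed) ** 2))
                     for t in thetas])
    keep = dist <= np.quantile(dist, quantile)
    return thetas[keep]            # approximate posterior sample

rain = np.array([0, 3, 8, 5, 1, 0, 0, 0], dtype=float)
observed = simulate_runoff((0.5, 5.0), rain)   # synthetic "observations"

posterior = abc_rejection(observed, rain)
# The retained draws form the approximate posterior; their spread is the
# prediction uncertainty that a point-estimate calibration cannot give.
assert posterior.shape[1] == 2
```

No likelihood is ever evaluated: closeness of simulated to observed output replaces it, which is exactly what makes ABC usable with an arbitrary simulator.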

Keywords: automatic calibration framework, approximate bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform

Procedia PDF Downloads 296
1415 Investigation of Delivery of Triple Play Service in GE-PON Fiber to the Home Network

Authors: Anurag Sharma, Dinesh Kumar, Rahul Malhotra, Manoj Kumar

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network. This paper demonstrates the simultaneous delivery of triple-play service (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.
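The data-rate/user-count trade-off rests on the standard Gaussian-noise BER relation (a textbook formula, not the simulation setup used in the paper): as per-user power or signal quality drops, the Q factor falls and BER rises steeply past the acceptance threshold.

```python
import math

def ber_from_q(q):
    """On-off keying BER under Gaussian noise: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Q = 6 corresponds to the classic ~1e-9 BER acceptance threshold.
assert 0.9e-9 < ber_from_q(6.0) < 1.1e-9

# A one-unit drop in Q degrades BER by more than two orders of magnitude,
# which is why adding users (splitting power) quickly breaks the link budget.
assert ber_from_q(5.0) > 100 * ber_from_q(6.0)
```

In a PON, each extra split divides the received optical power, so the maximum user count at a given rate is set by where the resulting Q crosses this threshold.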

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 725
1414 Effects of Interfacial Modification Techniques on the Mechanical Properties of Natural Particle Based Polymer Composites

Authors: Bahar Basturk, Secil Celik Erbas, Sevket Can Sarikaya

Abstract:

Composites combining particulate and polymer components have attracted great interest in various application areas such as the packaging, furniture, electronics, and automotive industries. For strengthening plastic matrices, the utilization of natural fillers instead of traditional reinforcement materials has received increased attention. The properties of natural filler based polymer composites (NFPC) may be improved by applying proper surface modification techniques to the powder phase of the structures. In this study, acorn powder-epoxy and pine cone powder-epoxy composites containing up to 45 wt.% particulates were prepared by the casting method. Alkali treatment and acetylation techniques were applied to the natural particulates to investigate their influence under mechanical forces. The effects of filler type and content on the tensile properties of the composites were compared with neat epoxy. According to the quasi-static tensile tests, the pine cone based composites showed slightly higher rigidity and strength than the acorn reinforced samples. Furthermore, the structures, independent of powder type and surface modification technique, showed higher tensile properties with increasing particle content.

Keywords: natural fillers, polymer composites, surface modifications, tensile properties

Procedia PDF Downloads 457
1413 The Effect of AMBs Number of a Dynamics Behavior of a Spur Gear Reducer in Non-Stationary Regime

Authors: Najib Belhadj Messaoud, Slim Souissi

Abstract:

The non-linear dynamic behavior of a single-stage spur gear reducer is studied in this paper in the transient regime. The driving and driven rotors are, respectively, powered by a motor torque Cm and loaded by a resistive torque Cr. They are supported by two identical Active Magnetic Bearings (AMBs). Gear excitation is induced by the motor torque and load variation, in addition to the fluctuation of meshing stiffness due to the variation of the input rotational speed. Three models of AMBs were used, with four, six, and eight magnets. They are operated by a PD controller and powered by control and bias currents. The dynamic parameters of the AMBs are modeled by stiffness and damping matrices computed by differentiating the electromagnetic forces. The equations of motion are solved iteratively using the Newmark time integration method. In the first part of the study, the model is powered by an electric motor, and in the second part by a four-stroke, four-cylinder diesel engine. The numerical results for the dynamic responses of the system confirm the significant effect of the transient regime on the dynamic behavior of a gear set, particularly in the case of engine acyclism. The results also confirm the influence of the number of magnets in the AMBs on the dynamic behavior of the system. Indeed, vibrations were larger in the case of the gear reducer supported by AMBs with four magnets.
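The Newmark time integration step referenced above can be sketched for a single degree of freedom (the average-acceleration variant, beta = 1/4, gamma = 1/2; the gear model itself is far richer, with time-varying stiffness matrices):

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, n_steps, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark scheme for m*u'' + c*u' + k*u = f(t)."""
    u = v = 0.0
    a = (f(0.0) - c * v - k * u) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)  # effective stiffness
    hist = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        rhs = (f(t)
               + m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (0.5 * gamma / beta - 1) * a))
        u_new = rhs / keff
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return np.array(hist)

# Sanity check: a constant force on a damped spring settles at F/k.
m, c, k = 1.0, 2.0, 100.0
u = newmark_sdof(m, c, k, lambda t: 50.0, dt=0.01, n_steps=2000)
assert abs(u[-1] - 50.0 / k) < 1e-3
```

In the paper's setting the same predictor-corrector structure is applied to the full matrix system, re-evaluating the meshing stiffness at every step to capture the acyclism-driven fluctuations.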

Keywords: motor, stiffness, gear, acyclism, fluctuation, torque

Procedia PDF Downloads 455
1412 Elaboration and Physico-Chemical Characterization of Edible Films Made from Chitosan and Spray Dried Ethanolic Extracts of Propolis

Authors: David Guillermo Piedrahita Marquez, Hector Suarez Mahecha, Jairo Humberto Lopez

Abstract:

It was necessary to establish which formulation is suitable for the preservation of aquaculture products; that is why edible films were made. These were subjected to characterization in order to determine their morphology and their physicochemical, mechanical, and optical properties. Six formulations of chitosan and encapsulated propolis ethanolic extract were developed because of their activity against pathogens and because of their properties, which allow the creation of polymer networks impermeable to gases and vapor and resistant to physical damage. In the six formulations, the concentration of the matrix material (1% w/v, 2% w/v) and the bioactive concentration (0.5% w/v, 1% w/v, 1.5% w/v) were varied, and the results obtained were compared using statistical and multivariate analysis methods. It was observed that the matrices showed greater impermeability and thickness than the control samples and the samples reported in the literature. These films also showed notable uniformity and greater resistance to physical damage compared with edible films made of other biopolymers. However, the action of some compounds had a negative effect on the mechanical properties and drastically changed the optical properties; the bioactive has an effect on the polymer matrix. It was determined that the films with 2% w/v of chitosan and 1.5% w/v of encapsulated extract exhibited the best properties and suffered to a lesser extent the negative impact of immiscible substances.

Keywords: chitosan, edible films, ethanolic extract of propolis, mechanical properties, optical properties, physical characterization, scanning electron microscopy (SEM)

Procedia PDF Downloads 437
1411 Use of Fractal Geometry in Machine Learning

Authors: Fuad M. Alkoot

Abstract:

The main component of a machine learning system is the classifier. Classifiers are mathematical models that can perform classification tasks for a specific application area. Additionally, many classifiers are combined using any of the available methods to reduce the classifier error rate. The benefits gained from combining multiple classifier designs have motivated the development of diverse approaches to multiple classifiers. We aim to investigate the use of fractal geometry to develop an improved classifier combiner. Initially, we experiment with measuring the fractal dimension of data and use the results in the development of a combiner strategy.
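The fractal dimension measurement mentioned above can be sketched with a standard box-counting estimator (a generic illustration, not the authors' combiner; it assumes 2-D data normalized to the unit square):

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of points in [0, 1]^2.
    Counts occupied grid cells at several resolutions and fits
    log N(s) ~ D * log s by least squares; the slope D is the estimate."""
    pts = np.asarray(points, dtype=float)
    counts = []
    for s in scales:
        # map each point to a grid-cell index at resolution s x s
        cells = np.floor(np.clip(pts, 0, 1 - 1e-12) * s).astype(int)
        counts.append(len({tuple(c) for c in cells}))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# points filling the unit square should give a dimension near 2
rng = np.random.default_rng(0)
square = rng.random((20000, 2))
d = box_counting_dimension(square)
print(f"estimated dimension: {d:.2f}")
```

A fractal set such as a Cantor-like point cloud would instead give a non-integer slope, which is the kind of per-dataset measurement a combiner strategy could condition on.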

Keywords: fractal geometry, machine learning, classifier, fractal dimension

Procedia PDF Downloads 206
1410 Nanosilver Containing Biodegradable Bionanocomposites for Antimicrobial Application: Design, Preparation and Study

Authors: Nino Kupatadze, Shorena Tskhadadze, Mzevinar Bedinashvili, David Tugushi, Ramaz Katsarava

Abstract:

Surgical device-associated infection and biofilm formation are among the major problems in biomedicine today. The declining protective ability of conventional antimicrobial drugs leads to challenges in current antibiotic therapy, the most serious of which is antibiotic resistance. Our strategy to overcome biofilm formation consists in coating devices with a polymeric film containing nanosilver (AgNPs) as a bactericidal agent. Such bionanocomposites are also promising as wound dressing materials. For this purpose, we have developed a new generation of AgNP-containing polymeric composites in which amino acid based biodegradable poly(ester amide)s (PEAs) served as both matrices and AgNP stabilizers. The AgNPs were formed by photochemical (daylight) reduction of AgNO3 in ethanol solution. The formation of AgNPs was monitored by the brownish-red coloration of the solution and the appearance of an absorption maximum at 420-430 nm in the UV spectrum. Comparative studies of PEAs with polyvinylpyrrolidone (PVP) as particle stabilizers were carried out. It was found that PVP is a better stabilizer in terms of particle yield and stability. Therefore, in subsequent experiments, blends of PEAs and PVP were used as stabilizers for fabricating AgNPs. As expected, PVP increased the stabilizing effect, and this was apparent in the UV spectra of the samples after 7 h of daylight irradiation: for pure PVP λmax = 430 nm, D = 2.03; for pure PEA λmax = 420 nm, D = 0.65; and for the blend of PVP and PEA λmax = 435 nm, D = 1.88. Further study of the obtained nanobiocomposites is now in progress.

Keywords: biodegradation, bionanocompositions, polymer, nanosilver

Procedia PDF Downloads 339
1409 Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

Neural network approaches are machine learning methods used in many domains, such as healthcare and cyber security. Neural networks are mostly known for dealing with image datasets. While training with images, several fundamental mathematical operations are carried out in the neural network. These operations include a number of algebraic and mathematical functions, including differentiation, convolution, and matrix inversion and transposition. Such operations require higher processing power than is typically needed for ordinary computer usage. A Central Processing Unit (CPU) is not appropriate for large image datasets, as it is built for serial processing, while a Graphics Processing Unit (GPU) has parallel processing capabilities and therefore higher speed. This paper uses advanced neural network techniques such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST-VGG16, and our proposed models to compare CPU and GPU resources. A system for classifying autism disease using face images of autistic and non-autistic children was used to compare performance during testing. We used evaluation metrics such as accuracy, F1 score, precision, recall, and execution time. It has been observed that the GPU runs faster than the CPU in all tests performed. Moreover, the performance of the neural network models in terms of accuracy increases on the GPU compared to the CPU.
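The algebraic kernels named above (convolution, inversion, transposition) can be timed in isolation to see why they dominate training cost; a CPU-only NumPy sketch with illustrative sizes (a GPU comparison would swap in an accelerator library such as CuPy, which mirrors the same array API):

```python
import time
import numpy as np

def bench(fn, *args, repeat=5):
    """Return the best wall-clock time of fn(*args) over several runs."""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

rng = np.random.default_rng(1)
a = rng.standard_normal((500, 500))   # square matrix for inv/transpose
k = rng.standard_normal(64)           # 1-D convolution kernel
x = rng.standard_normal(100_000)      # 1-D signal

timings = {
    "transpose+copy": bench(lambda m: np.ascontiguousarray(m.T), a),
    "inversion":      bench(np.linalg.inv, a),
    "convolution":    bench(np.convolve, x, k),
}
for name, t in timings.items():
    print(f"{name:15s} {t * 1e3:8.2f} ms")
```

All three kernels are embarrassingly parallel over independent output elements, which is exactly the workload shape that favors a GPU's many-core architecture over serial CPU execution.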

Keywords: autism disease, neural network, CPU, GPU, transfer learning

Procedia PDF Downloads 107
1408 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore

Authors: Qiao-Yu Warren Cai

Abstract:

Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners’ language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of the research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can be used to make summations about the subjects or samples that the research has actually measured in order to describe the numerical data, but the collected data cannot be generalized to the population. Comte, a French positivist, has argued since the 19th century that human knowledge, whether the discipline is humanistic and social science or natural science, should be verified in a scientific way to construct a universal theory to explain the truth and human behavior. Inferential statistics, which can judge the probability that a difference observed between groups is dependable or caused by chance (Free Geography Notes, 2015) and can infer from the subjects or samples what the population might think or how it might behave, is just the right method to support Comte’s argument in the field of TCSOL. Inferential statistics is also a core of quantitative research, but little research has been conducted by combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners’ corpus output errors using the One-way ANOVA, so the findings of previous research are limited to inferring the population's Chinese errors from the given samples’ Chinese corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study utilizes the One-way ANOVA to analyze the corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore.
The results show that no significant difference exists in ‘shì (是) sentence’ and word order errors, but compared with American and Singaporean learners, it is significantly easier for Myanmar learners to produce ‘sentence blends.’ Based on the above results, the present study proposes an instructional approach and contributes to further exploration of how Chinese L2 learners can acquire (and use) learning strategies to reduce errors.
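The One-way ANOVA described above can be sketched with SciPy on per-learner error counts for the three L1 groups (illustrative numbers, not the study's data):

```python
from scipy.stats import f_oneway

# hypothetical 'sentence blend' error counts per learner, by L1 background
america   = [1, 0, 2, 1, 1, 0, 2, 1]
myanmar   = [4, 3, 5, 4, 6, 3, 5, 4]
singapore = [1, 2, 0, 1, 1, 2, 0, 1]

# one-way ANOVA: does at least one group mean differ?
stat, p = f_oneway(america, myanmar, singapore)
print(f"F = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print("at least one group mean differs significantly")
```

A significant F statistic only says that some group differs; identifying which group (here, the Myanmar learners) requires a post-hoc pairwise comparison such as Tukey's HSD.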

Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans

Procedia PDF Downloads 99
1407 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform

Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park

Abstract:

Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structure, bearings, and motors called a gimbal, in which a gyroscope is installed. EO elements, such as a CCD camera and an IR camera, are mounted to the gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, a versatile target localization functionality has been needed in order to cooperate with the weapon systems mounted on the same platform. The target information, such as location or velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and the alignment errors of each component. In particular, the type of moving platform can affect the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the elevation angle of the gimbal, since the altitude of the EO stabilized platform is relatively low. The farther the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle, which can hamper the precise acquisition of the target information. So far, there have been many studies on EO stabilized platforms for flying vehicles. However, few researchers have focused on ship-borne EO stabilized platforms for surface ships. In this paper, we deal with a target localization method for an EO stabilized platform located on the mast of a surface ship.
In particular, we need to overcome the limitation caused by the obliqueness of the elevation angle of the gimbal. We introduce a well-known approach to target localization using the Unscented Kalman Filter (UKF) and present the problem definition showing the above-mentioned limitation. Finally, we show the effectiveness of the approach through computer simulations.
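The UKF's central step, propagating an uncertain target position through the nonlinear range/angle measurement, can be sketched with a generic unscented transform (Merwe scaled sigma points; the mast height and position values are illustrative, not the authors' setup):

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear map f via sigma points."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)        # matrix square root
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])          # push each point through f
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# example: slant range and depression angle seen from a 20 m mast
def measure(p):                      # p = (x, y) target position at sea level
    dx, dy, h = p[0], p[1], 20.0
    r = np.sqrt(dx**2 + dy**2 + h**2)
    return np.array([r, np.arcsin(h / r)])         # range, depression angle

m, P = unscented_transform(np.array([3000.0, 400.0]),
                           np.diag([100.0, 100.0]), measure)
```

Because the measurement is pushed through the sigma points directly, no Jacobian of the range/angle geometry is needed, which is the practical advantage of the UKF over an extended Kalman filter for this problem.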

Keywords: target localization, ship-borne electro-optical stabilized platform, unscented kalman filter

Procedia PDF Downloads 509
1406 Improving Functionality of Radiotherapy Department Through: Systemic Periodic Clinical Audits

Authors: Kamal Kaushik, Trisha, Dandapni, Sambit Nanda, A. Mukherjee, S. Pradhan

Abstract:

INTRODUCTION: As complexity in radiotherapy practices and processes increases, there is a need to assure quality control to a greater extent. At present, no international literature is available with regard to optimal quality control indicators for radiotherapy; moreover, few clinical audits have been conducted in the field of radiotherapy. The primary aim is to improve the processes that directly impact clinical outcomes for patients in terms of patient safety and quality of care. PROCEDURE: A team of an oncologist, a medical physicist, and a radiation therapist was formed for weekly clinical audits of patients undergoing radiotherapy. The stages for audits include pre-planning, simulation, planning, daily QA, and implementation and execution (with image guidance). Errors in all parts of the chain were evaluated and recorded for the development of further departmental protocols for radiotherapy. EVALUATION: Errors at various stages of the radiotherapy chain were evaluated and recorded for comparison before and after starting the clinical audits in the department of radiotherapy. It was also determined at which stage the maximum number of errors was recorded. The clinical audits were used to structure standard protocols (in the form of checklists) in the department of radiotherapy, which may further reduce the occurrence of clinical errors in the radiotherapy chain. RESULTS: The aim of this study is to compare the number of errors in different parts of the RT chain between two groups (A: before audit; B: after audit). Group A: 94 patients (48 males, 46 females), total number of errors in the RT chain: 19 (9 needed re-simulation). Group B: 94 patients (61 males, 33 females), total number of errors in the RT chain: 8 (4 needed re-simulation). CONCLUSION: After systematic periodic clinical audits, the percentage of errors in the radiotherapy process was reduced by more than 50% within 2 months.
There is a great need to improve quality control in radiotherapy, and the role of clinical audits can only grow. Although clinical audits are time-consuming and complex undertakings, the potential benefits in terms of identifying and rectifying errors in quality control procedures are enormous. Radiotherapy is a chain of processes, and there is always a probability that an error in any part of the chain will propagate through the chain up to the execution of treatment. Structuring departmental protocols and policies helps in reducing, if not completely eradicating, the occurrence of such incidents.
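The before/after comparison reported above (19 errors in 94 patients versus 8 in 94) can be checked for statistical significance with a standard contingency-table test; a sketch on the published counts (the test choice is ours, not the authors'):

```python
from scipy.stats import fisher_exact

# rows: before audit, after audit; columns: error, no error
table = [[19, 94 - 19],
         [ 8, 94 -  8]]

odds, p = fisher_exact(table)   # exact test suits these modest counts
before, after = 19 / 94, 8 / 94
print(f"error rate {before:.1%} -> {after:.1%}, p = {p:.3f}")
```

This kind of quick significance check helps distinguish a genuine audit effect from month-to-month fluctuation before departmental protocols are revised.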

Keywords: audit, clinical, radiotherapy, improving functionality

Procedia PDF Downloads 83
1405 Study for Utilization of Industrial Solid Waste, Generated by the Discharge of Casting Sand Agglomeration with Clay, Blast Furnace Slag and Sugar Cane Bagasse Ash in Concrete Composition

Authors: Mario Sergio de Andrade Zago, Javier Mazariegos Pablos, Eduvaldo Paulo Sichieri

Abstract:

This research project carried out a study on the technical feasibility of recycling the industrial solid waste generated by the discharge of clay-agglomerated casting sand, blast furnace slag, and sugar cane bagasse ash. For this, the proposed methodology initially establishes a process of solid waste encapsulation using the solidification/stabilization technique on Portland cement matrices, in which the residues act as fine and coarse aggregates in the composition of the concrete; it then presents the possibility of using this concrete in the manufacture of concrete pieces (concrete blocks) for paving. The results obtained in this research achieved the stated objective with great success regarding the manufacturing of concrete pieces (blocks) for paving urban roads wherever there is special vehicle traffic or demands capable of producing accentuated abrasion effects (surpassing the 50 MPa required by the regulation), which proves the technical practicability of using the waste from casting sand agglomerated with clay and blast furnace slag used in this study, opening up usage possibilities for construction.

Keywords: industrial solid waste, solidification/stabilization, Portland cement, reuse, bagasse ash in the sugar cane, concrete

Procedia PDF Downloads 298
1404 Typology of Customers in Fitness Centres

Authors: Josef Voracek, Jan Sima

Abstract:

The main purpose of our study is to identify the basic types of fitness customers. This paper aims to create a specific customer typology for today’s fitness centres in the region of Prague. Our suggested typology of Prague fitness centre customers is based on answers to the questions: What are the customers like, what are their preferences, and what kinds of services do they use most often in Prague fitness centres? These are the main aspects of the presented typology. A survey was conducted on a sample of 1004 respondents from 48 fitness centres during May 2012. We used questionnaires and latent class analysis (LCA) for the assessment and interpretation of data. Gender was the main filter criterion. In the sample, there were 522 males and 482 females. Data were analysed using the LCA method. We identified 6 segments of typical customers, of which three are male and three are female. Each segment is influenced primarily by the age of the customers, from which we can derive further characteristics, such as education, income, marital status, etc. Male segments use mainly the main workout area, whilst female segments use a much wider range of the services offered, for example, group exercises, personal training, and cardio theatres. The LCA method was found to be the most suitable tool, because cluster analysis is very limited in the forms and numbers of variables and indicators. Models of 3 latent classes for each gender are optimal, as demonstrated by the entropy indices and the matrices of the likelihood of membership in the classes. A probable weak point of the survey is the selection of fitness centres, because the market in Prague is really specific.

Keywords: customer, fitness, latent class analysis, typology

Procedia PDF Downloads 210
1403 Design of a Fuzzy Luenberger Observer for Fault Nonlinear System

Authors: Mounir Bekaik, Messaoud Ramdani

Abstract:

We present in this work a new stabilization technique for faulty nonlinear systems. The approach we adopt focuses on a fuzzy Luenberger observer. The T-S approximation of the nonlinear observer is based on the fuzzy C-means clustering algorithm to find local linear subsystems. The MOESP identification approach was applied to design an empirical model describing the subsystems' state variables. The gain of the observer is given by the minimization of the estimation error through a Lyapunov-Krasovskii functional and an LMI approach. We consider a three-tank hydraulic system as an illustrative example.
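The fuzzy C-means step used above to find local linear subsystems can be sketched in a few lines of NumPy (a minimal textbook FCM on synthetic data, not the authors' identification pipeline):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means: returns cluster centres and the
    membership matrix U (n_samples x c); each row of U sums to 1."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy partition
    for _ in range(iters):
        W = U ** m                             # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        # squared distances from every sample to every centre
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))      # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# two well-separated blobs: memberships should become nearly crisp
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centres, U = fuzzy_c_means(X, c=2)
```

In a T-S observer design, the membership degrees in U would then serve as the activation weights blending the local linear models identified for each cluster.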

Keywords: nonlinear system, fuzzy, faults, TS, Lyapunov-Krasovskii, observer

Procedia PDF Downloads 322
1402 BER Estimate of WCDMA Systems with MATLAB Simulation Model

Authors: Suyeb Ahmed Khan, Mahmood Mian

Abstract:

Simulation plays an important role during all phases of the design and engineering of communication systems, from the early stages of conceptual design through the various stages of implementation, testing, and fielding of the system. In the present paper, a simulation model has been constructed for the WCDMA system in order to evaluate its performance. This model describes multi-user effects and the calculation of the BER (Bit Error Rate) in 3G mobile systems using Simulink MATLAB 7.1. The Gaussian approximation defines the multi-user effect on system performance. The BER has been analyzed by comparing the transmitted and received data.
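The BER-estimation idea of comparing transmitted and received data can be sketched for the single-user BPSK-over-AWGN baseline (a simplification of the multi-user WCDMA model above, checked against the analytic BER):

```python
import numpy as np
from math import erfc, sqrt

def ber_bpsk(ebn0_db, n_bits=200_000, seed=0):
    """Monte-Carlo BER of BPSK over AWGN, alongside the analytic value."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                        # map 0/1 -> -1/+1
    ebn0 = 10 ** (ebn0_db / 10)
    noise = rng.standard_normal(n_bits) / sqrt(2 * ebn0)
    received = (symbols + noise) > 0              # hard decision
    simulated = np.mean(received != bits)         # compare tx vs rx bits
    theory = 0.5 * erfc(sqrt(ebn0))               # Q(sqrt(2 Eb/N0))
    return simulated, theory

sim, th = ber_bpsk(4.0)
print(f"Eb/N0 = 4 dB: simulated {sim:.4f}, theory {th:.4f}")
```

In a multi-user CDMA setting, the Gaussian approximation treats the summed multiple-access interference as extra AWGN, so the same Monte-Carlo structure applies with an inflated noise variance.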

Keywords: WCDMA, simulations, BER, MATLAB

Procedia PDF Downloads 585
1401 Study of Adaptive Filtering Algorithms and the Equalization of Radio Mobile Channel

Authors: Said Elkassimi, Said Safi, B. Manaut

Abstract:

This paper presents a study of three algorithms: an equalization algorithm to equalize the transmission channel with the ZF and MMSE criteria, applied to the Bran A channel, and the adaptive filtering algorithms LMS and RLS to estimate the parameters of the equalizer filter, i.e., to track the channel estimate and thereby reflect the temporal variations of the channel and reduce the error in the transmitted signal. We compare the performance of the equalizer with the ZF and MMSE criteria in the noiseless case, as well as the performance of the LMS and RLS algorithms.
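The LMS update used to adapt the equalizer taps can be sketched on a toy ISI channel (illustrative channel taps and step size, not the Bran A model):

```python
import numpy as np

def lms_equalizer(received, desired, n_taps=11, mu=0.01):
    """LMS adaptive FIR equalizer: returns tap weights and error history."""
    w = np.zeros(n_taps)
    errors = []
    for k in range(n_taps, len(received)):
        x = received[k - n_taps:k][::-1]   # most recent sample first
        y = w @ x                          # equalizer output
        e = desired[k] - y                 # error against training symbol
        w += mu * e * x                    # stochastic-gradient update
        errors.append(e)
    return w, np.array(errors)

# toy setup: BPSK symbols through a 3-tap ISI channel plus mild noise
rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], 5000)
channel = np.array([1.0, 0.4, 0.2])
rx = np.convolve(sym, channel)[:len(sym)] + 0.01 * rng.standard_normal(5000)
delay = 5                                  # equalizer decision delay
desired = np.r_[np.zeros(delay), sym[:len(sym) - delay]]
w, err = lms_equalizer(rx, desired)
```

RLS would replace the scalar step `mu` with a recursively updated inverse correlation matrix, converging in far fewer symbols at a higher per-sample cost.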

Keywords: adaptive filtering, equalizer, LMS, RLS, Bran A, MMSE, ZF

Procedia PDF Downloads 308
1400 The Verification Study of Computational Fluid Dynamics Model of the Aircraft Piston Engine

Authors: Lukasz Grabowski, Konrad Pietrykowski, Michal Bialy

Abstract:

This paper presents the results of research to verify the combustion model of the Asz62-IR aircraft piston engine. This engine was modernized, and a new type of ignition system was developed. Due to the high cost of experiments on a nine-cylinder 1,000 hp aircraft engine, a simulation technique should be applied. Therefore, computational fluid dynamics to simulate the combustion process is a reasonable solution. Accordingly, tests for varied ignition advance angles were carried out, and the optimal value to be tested on a real engine was specified. The CFD model was created with the AVL Fire software. The engine in this research had two spark plugs for each cylinder, and the ignition advance angles had to be set up separately for each spark. The results of the simulation were verified by comparing the pressure in the cylinder. The courses of the indicated pressure of the engine mounted on a test stand were compared. The real course of pressure was measured with an optical sensor mounted in a specially drilled hole between the valves. It was the OPTRAND pressure sensor, which was designed especially for engine combustion research. The indicated pressure was measured in cylinder no. 3. The engine was running at take-off power and was loaded by a propeller on a special test bench. The verification of the CFD simulation results was based on the results of the test bench studies. The course of the simulated pressure obtained is within the measurement error of the optical sensor. This error is 1% and reflects the hysteresis and nonlinearity of the sensor. The real indicated pressure measured in the cylinder and the pressure taken from the simulation were compared. It can be claimed that the verification of the CFD simulations based on the pressure is a success. The next step was research on the impact of changing the ignition advance timing of spark plugs 1 and 2 on the combustion process.
Offsetting the ignition timing between spark plugs 1 and 2 results in longer and uneven burning of the mixture. The most optimal point in terms of indicated power occurs when ignition is simultaneous for both spark plugs, but slightly separated ignitions are used to ensure that ignition will occur at all engine speeds and loads. This should be confirmed by a bench experiment on the engine. However, this simulation research enabled us to determine the optimal ignition advance angle to be implemented in the ignition control system. This knowledge allows us to set the ignition point with two spark plugs to achieve as much power as possible.

Keywords: CFD model, combustion, engine, simulation

Procedia PDF Downloads 357
1399 Investigation of the Flow in Impeller Sidewall Gap of a Centrifugal Pump Using CFD

Authors: Mohammadreza DaqiqShirazi, Rouhollah Torabi, Alireza Riasi, Ahmad Nourbakhsh

Abstract:

In this paper, the flow in the sidewall gap of an impeller belonging to a centrifugal pump is studied using a numerical method. The flow in the sidewall gap forms internal leakage and is the source of the “disk friction loss”, which is the most important cause of reduced efficiency in low specific speed centrifugal pumps. The simulation is done using CFX software and a high-quality mesh; therefore, the modeling error has been reduced. The Navier-Stokes equations have been solved for this domain. In order to predict the turbulence effects, the SST model has been employed.

Keywords: numerical study, centrifugal pumps, disk friction loss, sidewall gap

Procedia PDF Downloads 520