Search results for: feeding systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9610

6250 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramón y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and to participate actively in computations. Nevertheless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning.
We use spike-timing-dependent plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are then allowed to fire in a particular order. The membrane potentials are reset, and subsequently, the five input neurons fire in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as it is possible to learn synaptic strengths with STDP to make a neuron more sensitive to its input, it is possible to learn dendritic relationships with STDP to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
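The forward-versus-reversed experiment described above can be sketched in a few lines of Python. This is a toy reconstruction under assumed parameters (unit weights, a relation matrix in which each synapse potentiates its successor, exponentially decaying activity traces), not the Bindsnet implementation:

```python
import numpy as np

def lif_response(spike_times, weights, relation=None,
                 tau_m=20.0, tau_r=10.0, dt=1.0, T=100):
    """Peak membrane potential of a LIF soma (no threshold/reset, for illustration).

    spike_times: {synapse_index: time_step}, one spike per synapse.
    relation:    optional (n, n) matrix; R[i, j] scales synapse j's impact
                 according to how recently synapse i fired (the pairwise
                 'synaptic relation' variable of the abstract)."""
    n = len(weights)
    v, peak = 0.0, 0.0
    trace = np.zeros(n)                      # decaying activity trace per synapse
    for t in range(T):
        v *= np.exp(-dt / tau_m)             # leaky membrane decay
        trace *= np.exp(-dt / tau_r)         # short-term trace decay
        for j, ts in spike_times.items():
            if ts == t:
                w = weights[j]
                if relation is not None:
                    # short-term modulation by neighbouring synapses' activity
                    w *= 1.0 + relation[:, j] @ trace
                v += w
                trace[j] += 1.0
        peak = max(peak, v)
    return peak

weights = np.ones(5)
forward = {i: 5 * i for i in range(5)}         # synapses fire in order 0..4
backward = {i: 5 * (4 - i) for i in range(5)}  # reversed order

plain_f = lif_response(forward, weights)       # regular LIF: order-blind
plain_b = lif_response(backward, weights)

R = np.zeros((5, 5))                           # synapse i potentiates synapse i+1
for i in range(4):
    R[i, i + 1] = 0.5
dend_f = lif_response(forward, weights, relation=R)
dend_b = lif_response(backward, weights, relation=R)
```

With equal weights, the plain neuron's response is identical for both orders, while the dendritic neuron responds more strongly to the forward sequence, which is the discrimination effect the abstract describes.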

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 111
6249 Some Results on Cluster Synchronization

Authors: Shahed Vahedi, Mohd Salmi Md Noorani

Abstract:

This paper investigates cluster synchronization phenomena between community networks. We focus on the situation where a variety of dynamics occur in the clusters. In particular, we show that different synchronization states simultaneously occur between the networks. The controller is designed having an adaptive control gain, and theoretical results are derived via Lyapunov stability. Simulations on well-known dynamical systems are provided to elucidate our results.
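A minimal, single-pair illustration of the adaptive-gain idea can be sketched on the Lorenz system (a well-known dynamical system) with Euler integration. The gain law k' = γ‖e‖², the feedback term -k·e, and all parameter values are assumptions for illustration, not the paper's controller:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Vector field of the Lorenz system, a well-known chaotic benchmark."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, gamma = 0.001, 2.0
drive = np.array([1.0, 1.0, 1.0])       # drive node
resp = np.array([5.0, -3.0, 10.0])      # response node, initially far away
k = 0.0                                 # adaptive control gain
err0 = np.linalg.norm(resp - drive)

for _ in range(50_000):                 # 50 time units of Euler integration
    e = resp - drive
    k += gamma * float(e @ e) * dt      # gain grows while the error is large
    drive = drive + lorenz(drive) * dt
    resp = resp + (lorenz(resp) - k * e) * dt  # feedback -k*e drives e to 0

err1 = np.linalg.norm(resp - drive)
```

The gain rises automatically until the synchronization error collapses, which is the qualitative behavior a Lyapunov argument of the kind cited in the abstract guarantees.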

Keywords: cluster synchronization, adaptive control, community network, simulation

Procedia PDF Downloads 456
6248 A User Interface for Easiest Way Image Encryption with Chaos

Authors: D. López-Mancilla, J. M. Roblero-Villa

Abstract:

Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 1990s as a new approach for signal encoding that differs from conventional methods, which use numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need for secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of low efficiency when the image is large. Encryption based on chaos offers a new and efficient way to obtain fast and highly secure image encryption. In this work, a user interface for image encryption and a novel, simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector is hidden within the chaotic vector. Once this is done, the result is reshaped back into an array with the original dimensions of the image. The security of the encrypted images is analysed statistically, and an optimization stage is used to improve the security of the encryption while ensuring, at the same time, that the image can be accurately recovered. The user interface uses the algorithms designed for image encryption, allowing the user to read an image from the hard drive or another external device. The interface encrypts the image in one of three encryption modes, given by three different chaotic systems that the user can choose. Once the image is encrypted, it is possible to view the security analysis and save the result to the hard disk.
The main results of this study show that this simple encryption method, using the optimization stage, achieves a level of security competitive with the more complicated encryption methods used in other works. In addition, the user interface allows the user to encrypt an image with chaos and to transmit it through any public communication channel, including the internet.
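As a hedged sketch of the "hide the image vector inside a chaotic vector" idea, the toy below flattens an image and XORs it with a keystream from the logistic map. The logistic map and the XOR combination are stand-ins chosen for illustration; the paper offers three selectable chaotic systems and its own combination and optimization stages:

```python
import numpy as np

def logistic_keystream(n, x0=0.631, r=3.99):
    """n chaotic bytes from the logistic map x -> r*x*(1-x); x0 is the key."""
    x = x0
    for _ in range(200):                 # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(image, x0=0.631):
    flat = image.reshape(-1)                   # reshape image into a 1-D vector
    key = logistic_keystream(flat.size, x0)    # chaotic vector
    return (flat ^ key).reshape(image.shape)   # hide image within chaotic vector

decrypt = encrypt   # XOR with the same chaotic key is its own inverse

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 "image"
cipher = encrypt(img)
recovered = decrypt(cipher)
```

The cipher image looks nothing like the original, yet the exact image is recovered whenever the same initial condition (the key) is used.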

Keywords: image encryption, chaos, secure communications, user interface

Procedia PDF Downloads 469
6247 Combining in vitro Protein Expression with AlphaLISA Technology to Study Protein-Protein Interaction

Authors: Shayli Varasteh Moradi, Wayne A. Johnston, Dejan Gagoski, Kirill Alexandrov

Abstract:

The demand for a rapid and more efficient technique to identify protein-protein interactions, particularly in the areas of therapeutics and diagnostics development, is growing. The method described here is a rapid in vitro protein-protein interaction analysis approach based on AlphaLISA technology combined with the Leishmania tarentolae cell-free protein production (LTE) system. Cell-free protein synthesis allows the rapid production of recombinant proteins in a multiplexed format. Among available in vitro expression systems, LTE offers several advantages over other eukaryotic cell-free systems: it is based on a fast-growing fermentable organism that is inexpensive to cultivate and to produce lysate from. The high integrity of the proteins produced in this system and the ability to co-express multiple proteins make it a desirable method for screening protein interactions. Following the translation of protein pairs in the LTE system, the physical interaction between the proteins of interest is analysed by an AlphaLISA assay. The assay is performed using the unpurified in vitro translation reaction and can therefore be readily multiplexed. This approach can be used in various research applications, such as epitope mapping, antigen-antibody analysis, and protein interaction network mapping. The intra-viral protein interaction network of the Zika virus was studied using the developed technique. The viral proteins were co-expressed pairwise in LTE, and all possible interactions among the viral proteins were tested using AlphaLISA. The assay resulted in the identification of 54 intra-viral protein-protein interactions, of which 19 binary interactions were found to be novel. The presented technique provides a powerful tool for the rapid analysis of protein-protein interactions with high sensitivity and throughput.

Keywords: AlphaLISA technology, cell-free protein expression, epitope mapping, Leishmania tarentolae, protein-protein interaction

Procedia PDF Downloads 220
6246 Performance Monitoring and Environmental Impact Analysis of a Photovoltaic Power Plant: A Numerical Modeling Approach

Authors: Zahzouh Zoubir

Abstract:

The widespread adoption of photovoltaic panel systems for global electricity generation is a prominent trend. Algeria, demonstrating a steadfast commitment to strategic development and innovative projects for harnessing solar energy, emerges as a pioneering force in the field. Heat and radiation, being fundamental factors in any solar system, are currently subject to comprehensive studies aiming to discern their genuine impact on crucial elements within photovoltaic systems. This endeavor is particularly pertinent given that solar module performance is exclusively assessed under meticulously defined Standard Test Conditions (STC). Nevertheless, when deployed outdoors, solar modules exhibit efficiencies distinct from those observed under STC due to the influence of diverse environmental factors. This discrepancy introduces ambiguity in performance determination, especially beyond test conditions. This article centers on the performance monitoring of an Algerian photovoltaic project, the Oued El Keberite Power (OKP) plant, boasting a 15-megawatt capacity and situated in the town of Souk Ahras in eastern Algeria. The study elucidates the behavior of a subfield within this facility throughout the year, encompassing various conditions beyond the STC framework. To ensure the optimal efficiency of the solar panels, this study integrates crucial factors, drawing on an authentic technical sheet from the measurement station of the OKP photovoltaic plant. Numerical modeling and simulation of a subfield of the photovoltaic station were conducted using MATLAB Simulink. The findings underscore how radiation intensity and temperature, whether low or high, impact the short-circuit current, open-circuit voltage, fill factor, and overall efficiency of the photovoltaic system.
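The qualitative temperature and irradiance effects on short-circuit current and open-circuit voltage can be sketched with a linearized module model. All coefficients below are assumed illustrative values, not the OKP datasheet, and the constant fill factor is a simplification of the Simulink model:

```python
# toy module parameters (assumed, not from the OKP technical sheet)
ISC_STC, VOC_STC = 8.9, 37.8     # A, V at STC (1000 W/m^2, 25 C)
KI, KV = 0.0005, -0.0032         # current/voltage temperature coefficients (1/C)
FF = 0.75                        # fill factor, held constant for illustration

def module_output(G, T):
    """Short-circuit current (A), open-circuit voltage (V) and power (W)
    for irradiance G (W/m^2) and cell temperature T (C), linearized around STC."""
    isc = ISC_STC * (G / 1000.0) * (1.0 + KI * (T - 25.0))
    voc = VOC_STC * (1.0 + KV * (T - 25.0))
    return isc, voc, FF * isc * voc

stc = module_output(1000.0, 25.0)    # reference conditions
hot = module_output(1000.0, 60.0)    # hot summer operating temperature
dim = module_output(400.0, 25.0)     # low-irradiance morning/overcast case
```

Even this crude sketch reproduces the trends the study reports: heating the cell depresses the open-circuit voltage (and hence power), while reduced irradiance mainly depresses the short-circuit current.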

Keywords: performance monitoring, photovoltaic system, numerical modeling, radiation intensity

Procedia PDF Downloads 50
6245 A Modular Reactor for Thermochemical Energy Storage Examination of Ettringite-Based Materials

Authors: B. Chen, F. Kuznik, M. Horgnies, K. Johannes, V. Morin, E. Gengembre

Abstract:

Renewable energy has received more attention since the adoption of the Paris Agreement against climate change. Solar-based technology is considered one of the most promising green energy technologies for residential buildings, given its wide thermal use for hot water and heating. However, the seasonal mismatch between its production and consumption means that buildings need an energy storage system to improve the efficiency of renewable energy use. Different kinds of energy storage systems already exist, using sensible or latent heat. Given the energy dissipation during storage and the low energy density of these two methods, thermochemical energy storage is recommended instead. Recently, ettringite (3CaO∙Al₂O₃∙3CaSO₄∙32H₂O) based materials have been reported as potential thermochemical storage materials because of their high energy density (~500 kWh/m³), low material cost (700 €/m³) and low storage temperature (~60-70°C), compared to reported salt hydrates like SrBr₂·6H₂O (42 k€/m³, ~80°C), LaCl₃·7H₂O (38 k€/m³, ~100°C) and MgSO₄·7H₂O (5 k€/m³, ~150°C). They could therefore be widely used in the building sector, coupled to standard solar panel systems. On the other hand, the lack of extensive examination leads to poor knowledge of their thermal properties and limits the maturity of this technology. The aim of this work is to develop a modular reactor adapted to the thermal characterization of ettringite-based material particles of different sizes. The materials filled into the reactor can be self-compacted vertically to ensure that hot or humid air passes through homogeneously. Additionally, quick assembly and modification of the reactor, like LEGO™ plastic blocks, make it suitable for distinct thermochemical energy storage material samples of different weights (from a few grams to several kilograms).
In our case, the quantity of stored and released energy, the best working conditions, and even the chemical durability of the ettringite-based materials have been investigated.

Keywords: dehydration, ettringite, hydration, modular reactor, thermochemical energy storage

Procedia PDF Downloads 114
6244 Proposal for a Generic Context Meta-Model

Authors: Jaouadi Imen, Ben Djemaa Raoudha, Ben Abdallah Hanene

Abstract:

Access to relevant information that is adapted to users' needs, preferences, and environment is a challenge in many running applications, which has led to the emergence of context-aware systems. To facilitate the development of this class of applications, it is necessary that these applications share a common context meta-model. In this article, we present our context meta-model, which is defined using the OMG Meta Object Facility (MOF). This meta-model is based on the analysis and synthesis of the context concepts proposed in the literature.

Keywords: context, meta-model, MOF, awareness system

Procedia PDF Downloads 541
6243 Biosorption as an Efficient Technology for the Removal of Phosphate, Nitrate and Sulphate Anions in Industrial Wastewater

Authors: Angel Villabona-Ortíz, Candelaria Tejada-Tovar, Andrea Viera-Devoz

Abstract:

Wastewater treatment is an issue of vital importance in these times, when the impacts of human activities are most evident; it has become an essential task for the normal functioning of society, since those impacts put entire ecosystems at risk and, over time, destroy the possibility of sustainable development. Various conventional technologies are used to remove pollutants from water. Agro-industrial waste is a product with the potential to be used as a renewable raw material for the production of energy and chemical products, and its use is beneficial since products with added value are generated from materials that were not used before. Considering the benefits that the use of residual biomass brings, this project proposes the use of agro-industrial residues from corn crops for the production of natural adsorbents aimed at the remediation of water bodies contaminated with large loads of nutrients. The adsorption capacity of two biomaterials obtained from the processing of corn stalks was evaluated in batch tests. A biochar impregnated with sulfuric acid and thermally activated was synthesized. On the other hand, cellulose was extracted from the corn stalks and chemically modified with cetyltrimethylammonium chloride in order to quaternize the surface of the adsorbent. The adsorbents obtained were characterized by thermogravimetric analysis (TGA), scanning electron microscopy (SEM), Fourier-transform infrared spectrometry (FTIR), Brunauer-Emmett-Teller (BET) analysis, and X-ray diffraction (XRD) analysis, which showed favorable characteristics for the cellulose extraction process. Higher adsorption capacities for the nutrients were obtained with the biochar, with phosphate being the anion with the best removal percentages. The effect of the initial adsorbate concentration was evaluated, showing that the Freundlich isotherm better describes the adsorption process in most systems.
The adsorbent-phosphate/nitrate systems fit the pseudo-first-order kinetic model better, while the adsorbent-sulfate systems showed a better fit to the pseudo-second-order model, which indicates that both physical and chemical interactions occur in the process. Multicomponent adsorption tests revealed that phosphate anions have a higher affinity for both adsorbents. On the other hand, the negative values of the thermodynamic parameters standard enthalpy (ΔH°) and standard entropy (ΔS°), together with the values of the standard Gibbs free energy (ΔG°), indicate that the adsorption of the anions on biochar and modified cellulose is spontaneous and exothermic. The use of the evaluated biomaterials is recommended for the treatment of industrial effluents contaminated with sulfate, nitrate, and phosphate anions.
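The isotherm and kinetic fits named above can be sketched with the standard linearized regressions. The data points below are invented for illustration and are not the study's measurements:

```python
import numpy as np

# hypothetical equilibrium data: Ce (mg/L), qe (mg/g)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([8.1, 11.9, 17.2, 25.3, 36.8])

# Freundlich isotherm: qe = Kf * Ce**(1/n)  ->  ln qe = ln Kf + (1/n) ln Ce
slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf, n = np.exp(intercept), 1.0 / slope      # 1/n < 1 means favorable adsorption

# pseudo-second-order kinetics, linearized: t/qt = 1/(k2*qe^2) + t/qe
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0])   # min
qt = np.array([4.9, 7.8, 11.0, 13.6, 14.5])   # mg/g
s2, i2 = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / s2                  # equilibrium capacity from the slope
k2 = s2 ** 2 / i2                  # rate constant from slope and intercept
```

The same two regressions, applied to each adsorbent-anion pair, are what allow statements like "the Freundlich isotherm describes most systems" or "the sulfate systems follow pseudo-second-order kinetics" to be made quantitatively.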

Keywords: adsorption, biochar, modified cellulose, corn stalks

Procedia PDF Downloads 163
6242 Conceptualizing Clashing Values in the Field of Media Ethics

Authors: Saadia Izzeldin Malik

Abstract:

Lack of ethics is the crisis of the 21st century. Today's global world is filled with economic, political, environmental, media/communication, and social crises that are all generated by the eroding fabric of ethics and moral values that guide humans' decisions in all aspects of life. Our global world is guided by liberal western democratic principles and liberal capitalist economic principles that define and reinforce each other. In economic terms, capitalism has turned world economic systems into one marketplace of ideas and products controlled by big multinational corporations that not only determine the conditions and terms of commodity production and commodity exchange between countries but also transform the political economy of media systems around the globe. The citizen (read: the consumer) today is the target of persuasion by all types of media at a time when her/his interests should be, ethically and in principle, the basic significant factor in the selection of media content. At this juncture of clashing media values, professional and commercial, and widespread ethical lapses of media organizations and media professionals, it is very important to think of a perspective from which to theorize these conflicting values within a broader framework of media ethics. Thus, the aim of this paper is to, epistemologically, bring to the center a perspective on media ethics as a basis for the reconciliation of the clashing values of the media. The paper focuses on conflicting ethical values in the current media debate, namely ownership of media vs. press freedom, the individual right to privacy vs. the public's right to know, and global western consumerist values vs. media values. The paper concludes that a framework to reconcile the conflicting values of media ethics should focus on the "individual" journalist and his/her moral development, as well as on maintaining the ethical principles of the media as an institution with a primary social responsibility to the "public" it serves.

Keywords: ethics, media, journalism, social responsibility, conflicting values, global

Procedia PDF Downloads 462
6241 21st Century Biotechnological Research and Development Advancements for Industrial Development in India

Authors: Monisha Isaac

Abstract:

Biotechnology is a discipline concerned with the use of living organisms and systems to construct a product; it can also be defined as an application or technology developed to use biological systems and organisms' processes for a specific purpose. In particular, it includes the use of cells and their components for new technologies and inventions. The tools developed can be further used in diverse fields such as agriculture, industry, research, and hospitals. The 21st century has seen a drastic development and advancement of biotechnology in India. A significant increase in the Government of India's outlays for biotechnology over the past decade has been observed. A sectoral break-up of biotechnology-based companies in India shows that most are agriculture-based companies, with interests ranging from tissue culture to biopesticides. Major attention has been given by companies to health-related activities and to environmental biotechnology. The biopharmaceutical segment, which comprises vaccines, diagnostics, and recombinant products, is the most reliable and largest segment of the Indian biotech industry. India has developed its vaccine markets and supplies vaccines to various countries. Then there are the bio-services, which mainly comprise contract research and manufacturing services. India has made noticeable developments in the field of bio-industries, including the manufacturing of enzymes, biofuels, and biopolymers. Biotechnology is also playing a crucial and significant role in the field of agriculture, where traditional methods have been replaced by new technologies that mainly focus on GM crops, marker-assisted technologies, and the use of biotechnological tools to improve the quality of fertilizers and soil. It may only be a small contributor, but it has shown huge potential for growth.
Bioinformatics is a computational discipline that helps to store, manage, and arrange the extensive data gathered through experimental trials, and to design tools to interpret them, making it important in the design of drugs.

Keywords: biotechnology, advancement, agriculture, bio-services, bio-industries, bio-pharmaceuticals

Procedia PDF Downloads 213
6240 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
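Although dynr.mi() is an R routine, the pooling step at the heart of MI (Rubin's rules, as implemented by MICE-style workflows) is language-agnostic and can be sketched in Python with toy numbers; the estimates below are invented, not results from the study:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool one parameter's estimates from m imputed data sets via Rubin's rules.

    estimates: length-m point estimates of the parameter
    variances: length-m squared standard errors of those estimates
    Returns the pooled estimate and its pooled standard error."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = est.size
    qbar = est.mean()                    # pooled point estimate
    w = var.mean()                       # within-imputation variance
    b = est.var(ddof=1)                  # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b          # total variance
    return qbar, np.sqrt(t)

# five imputed data sets, one regression coefficient each (toy numbers)
qbar, se = pool_rubin([0.52, 0.48, 0.55, 0.50, 0.47],
                      [0.010, 0.012, 0.011, 0.010, 0.013])
```

The between-imputation term inflates the pooled standard error relative to any single imputed data set, which is exactly why MI and listwise deletion can disagree about the statistical significance of covariate parameters, as the abstract reports.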

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 153
6239 Role of Microplastics on Reducing Heavy Metal Pollution from Wastewater

Authors: Derin Ureten

Abstract:

Plastic pollution does not disappear; it gets smaller and smaller through photolysis (caused mainly by the sun's radiation), thermal oxidation, thermal degradation, and biodegradation, the action of organisms digesting larger plastics. All plastic pollutants have exceedingly harmful effects on the environment, and with the COVID-19 pandemic, the number of plastic products such as masks and gloves flowing into the environment has increased more than ever. However, microplastics are not the only pollutants in water; among the most tenacious and toxic pollutants are heavy metals. Heavy metals in solution are capable of causing a variety of health problems in organisms, such as cancer, organ damage, nervous system damage, and even death. The aim of this research is to show that microplastics can be used in wastewater treatment systems by proving that they can adsorb heavy metals in solution. The experiment for this research will include two solutions: one containing microplastics in heavy-metal-contaminated water, and one containing only the heavy metal solution. After being sieved, the absorbance of both media will be measured with the help of a spectrometer. Iron(III) chloride (FeCl3) will be used as the heavy metal solution, since the solution becomes darker as the concentration of this substance increases. The experiment will be supported by pure Nile Red powder, a chemical that binds to hydrophobic materials such as plastics and lipids, in order to observe whether there are any visible differences under the microscope. If proof of adsorption can be observed from the solutions' final absorbance readings and the visuals ensured by the Nile Red powder, the experiment will be conducted at different temperature levels in order to determine the most suitable temperature for the removal of heavy metals from water.
In this way, new wastewater treatment systems for water contaminated with heavy metals could be developed with the help of microplastics.
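The planned absorbance comparison can be sketched as a simple calculation, assuming the Beer-Lambert proportionality between absorbance and Fe(III) concentration at a fixed path length; the readings below are hypothetical, not measured values:

```python
# Beer-Lambert: A = epsilon * l * c, so for one species (Fe(III) here) at a
# fixed path length, concentration is proportional to absorbance.
def removal_efficiency(a_control, a_treated):
    """Percent heavy-metal removal inferred from absorbance readings of the
    control (no microplastics) and microplastic-treated solutions."""
    if a_control <= 0:
        raise ValueError("control absorbance must be positive")
    return 100.0 * (a_control - a_treated) / a_control

# hypothetical readings: the treated solution is lighter, i.e. less Fe(III) left
eff = removal_efficiency(a_control=0.82, a_treated=0.55)
```

Repeating this calculation at each temperature level would give the removal-versus-temperature curve the study proposes to use for choosing the best operating temperature.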

Keywords: microplastics, heavy metal, pollution, adsorption, wastewater treatment

Procedia PDF Downloads 70
6238 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients

Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori

Abstract:

Introduction: Data mining is defined as a process of finding patterns and relationships in the data in a database in order to build predictive models. Applications of data mining have extended into vast sectors such as healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases. This method applies various techniques and algorithms, which have different accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining includes several steps and decisions to be made by the user. It starts with creating an understanding of the scope and application of previous knowledge in this area and identifying the knowledge discovery process from the point of view of the stakeholders, and it finishes with acting on the discovered knowledge: applying the knowledge, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, NN, classification tree, and CN2 algorithms, as well as related similar studies, were evaluated, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that we can accurately diagnose asthma, approximately ninety percent of the time, based on demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have a higher sensitivity compared to expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should be the basis of diagnostic methods; it is therefore recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
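The reported metrics are all derived from a binary confusion matrix, as sketched below; the labels are toy values, not the Iranian patient data:

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy for a binary classifier
    (1 = asthma, 0 = no asthma)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    acc = (tp + tn) / len(y_true)
    return sens, spec, acc

# toy labels for illustration only
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
sens, spec, acc = confusion_metrics(y_true, y_pred)
```

Computing these per algorithm (KNN, SVM, Naïve Bayes, and so on) and sweeping the decision threshold to trace sensitivity against 1 - specificity yields the ROC curves the study reports.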

Keywords: asthma, data mining, classification, machine learning

Procedia PDF Downloads 429
6237 UV-Cured Thiol-ene Based Polymeric Phase Change Materials for Thermal Energy Storage

Authors: M. Vezir Kahraman, Emre Basturk

Abstract:

Energy storage technology offers new ways to meet the demand for efficient and reliable energy storage materials. Thermal energy storage systems provide the potential to achieve energy savings, which in turn decrease the environmental impact related to energy usage. For this purpose, phase change materials (PCMs), which work as 'latent heat storage units' that can store or release large amounts of energy, are preferred. PCMs absorb, store, and discharge thermal energy during the cycle of melting and freezing, as they convert from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates, and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. PCMs have found different application areas, such as solar energy storage and transfer, HVAC (heating, ventilating, and air conditioning) systems, thermal comfort in vehicles, passive cooling, temperature-controlled distribution, industrial waste heat recovery, underfloor heating systems, and modified fabrics in textiles. Ultraviolet (UV)-curing technology has many advantages, which have made it applicable in many different fields: low energy consumption, high speed, room-temperature operation, low processing costs, high chemical stability, and environmental friendliness. One of the many advantages of UV-cured PCMs is that they prevent the interior PCM from leaking. A shape-stabilized PCM is prepared by blending the PCM with a supporting material, usually a polymer. In our study, the leakage problem is minimized by coating the fatty alcohols with a photo-cross-linked thiol-ene based polymeric system, because the photo-cross-linked polymer acts as a matrix. The aim of this study is to introduce a novel thiol-ene based shape-stabilized PCM.
Photo-crosslinked thiol-ene based polymers containing fatty alcohols were prepared and characterized for use as phase change materials (PCMs). Different types of fatty alcohols were used in order to investigate their properties as shape-stable PCMs. The structure of the PCMs was confirmed by ATR-FTIR. The phase transition behaviors and thermal stability of the prepared photo-crosslinked PCMs were investigated by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). This work was supported by Marmara University, Commission of Scientific Research Project.
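The heat a PCM stores when heated across its melting point, which is what DSC quantifies, is the sum of sensible and latent terms; a minimal sketch, with assumed fatty-alcohol property values rather than the measured ones:

```python
def stored_heat(mass, t_start, t_end, t_melt, cp_solid, cp_liquid, latent):
    """Heat stored (J) by a PCM of given mass (kg) heated from t_start to
    t_end (C) across its melting point t_melt: sensible heat of the solid,
    plus the latent heat of fusion (J/kg), plus sensible heat of the liquid."""
    if not (t_start < t_melt < t_end):
        raise ValueError("expected t_start < t_melt < t_end")
    return mass * (cp_solid * (t_melt - t_start)
                   + latent
                   + cp_liquid * (t_end - t_melt))

# illustrative numbers for a fatty-alcohol PCM (assumed, not measured values)
q = stored_heat(mass=1.0, t_start=20.0, t_end=60.0, t_melt=49.0,
                cp_solid=2000.0, cp_liquid=2400.0, latent=200_000.0)
```

The dominance of the latent term over the sensible terms in this kind of estimate is precisely what makes shape-stabilized PCMs attractive for thermal energy storage.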

Keywords: differential scanning calorimetry (DSC), polymeric phase change material, thermal energy storage, UV-curing

Procedia PDF Downloads 211
6236 Fundamentals of Mobile Application Architecture

Authors: Mounir Filali

Abstract:

Companies use many innovative ways to reach their customers and stay ahead of the competition. Along with the growing demand for innovative business solutions comes a demand for new technology. The most noticeable area of demand for business innovation is the mobile application industry. Recently, companies have recognized the growing need to integrate proprietary mobile applications into their suite of services; they have realized that developing mobile apps gives them a competitive edge, and as a result, many have begun to rapidly develop mobile apps to stay ahead of the competition. Mobile application development helps companies meet the needs of their customers. Mobile apps also help businesses take advantage of every potential opportunity to generate leads that convert into sales. With the recent rise in demand for business-related mobile apps, there has been a similar rise in the range of mobile app solutions being offered. Today, companies can use the traditional route of a software development team to build their own mobile applications. However, there are also many platform-ready 'low-code and no-code' mobile app options available to choose from. These mobile app development options make for more streamlined business processes and help companies be more responsive to their customers without having to be coding experts. Companies must have a basic understanding of mobile app architecture to attract and maintain the interest of mobile app users. Mobile application architecture refers to the building blocks, structural systems, and design elements that make up a mobile application, as well as the technologies, processes, and components used during application development. All elements of the mobile application architecture form the underlying foundation of an application, and developing a good mobile app architecture requires proper planning and strategic design.
The technology framework or platform on the back end and user-facing side of a mobile application is part of the mobile architecture of the application. In-application development Software programmers loosely refer to this set of mobile architecture systems and processes as the "technology stack."

Keywords: mobile applications, development, architecture, technology

Procedia PDF Downloads 87
6235 A Comparative Legal Enquiry on the Concept of Invention

Authors: Giovanna Carugno

Abstract:

The concept of invention is rarely scrutinized by legal scholars, since it is a slippery one, full of nuances and difficult to define. When does an idea become relevant for patent law? When can one simply speak of an invention? This is the first question to be answered to obtain a patent, but it is sometimes neglected by treaties or reduced to very simple, automatically recited definitions, perhaps because it is more a transnational and cultural concept than a mere institution of law. Tautology is used to avoid the challenge (in United States patent regulation, the inventor is the one who contributed to a patentable invention); in other cases, a clear definition is surprisingly not even provided (see, e.g., the European Patent Convention). In Europe, the issue is still more complicated because there are several different solutions, elaborated unsystematically by national court systems, varying from one another only in order to solve different IP cases. A neighbouring domain, such as copyright law, does not assist us in this research either, since an author in that field is entitled to be the 'inventor' or the 'author' and to obtain protection as long as he produces something new. Novelty is not enough in patent law. A simple distinction between a mere improvement achievable by a man skilled in the art (a sort of reasonable man, in other sectors) and a non-obvious change rising to the dignity of protection does not seem to go far enough: it does not define the concept, and it is rigid and not fruitful. So, setting aside for the moment the issue of defining the invention/inventor, our proposal is to scrutinize the possible self-sufficiency of a system in which the inventor or the improver is awarded royalties or similar compensation according to the economic improvement he was able to bring.
The law, in this case, lies in the penumbra of misleading concepts, divided between facts that are obscure and technical and do not necessarily involve legal issues. The aim of this paper is to find a single definition (or, at least, the minimum elements common to the different legal systems) of what is (legally) an invention, and to identify the criteria for practically recognizing an authentic invention. In conclusion, it will propose an alternative system in which the invention is no longer considered and the only thing that matters is the revenue generated by technological improvement caused by the worker's activity.

Keywords: comparative law, intellectual property, invention, patents

Procedia PDF Downloads 168
6234 Engineering Thermal-Hydraulic Simulator Based on Complex Simulation Suite “Virtual Unit of Nuclear Power Plant”

Authors: Evgeny Obraztsov, Ilya Kremnev, Vitaly Sokolov, Maksim Gavrilov, Evgeny Tretyakov, Vladimir Kukhtevich, Vladimir Bezlepkin

Abstract:

Over the last decade, a specific set of connected software tools and calculation codes has been gradually developed. It allows simulating I&C systems and the thermal-hydraulic, neutron-physical and electrical processes in elements and systems of an NPP unit (initially with WWER, a pressurized water reactor). In 2012 it was named the complex simulation suite "Virtual Unit of NPP" (CSS "VEB" for short). Proper application of this complex tool results in a coupled mathematical computational model; for a specific NPP design, this is called the Virtual Power Unit (VPU for short). A VPU can be used for comprehensive modelling of power unit operation, checking operator functions on a virtual main control room, and modelling complicated scenarios for normal modes and accidents. In addition, CSS "VEB" contains a combination of thermal-hydraulic codes: the best-estimate (two-fluid) calculation codes KORSAR and CORTES and a homogeneous calculation code TPP. Thus, to analyze a specific technological system, one can build thermal-hydraulic simulation models with different levels of detail, up to a nodalization scheme with real geometry. In some respects the result is similar to the notion of an "engineering/testing simulator" described by the European Utility Requirements (EUR) for LWR nuclear power plants. The paper is dedicated to describing the tools mentioned above, with an example application of the engineering thermal-hydraulic simulator to the analysis of boron acid concentration in the primary coolant (changed by the make-up and boron control system).

Keywords: best-estimate code, complex simulation suite, engineering simulator, power plant, thermal hydraulic, VEB, virtual power unit

Procedia PDF Downloads 360
6233 Effective Emergency Response and Disaster Prevention: A Decision Support System for Urban Critical Infrastructure Management

Authors: M. Shahab Uddin, Pennung Warnitchai

Abstract:

Currently, more than half of the world's population lives in cities, and the number and size of cities are growing faster than ever. Cities rely on the effective functioning of complex and interdependent critical infrastructure networks to provide public services, enhance the quality of life, and protect the community from hazards and disasters. At the same time, complex connectivity and interdependency among urban critical infrastructures bring management challenges and make the urban system prone to the domino effect. Unplanned rapid growth, increased connectivity and interdependency among infrastructures, resource scarcity, and many other socio-political factors are affecting the typical state of an urban system and making it susceptible to numerous sorts of disruption. In addition to internal vulnerabilities, urban systems consistently face external threats from natural and manmade hazards. Cities are not just complex, interdependent systems; they are also hubs of the economy, politics, culture, and education. For survival and sustainability, complex urban systems need to manage their vulnerabilities and hazardous incidents more wisely and more interactively. Coordinated management in such systems offers huge potential for absorbing negative effects when some components function improperly. Conversely, ineffective management in a situation of overall disorder caused by a hazard's devastation may make the system more fragile and push it towards ultimate collapse. Accordingly, the current research hypothesizes that a hazardous event starts its journey as an emergency, and the system's internal vulnerability and response capacity determine its destination. Connectivity and interdependency among the urban critical infrastructures during this stage may transform vulnerabilities into a dynamic damaging force.
An emergency may turn into a disaster in the absence of effective management; similarly, mismanagement or lack of management may lead the situation towards a catastrophe. Situation awareness and factual decision-making are key. The current research proposes a contextual decision support system for an urban critical infrastructure system that integrates three models: 1) a damage cascade model, which demonstrates damage propagation among the infrastructures through their connectivity and interdependency; 2) a restoration model, a dynamic restoration process for individual infrastructure based on facility damage state and overall disruption in the surrounding support environment; and 3) an optimization model that ensures optimized utilization and distribution of available resources in and among the facilities. All three models are tightly connected and mutually interdependent, and together they can assess the situation and forecast the dynamic outputs of every input. Moreover, this integrated model will support disaster managers and decision-makers in checking all alternative decisions before implementation, and help produce the maximum possible outputs from the available limited inputs. The proposed model will not only help reduce the extent of damage cascades but will also ensure priority restoration and optimize resource utilization through adaptive and collaborative management. Complex systems predictably fail, but in unpredictable ways. System understanding, situation awareness, and factual decisions may significantly help an urban system survive and sustain itself.
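The damage-cascade idea, failures propagating through infrastructure interdependencies, can be illustrated with a minimal graph traversal. The network and failure rule below are hypothetical simplifications for illustration, not the study's calibrated model:

```python
from collections import deque

def cascade(dependencies, initially_failed):
    """Propagate failures through a directed interdependency graph.

    dependencies maps each infrastructure to the facilities that depend
    on it; here a facility fails as soon as any supplier fails, a
    deliberately simple worst-case rule.
    """
    failed = set(initially_failed)
    queue = deque(initially_failed)
    while queue:
        node = queue.popleft()
        for dependent in dependencies.get(node, ()):
            if dependent not in failed:
                failed.add(dependent)
                queue.append(dependent)
    return failed

# Hypothetical urban network: power feeds water pumping and telecom;
# telecom supports traffic control.
network = {
    "power": ["water", "telecom"],
    "telecom": ["traffic"],
    "water": [],
    "traffic": [],
}
print(sorted(cascade(network, {"power"})))
```

A real damage cascade model would replace the all-or-nothing rule with damage states and propagation probabilities, but the traversal structure is the same.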

Keywords: disaster prevention, decision support system, emergency response, urban critical infrastructure system

Procedia PDF Downloads 200
6232 Modeling Aggregation of Insoluble Phase in Reactors

Authors: A. Brener, B. Ismailov, G. Berdalieva

Abstract:

In this paper we present a modification of the kinetic Smoluchowski equation for binary aggregation, applied to systems with chemical reactions of first and second order in which the main product is insoluble. The goal of this work is to create a theoretical foundation and engineering procedures for calculating chemical apparatuses under the joint action of chemical reactions and the aggregation of the insoluble dispersed phases formed in the working zones of the reactor.
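A schematic form of the kind of modification described, the standard binary Smoluchowski kinetics supplemented by a reaction source term, can be written as follows (the symbols are generic, not the authors' exact notation):

```latex
\frac{dn_k}{dt} \;=\; \frac{1}{2}\sum_{i+j=k} K_{ij}\, n_i n_j
\;-\; n_k \sum_{j\ge 1} K_{kj}\, n_j \;+\; S_k(c),
```

where $n_k$ is the number density of $k$-mer clusters, $K_{ij}$ is the aggregation kernel, and $S_k(c)$ is the rate at which insoluble monomers are generated by the chemical reaction with reagent concentration $c$, e.g. $S_1 = k_1 c$ for a first-order reaction or $S_1 = k_2 c^2$ for a second-order one, with $S_{k>1} = 0$.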

Keywords: binary aggregation, clusters, chemical reactions, insoluble phases

Procedia PDF Downloads 290
6231 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting

Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva

Abstract:

The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems for collecting ambient energy, such as solar, vibratory, thermal, electromagnetic, and radiofrequency (RF) harvesting, among others. In the case of RF it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at the frequencies of 1.8 GHz and 2.45 GHz. The 1.8 GHz band is part of the GSM/LTE band. GSM (Global System for Mobile Communications) is a mobile telephony frequency band, also called second-generation (2G) mobile networking; it standardized mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) band, which is internationally reserved for industrial, scientific and medical development with no need for licensing; its only restrictions relate to maximum transmitted power and bandwidth, which must be kept within certain limits (in Brazil the band is 2.4-2.4835 GHz). The rectenna presented in this work was designed to present efficiency above 50% for an input power of -15 dBm. In wireless energy capture systems the signal power is very low and varies greatly; for this reason this ultra-low input power was chosen. The rectenna was built using the low-cost FR-4 (flame retardant) substrate. The selected antenna is a microstrip antenna, consisting of a meandered dipole, optimized using CST Studio software.
This antenna has high efficiency, high gain and high directivity. Gain describes how efficiently an antenna captures the signals transmitted by another antenna and/or station. Directivity is the ability of an antenna to capture energy preferentially in a certain direction. The rectifier circuit uses a series topology and was optimized using Keysight's ADS software. The rectifier circuit is the most complex part of the rectenna, since it includes the diode, a non-linear component. The chosen diode is the Schottky diode SMS 7630, which presents a low barrier voltage (between 135 and 240 mV) and a wider band compared to other types of diodes; these attributes make it well suited to this type of application. The rectifier circuit also uses an inductor and a capacitor, which form part of its input and output filters. The inductor decreases the dispersion effect on the efficiency of the rectifier circuit. The capacitor eliminates the AC component of the rectifier output and smooths the signal.
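Rectenna efficiency is the ratio of rectified DC output power to incident RF power. The snippet below converts the stated -15 dBm input to watts and shows the efficiency calculation; the DC output value is a made-up illustration, not a measured result from this work:

```python
def dbm_to_watts(p_dbm):
    """Convert power in dBm to watts: P[W] = 1 mW * 10^(P[dBm]/10)."""
    return 1e-3 * 10 ** (p_dbm / 10)

def rf_dc_efficiency(p_dc_watts, p_rf_dbm):
    """RF-to-DC conversion efficiency of a rectenna."""
    return p_dc_watts / dbm_to_watts(p_rf_dbm)

p_in = dbm_to_watts(-15)  # about 31.6 microwatts of incident RF power
print(f"input power: {p_in * 1e6:.1f} uW")

# A hypothetical DC output of 17 uW would correspond to ~54% efficiency,
# above the >50% target quoted in the abstract.
print(f"efficiency: {rf_dc_efficiency(17e-6, -15):.1%}")
```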

Keywords: dipole antenna, double-band, high efficiency, rectenna

Procedia PDF Downloads 102
6230 Agent-Based Modelling to Improve Dairy-origin Beef Production: Model Description and Evaluation

Authors: Addisu H. Addis, Hugh T. Blair, Paul R. Kenyon, Stephen T. Morris, Nicola M. Schreurs, Dorian J. Garrick

Abstract:

Agent-based modeling (ABM) enables an in silico representation of complex systems and captures agent behavior resulting from interaction with other agents and their environment. This study developed an ABM to represent pasture-based beef cattle finishing systems in New Zealand (NZ) using attributes of the rearer, finisher, and processor, as well as specific attributes of dairy-origin beef cattle. The model was parameterized using values representing 1% of NZ dairy-origin cattle and 10% of rearers and finishers in NZ. The cattle agents consisted of 32% Holstein-Friesian, 50% Holstein-Friesian–Jersey crossbred, and 8% Jersey, with the remainder being other breeds. Rearers and finishers repetitively and simultaneously interacted to determine the type and number of cattle populating the finishing system. Rearers brought in four-day-old spring-born calves and reared them until 60 calves (representing a full truckload) had an average live weight of 100 kg before selling them on to finishers. Finishers mainly obtained weaners from rearers, or directly from dairy farmers when weaner demand was higher than the supply from rearers. Fast-growing cattle were sent for slaughter before the second winter, and the remainder were sent before their third winter. The model finished a higher number of bulls than heifers and steers, although this was 4% lower than the industry-reported value. Holstein-Friesian and Holstein-Friesian–Jersey-crossbred cattle dominated the dairy-origin beef finishing system. Jersey cattle accounted for less than 5% of total processed beef cattle. Further studies including retailer and consumer perspectives and other decision alternatives for finishing farms would improve the applicability of the model for decision-making processes.
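The rearer-finisher interaction described above can be sketched as a toy two-agent loop. All class names, parameters (truckload size, growth rates, threshold) and the random growth distribution are illustrative assumptions, not the model's calibration:

```python
import random

random.seed(1)

class Rearer:
    """Toy rearer agent: accumulates reared weaners until a full
    truckload (60 head) is ready, then sells loads to a finisher."""
    TRUCKLOAD = 60

    def __init__(self):
        self.pen = []

    def rear(self, calves):
        self.pen.extend(calves)

    def sell(self):
        loads = []
        while len(self.pen) >= self.TRUCKLOAD:
            loads.append(self.pen[:self.TRUCKLOAD])
            self.pen = self.pen[self.TRUCKLOAD:]
        return loads

class Finisher:
    """Toy finisher agent: buys truckloads and slaughters fast growers
    early (before the second winter), retaining the remainder."""
    def __init__(self):
        self.cattle = []

    def buy(self, load):
        self.cattle.extend(load)

    def slaughter_early(self, growth_threshold):
        fast = [c for c in self.cattle if c >= growth_threshold]
        self.cattle = [c for c in self.cattle if c < growth_threshold]
        return fast

rearer, finisher = Rearer(), Finisher()
# 130 calves, each represented by a hypothetical daily gain (kg/day)
rearer.rear([random.uniform(0.6, 1.4) for _ in range(130)])
for load in rearer.sell():          # two full truckloads; 10 head remain
    finisher.buy(load)
early = finisher.slaughter_early(growth_threshold=1.0)
print(len(early), "slaughtered early,", len(finisher.cattle), "retained")
```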

Keywords: agent-based modelling, dairy cattle, beef finishing, rearers, finishers

Procedia PDF Downloads 73
6229 The Current State of Human Gait Simulator Development

Authors: Stepanov Ivan, Musalimov Viktor, Monahov Uriy

Abstract:

This report examines the current state of human gait simulator development based on a model of the human hip joint. The unit will create a database of human gait types, useful for setting up and calibrating mechanotherapy devices, as well as for the creation of new rehabilitation systems, exoskeletons and walking robots. The system offers ample opportunity to configure dimensions and stiffness, while maintaining relative simplicity.

Keywords: hip joint, human gait, physiotherapy, simulation

Procedia PDF Downloads 388
6228 Development of a Sustainable Municipal Solid Waste Management for an Urban Area: Case Study from a Developing Country

Authors: Anil Kumar Gupta, Dronadula Venkata Sai Praneeth, Brajesh Dubey, Arundhuti Devi, Suravi Kalita, Khanindra Sharma

Abstract:

Increases in urbanization and industrialization have led to improved standards of living. At the same time, however, the challenges posed by improper solid waste management are also increasing. Municipal solid waste management is considered a vital step in the development of urban infrastructure. The present study focuses on developing a solid waste management plan for an urban area in a developing country. The current state of solid waste management practices at various urban bodies in India is summarized. Guwahati, a city in the northeastern part of the country and one of the targeted smart cities under the government's Smart Cities program, was chosen as the case study for developing and implementing the solid waste management plan. The city was divided into divisions, waste samples were collected according to American Society for Testing and Materials (ASTM) D5231-92 (2016) for each division, and a composite sample was prepared to represent the waste from the entire city. Physical and chemical characterization of the solid waste, mainly proximate and ultimate analysis, was carried out. Existing primary and secondary collection systems were studied and possibilities for enhancing them were discussed. The composition of solid waste for the overall city was found to be: organic matter 38%, plastic 27%, paper and cardboard 15%, textile 9%, inert 7% and others 4%. During the conference presentation, further characterization results in terms of thermal gravimetric analysis (TGA), pH and water-holding capacity will be discussed. Waste management options optimizing activities such as recycling, recovery, reuse and reduction will be presented and discussed.

Keywords: proximate, recycling, thermal gravimetric analysis (TGA), solid waste management

Procedia PDF Downloads 167
6227 Characterization of Optical Communication Channels as Non-Deterministic Model

Authors: Valentina Alessandra Carvalho do Vale, Elmo Thiago Lins Cöuras Ford

Abstract:

Increasingly, telecommunications sectors are adopting optical technologies due to their ability to transmit large amounts of data over long distances. However, as in all data transmission systems, optical communication channels suffer from undesirable, non-deterministic effects, and it is essential to understand them. This research assesses these effects, characterizes them, and explores possible beneficial uses of them.

Keywords: optical communication, optical fiber, non-deterministic effects, telecommunication

Procedia PDF Downloads 770
6226 Consequence of Multi-Templating of Closely Related Structural Analogues on a Chitosan-Methacryllic Acid Molecularly Imprinted Polymer Matrix-Thermal and Chromatographic Traits

Authors: O. Ofoegbu, S. Roongnapa, A. N. Eboatu

Abstract:

Most polluted environments, most challengingly aerosol types, contain a cocktail of different toxicants. Multi-templating of matrices has been a recent target of researchers in a bid to solve complex mixed-toxicant challenges using single or common remediation systems. This investigation examines the effect of such a multi-templated system through the synthesis, by non-covalent interaction, of a molecularly imprinted polymer architecture using nicotine and its structural analogue phenylalanine amide, individually and in a 50:50 blend, as template materials in a chitosan-methacrylic acid functional monomer matrix. The polymerization temperature was 60 °C, and the polymerization time was 12 h (water-bath heating) or 4 min (microwave heating). The characteristic thermal properties of the molecularly imprinted materials were investigated using simultaneous thermal analysis (STA) profiling, while the absorption and separation efficiencies, based on the relative retention times and peak areas of the templates, were studied among other properties. Transmission electron microscopy (TEM) results show the creation of heterogeneous nanocavities; nevertheless, the introduction of caffeine, a close structural analogue, presented near-zero perfusion. This confirms the selectivity and specificity of the templated polymers despite their dual-templated nature. The STA results show the materials to have decomposition temperatures above 250 °C and a relative mass loss of less than 19% within 50 min of heating. Based on this outcome, multi-templated systems can be fabricated to sequester specifically and selectively targeted toxicants in a mixed-toxicant system effectively.

Keywords: chitosan, dual-templated, methacrylic acid, mixed-toxicants, molecularly-imprinted-polymer

Procedia PDF Downloads 103
6225 Enhanced Exchange Bias in Poly-crystalline Compounds through Oxygen Vacancy and B-site Disorder

Authors: Koustav Pal, Indranil Das

Abstract:

In recent times, perovskite and double perovskite (DP) systems have attracted a lot of interest because they provide a rich material platform for studying emergent functionalities such as near-room-temperature ferromagnetic (FM) insulators, exchange bias (EB), magnetocaloric effects, colossal magnetoresistance, and anisotropy. These interesting phenomena emerge from complex couplings between spin, charge, orbital, and lattice degrees of freedom. Various magnetic phenomena, such as exchange bias, spin glass behavior, memory effects, and colossal magnetoresistance, can be modified and controlled through antisite (B-site) disorder or by controlling the oxygen concentration of the material. By controlling the oxygen concentration in SrFe0.5Co0.5O3−δ (SFCO) (δ ∼ 0.3), we achieve an intrinsic exchange bias effect with a large exchange bias field (∼1.482 tesla) and a giant coercive field (∼1.454 tesla). We then modified the B-site by introducing 10% iridium into the system, which raised the exchange bias field to 1.865 tesla and the coercive field to 1.863 tesla. Our work investigates the effect of oxygen deficiency and B-site disorder on exchange bias in oxide materials for potential technological applications. Structural characterization techniques including X-ray diffraction, scanning tunneling microscopy, and transmission electron microscopy were utilized to determine crystal structure and particle size. X-ray photoelectron spectroscopy was used to identify the valence states of the ions. Magnetic analysis revealed that oxygen deficiency resulted in a large exchange bias due to a significant degree of ionic mixture. Iridium doping was found to break interaction paths, resulting in various antiferromagnetic and ferromagnetic surfaces that enhance exchange bias.
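The exchange bias and coercive fields quoted above are conventionally extracted from the two field-axis intercepts of a shifted hysteresis loop, as H_EB = -(H_c1 + H_c2)/2 and H_C = |H_c1 - H_c2|/2. A minimal helper, with illustrative intercepts chosen to reproduce the reported magnitudes (not the paper's raw data):

```python
def loop_fields(h_c1, h_c2):
    """Exchange bias field and coercive field (in tesla) from the
    negative- and positive-branch field-axis intercepts of a
    magnetization hysteresis loop."""
    h_eb = -(h_c1 + h_c2) / 2   # loop shift along the field axis
    h_c = abs(h_c1 - h_c2) / 2  # half-width of the loop
    return h_eb, h_c

# Illustrative intercepts yielding fields of the magnitude reported
# for the Ir-doped sample (~1.865 T bias, ~1.863 T coercivity).
h_eb, h_c = loop_fields(-3.728, -0.002)
print(f"H_EB = {h_eb:.3f} T, H_C = {h_c:.3f} T")
```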

Keywords: coercive field, disorder, exchange bias, spin glass

Procedia PDF Downloads 58
6224 Practical Experiences in the Development of a Lab-Scale Process for the Production and Recovery of Fucoxanthin

Authors: Alma Gómez-Loredo, José González-Valdez, Jorge Benavides, Marco Rito-Palomares

Abstract:

Fucoxanthin is a carotenoid that exerts multiple beneficial effects on human health, including antioxidant, anti-cancer, antidiabetic and anti-obesity activity, making the development of a whole process for its production and recovery an important contribution. In this work, the lab-scale production and purification of fucoxanthin from Isochrysis galbana have been studied. In batch cultures, low light intensity (13.5 μmol/m²s) and bubble agitation were the best conditions for production of the carotenoid, with product yields of up to 0.143 mg/g. After ethanolic extraction of fucoxanthin from biomass and hexane partition, further recovery and purification of the carotenoid was accomplished by means of alcohol-salt aqueous two-phase system (ATPS) extraction followed by an ultrafiltration (UF) step. Among the studied systems, an ATPS comprising ethanol and potassium phosphate (volume ratio (VR) = 3; tie-line length (TLL) 60% w/w) presented a fucoxanthin recovery yield of 76.24 ± 1.60% and was able to remove 64.89 ± 2.64% of the contaminating carotenoids and chlorophylls. For UF, adjusting the ethanol content of the recovered ethanolic ATPS stream to a final 74.15% (w/w) reduced the protein content by approximately 16%, increasing product purity with a recovery yield of about 63% of the compound in the permeate stream. Considering the production, extraction and primary recovery (ATPS and UF) steps, around 45% global fucoxanthin recovery should be expected. Although other purification technologies, such as centrifugal partition chromatography, are able to obtain fucoxanthin recoveries of up to 83%, the process developed in the present work does not require large volumes of solvents or expensive equipment. Moreover, it has potential for scale-up to commercial scale and represents a cost-effective strategy when compared to traditional separation techniques like chromatography.
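The ~45% global figure follows from multiplying the stepwise fractional yields. A small sketch; note the upstream extraction yield used below is back-calculated for illustration and is not stated in the abstract:

```python
from functools import reduce

def global_recovery(step_yields):
    """Overall recovery of a multi-step process as the product of the
    fractional yields of each step."""
    return reduce(lambda acc, y: acc * y, step_yields, 1.0)

atps, uf = 0.7624, 0.63  # yields reported for the ATPS and UF steps
print(f"ATPS x UF alone: {global_recovery([atps, uf]):.1%}")   # ~48%

# A hypothetical upstream extraction/partition yield near 94% would
# bring the chain down to the ~45% global recovery quoted.
print(f"with extraction: {global_recovery([0.94, atps, uf]):.1%}")
```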

Keywords: aqueous two-phase systems, fucoxanthin, Isochrysis galbana, microalgae, ultrafiltration

Procedia PDF Downloads 404
6223 Exploring Partnership Brokering Science in Social Entrepreneurship: A Literature Review

Authors: Lani Fraizer

Abstract:

Increasingly, individuals from diverse professional and academic backgrounds are making a conscious choice to pursue careers related to social change, and a sophisticated understanding of social entrepreneurship education is becoming ever more important. Social entrepreneurs are impassioned change makers who characteristically combine leadership and entrepreneurial spirit to solve the social ills affecting our planet. Generating partnership opportunities and nurturing them is an important part of their change-making work. Faced with the complexities of these partnerships, social entrepreneurs and the people who work with them need to be well prepared to tackle new and unforeseen challenges. As partnerships become ever more critical to advancing initiatives at scale, understanding the partnership brokering role is especially important for educators who prepare these leaders to establish and sustain multi-stakeholder partnerships. This paper aims to provide practitioners in social entrepreneurship with enhanced knowledge of partnership brokering and to identify directions for future research. A literature search covering January 1977 to May 2015 was conducted using the combined keywords 'partnership brokering' and 'social entrepreneurship' via WorldCat, one of the largest database catalogs in the world, with collections from more than 10,000 libraries worldwide. The query focused on literature written in English and analyzed solely the role of partnership brokering in social entrepreneurship. The synthesis of the literature review found three main themes: the need for more professional awareness of partnership brokering and the value it adds to systems change-making work; the need for more knowledge on developing partnership brokering competencies; and the need for more applied research on partnership brokering as practiced by practitioners in social entrepreneurship.
The results of the review emphasize the importance of partnership brokers in social entrepreneurship work and act as a reminder of the need for further scholarly research in this area to bridge the gap between practice and research.

Keywords: partnership brokering, leadership, social entrepreneurship, systems changemaking

Procedia PDF Downloads 329
6222 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option for tackling these challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small real-world systems.
The most prominent EC is subset accuracy. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique reduces the time needed to establish test automation and is independent of application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
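The core idea, predicting a set of automation components for an unseen test-step specification from historical step/implementation pairs and scoring with subset accuracy, can be sketched without any ML library. Here a nearest-neighbour match on token overlap stands in for the paper's supervised model, and all step texts and component names are invented:

```python
def tokens(text):
    return set(text.lower().split())

def predict(step, history):
    """Label a step with the component set of the most similar
    historical step (token-overlap similarity)."""
    best = max(history, key=lambda h: len(tokens(step) & tokens(h[0])))
    return best[1]

def subset_accuracy(truths, preds):
    """Fraction of samples whose predicted label *set* matches the
    true set exactly (the strictest multi-label criterion)."""
    hits = sum(t == p for t, p in zip(truths, preds))
    return hits / len(truths)

# Hypothetical historical data: step specification -> components used
history = [
    ("press ignition button", {"ButtonDriver"}),
    ("read can signal speed", {"CanReader", "SignalDecoder"}),
    ("check warning lamp state", {"LampChecker"}),
]
steps = ["read can signal rpm", "press start button"]
truth = [{"CanReader", "SignalDecoder"}, {"ButtonDriver"}]
preds = [predict(s, history) for s in steps]
print(subset_accuracy(truth, preds))
```

Subset accuracy gives no credit for partially correct label sets, which is why the multi-label figure (60%) in the abstract sits well below the multi-class figure (86%).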

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 111
6221 Chronic Fatigue Syndrome/Myalgic Encephalomyelitis in Younger Children: A Qualitative Analysis of Families’ Experiences of the Condition and Perspective on Treatment

Authors: Amberly Brigden, Ali Heawood, Emma C. Anderson, Richard Morris, Esther Crawley

Abstract:

Background: Paediatric chronic fatigue syndrome (CFS)/myalgic encephalomyelitis (ME) is characterised by persistent, disabling fatigue. Health services see patients below the age of 12. This age group experiences high levels of disability, with low school attendance and high levels of fatigue, anxiety, functional disability and pain. CFS/ME interventions have been developed for adolescents, but the developmental needs of younger children suggest treatment should be tailored to this age group. Little is known about how intervention should be delivered to this age group, and further work is needed to explore this. Qualitative research aids the patient-centred design of health interventions. Methods: Five- to 11-year-olds and their parents were recruited from a specialist CFS/ME service. Semi-structured interviews explored the families' experience of the condition and perspectives on treatment. Interactive and arts-based methods were used. Interviews were audio-recorded, transcribed and analysed thematically. Results: 14 parents and 7 children were interviewed. Early analysis of the interviews revealed the importance of the social-ecological setting of the child, which led to themes being developed in the context of systems theory. Theme 1 relates to the level of the child, theme 2 to the family system, theme 3 to the organisational and societal systems, and theme 4 cuts across all levels. Theme 1: The child's capacity to describe, understand and manage their condition. Younger children struggled to describe their internal experiences, such as physical symptoms. Parents felt younger children did not understand some concepts of CFS/ME and did not have the capability to monitor and self-regulate their behaviour, as required by treatment. A spectrum of abilities was described; older children (10- to 11-year-olds) were more involved in clinical sessions and had more responsibility for self-management.
Theme 2: Parents' responsibility for managing their child's condition. Parents took responsibility for regulating their child's behaviour in accordance with the treatment programme. They structured their child's environment, gave direct instructions to their child, and communicated the needs of their child to others involved in care. Parents wanted their child to experience a 'normal' childhood and took steps to shield their child from medicalization, including diagnostic labels and clinical discussions. Theme 3: Parental isolation and the role of organisational and societal systems. Parents felt unsupported in their role of managing the condition, and felt that negative responses from primary care health services and schools were underpinned by a lack of awareness and knowledge about CFS/ME in younger children. This sometimes led to a protracted time to diagnosis. Parents felt that schools have a potentially important role in managing the child's condition. Theme 4: Complexity and uncertainty. Many parents valued specialist treatment (which included activity management, physiotherapy, sleep management, dietary advice, medical management and psychological support), but felt it needed to account for the complexity of the condition in younger children. Some parents expressed uncertainty about the diagnosis and the treatment programme. Conclusions: Interventions for younger children need to consider the 'systems' (family, organisational and societal) involved in the child's care. Future research will include interviews with clinicians and schools supporting younger children with CFS/ME.

Keywords: chronic fatigue syndrome (CFS)/myalgic encephalomyelitis (ME), pediatric, qualitative, treatment

Procedia PDF Downloads 121