Search results for: efficient DMUs
4487 Sustainable Connectivity: Power-Line Communications for Home Automation in Ethiopia
Authors: Tsegahun Milkesa
Abstract:
This study investigates the implementation of Power-Line Communications (PLC) as a sustainable solution for home automation in Ethiopia. With the country's growing technological landscape and the quest for efficient energy use, this research explores the potential of PLC to facilitate smart home systems, aiming to enhance connectivity and energy management. The primary objective is to assess the feasibility and effectiveness of PLC in Ethiopian residences, considering factors such as infrastructure compatibility, reliability, and scalability. By analyzing existing PLC technologies and their adaptability to local contexts, this study aims to propose optimized solutions tailored to the Ethiopian environment. The research methodology involves a combination of literature review, field surveys, and experimental setups to evaluate PLC's performance in transmitting data and controlling various home appliances. Additionally, socioeconomic implications, including affordability and accessibility, are examined to ensure the technology's inclusivity in diverse Ethiopian households. The findings will contribute insights into the viability of PLC for sustainable connectivity in Ethiopian homes, shedding light on its potential to revolutionize energy-efficient and interconnected living spaces. Ultimately, this study seeks to pave the way for accessible and eco-friendly smart home solutions in Ethiopia, aligning with the nation's aspirations for technological advancement and sustainability.
Keywords: sustainable connectivity, power-line communications (PLC), home automation, Ethiopia, smart homes, energy efficiency, connectivity solutions, infrastructure development, sustainable living
Procedia PDF Downloads 74
4486 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building
Authors: Aaditya U. Jhamb
Abstract:
Due to the current energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research in the modern technological era because of growing worries about energy consumption and its effects on the environment. The paper explores eight factors that help determine energy efficiency for a building (relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution), using the dataset provided by Tsanas and Xifara. The dataset comprises 768 different residential building models, used to anticipate heating and cooling loads with a low mean squared error. By optimizing these characteristics, machine learning algorithms may assess and properly forecast a building's heating and cooling loads, lowering energy usage while increasing the quality of people's lives. As a result, the paper studied the magnitude of the correlation between these input factors and the two output variables using various statistical methods of analysis, after determining which input variable was most closely associated with the output loads. The most conclusive model was the Decision Tree Regressor, which had a mean squared error of 0.258, whilst the least definitive model was the Isotonic Regressor, which had a mean squared error of 21.68. This paper also investigated the KNN Regressor and the Linear Regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, the model, given the eight input variables, was able to predict the heating and cooling loads of a residential building accurately and precisely.
Keywords: energy efficient buildings, heating load, cooling load, machine learning models
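As an illustration of the comparison described above, the following is a minimal sketch (not the paper's own code) that fits the four regressors on the Tsanas and Xifara energy efficiency dataset with scikit-learn. The CSV path and column names are assumptions, and isotonic regression, being univariate, is fit here on the single feature most correlated with the load.

```python
# Minimal sketch: comparing the four regressors reported in the abstract on the
# UCI Energy Efficiency dataset (Tsanas & Xifara). Column names and the CSV
# path are assumptions, not the paper's actual setup.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import mean_squared_error

df = pd.read_csv("energy_efficiency.csv")  # assumed local copy of the dataset
X, y = df.drop(columns=["heating_load", "cooling_load"]), df["heating_load"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("DecisionTree", DecisionTreeRegressor(random_state=0)),
                    ("KNN", KNeighborsRegressor()),
                    ("Linear", LinearRegression())]:
    model.fit(X_tr, y_tr)
    print(name, mean_squared_error(y_te, model.predict(X_te)))

# Isotonic regression takes one input; use the feature most correlated with y.
best = X_tr.corrwith(y_tr).abs().idxmax()
iso = IsotonicRegression(out_of_bounds="clip").fit(X_tr[best], y_tr)
print("Isotonic", mean_squared_error(y_te, iso.predict(X_te[best])))
```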
Procedia PDF Downloads 92
4485 Mastering Test Automation: Bridging Gaps for Seamless QA
Authors: Rohit Khankhoje
Abstract:
The rapid evolution of software development practices has given rise to an increasing demand for efficient and effective test automation. The paper titled "Mastering Test Automation: Bridging Gaps for Seamless QA" delves into the crucial aspects of test automation, addressing the obstacles faced by organizations in achieving flawless quality assurance. The paper highlights the importance of bridging knowledge gaps within organizations, emphasizing the necessity for management to acquire a deeper comprehension of test automation scenarios, coverage, report trends, and the importance of communication. To tackle these challenges, this paper introduces innovative solutions, including the development of an automation framework that seamlessly integrates with test cases and reporting tools like TestRail and Jira. This integration facilitates the automatic recording of bugs in Jira, enhancing bug reporting and communication between manual QA and automation teams, and ensures that TestRail reflects every newly added automated test case as soon as it becomes part of the automation suite. The paper demonstrates how this framework empowers management by providing clear insights into ongoing automation activities, bug origins, trend analysis, and test case specifics. "Mastering Test Automation" serves as a comprehensive guide for organizations aiming to enhance their quality assurance processes through effective test automation. It not only identifies the common pitfalls and challenges but also offers practical solutions to bridge the gaps, resulting in a more streamlined and efficient QA process.
Keywords: automation framework, API integration, test automation, test management tools
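The paper does not publish its framework code; the sketch below illustrates the kind of integration it describes, posting an automated result to TestRail and opening a Jira bug on failure. The URLs, project key, and credentials are placeholders, and the endpoints shown follow the publicly documented TestRail and Jira REST APIs.

```python
# Minimal sketch of the reporting integration described above: push an
# automated test result to TestRail and, on failure, open a Jira bug.
# All URLs, keys, and IDs are placeholder assumptions.
import requests

TESTRAIL, JIRA = "https://example.testrail.io", "https://example.atlassian.net"
AUTH = ("bot@example.com", "api-token")  # assumed API-token basic auth

def report_result(run_id: int, case_id: int, passed: bool, details: str) -> None:
    # TestRail: status_id 1 = passed, 5 = failed
    requests.post(
        f"{TESTRAIL}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}",
        json={"status_id": 1 if passed else 5, "comment": details},
        auth=AUTH,
    ).raise_for_status()
    if not passed:
        # Jira: record the bug automatically so manual QA sees it immediately
        requests.post(
            f"{JIRA}/rest/api/2/issue",
            json={"fields": {
                "project": {"key": "QA"},
                "issuetype": {"name": "Bug"},
                "summary": f"Automated case {case_id} failed",
                "description": details,
            }},
            auth=AUTH,
        ).raise_for_status()
```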
Procedia PDF Downloads 72
4484 Biosorption of Lead (II) from Lead Acid Battery Industry Wastewater by Immobilized Dead Isolated Bacterial Biomass
Authors: Harikrishna Yadav Nanganuru, Narasimhulu Korrapati
Abstract:
Over the years, many sites in the world have been contaminated with heavy metals, which are the largest class of contaminants. Lead is one of the toxic heavy metals contaminating the environment. Lead is not biodegradable; once taken up by humans, it accumulates in the body and impacts all of its systems. The accumulation of lead in the water environment has been showing adverse effects on public health. The removal of lead from the water environment by the biosorption process, which has emerged as a potential method for lead removal, is therefore an efficient approach. This work focused on examining the removal of lead [Pb (II)] ions from aqueous solution and from battery industry effluent. Lead contamination in water is a widespread problem throughout the world and mainly results from lead acid battery manufacturing effluent. In this work, bacteria isolated from the wastewater of a lead acid battery industry were utilized for the removal of lead. First, effluent from the lead acid battery industry was characterized by inductively coupled plasma atomic emission spectrometry (ICP-AES). The bacteria were then isolated from the effluent, and their immobilized dead mass was used for the biosorption of lead. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) studies clearly suggested that the lead (Pb) was adsorbed efficiently. The adsorbed percentage of lead (II) from the waste was 97.40%, with the lead (II) concentration measured by atomic absorption spectroscopy (AAS). From the AAS results, it can be concluded that the immobilized dead mass of the isolate was efficient and useful for the biosorption of lead-contaminated wastewater.
Keywords: biosorption, ICP-AES, lead (Pb), SEM
Procedia PDF Downloads 383
4483 A Case Report of Aberrant Vascular Anatomy of the Deep Inferior Epigastric Artery Flap
Authors: Karissa Graham, Andrew Campbell-Lloyd
Abstract:
The deep inferior epigastric artery perforator flap (DIEP) is used to reconstruct large volumes of tissue. The DIEP flap is based on the deep inferior epigastric artery (DIEA) and vein. Accurate knowledge of the anatomy of these vessels allows for efficient dissection of the flap, minimal damage to surrounding tissue, and a well vascularized flap. A 54-year-old woman was assessed for bilateral delayed autologous reconstruction with DIEP free flaps. The right DIEA was consistent with the described anatomy. The left DIEA had a vessel branching shortly after leaving the external iliac artery and before entering the muscle. This independent branch entered the muscle and had a long intramuscular course to the largest perforator. The main DIEA vessel demonstrated a type II branching pattern but had perforators that were too small to support a viable DIEP flap. There were no communicating arterial branches between the independent vessel and the DIEA; however, there was one venous communication between them. A muscle sparing transverse rectus abdominis muscle flap was raised using the main periumbilical perforator from the independent vessel. Our case report demonstrates an unreported anatomical variant of the DIEA. A few anatomical variants have been described in the literature, including a unilaterally absent DIEA and peritoneal-cutaneous perforators that had no connection to the DIEA. Performing a pre-operative CTA helps to identify these rare anatomical variations, which leads to safer, more efficient, and more effective operating.
Keywords: aberrant anatomy, CT angiography, DIEP anatomy, free flap
Procedia PDF Downloads 131
4482 Efficient Depolymerization of Polyethylene terephthalate (PET) Using Bimetallic Catalysts
Authors: Akmuhammet Karayev, Hassam Mazhar, Mamdouh Al Harthi
Abstract:
Polyethylene terephthalate (PET) recycling stands as a pivotal solution in combating plastic pollution and fostering a circular economy. This study addresses the catalytic glycolysis of PET, a key step in its recycling process, using synthesized catalysts. Our focus lies in elucidating the catalytic mechanism, optimizing reaction kinetics, and enhancing reactor design for efficient PET conversion. We synthesized anionic clays tailored for PET glycolysis and comprehensively characterized them using XRD, FT-IR, BET, DSC, and TGA techniques, confirming their suitability as catalysts. Through systematic parametric studies, we optimized reaction conditions to achieve complete PET conversion to bis(2-hydroxyethyl) terephthalate (BHET) with over 75% yield within 2 hours at 200°C, employing a minimal catalyst concentration of 0.5%. These results underscore the catalysts' exceptional efficiency and sustainability, positioning them as frontrunners in catalyzing PET recycling processes. Furthermore, we demonstrated the recyclability of the obtained BHET by repolymerizing it back to PET without the need for a catalyst. Heating the BHET in a distillation unit facilitated its conversion back to PET, highlighting the closed-loop potential of our recycling approach. Our work embodies a significant leap in catalytic glycolysis kinetics, driven by sustainable catalysts, offering rapid and high-impact PET conversion while minimizing environmental footprint. This breakthrough not only sets new benchmarks for efficiency in PET recycling but also exemplifies the pivotal role of catalysis and reaction engineering in advancing sustainable materials management.
Keywords: polymer recycling, catalysis, circular economy, glycolysis
Procedia PDF Downloads 40
4481 Unleashing the Power of Cerebrospinal System for a Better Computer Architecture
Authors: Lakshmi N. Reddi, Akanksha Varma Sagi
Abstract:
Studies on biomimetics are largely developed, deriving inspiration from natural processes in our objective world to develop novel technologies. Recent studies are diverse in nature, making their categorization quite challenging. Based on an exhaustive survey, we developed categorizations based on either the essential elements of nature - air, water, land, fire, and space - or on form/shape, functionality, and process. Such diverse studies as aircraft wings inspired by bird wings, a self-cleaning coating inspired by a lotus petal, wetsuits inspired by beaver fur, and search algorithms inspired by arboreal ant path networks lend themselves to these categorizations. Our categorizations of biomimetic studies allowed us to define a different dimension of biomimetics. This new dimension is not restricted to inspiration from the objective world. It is based on the premise that the biological processes observed in the objective world find their reflections in our human bodies in a variety of ways. For example, the lungs provide the most efficient example of liquid-gas phase exchange, the heart exemplifies a very efficient pumping and circulatory system, and the kidneys epitomize the most effective cleaning system. The main focus of this paper is to bring out the magnificence of the cerebrospinal system (CSS) insofar as it relates to our current computer architecture. In particular, the paper uses four key measures to analyze the differences between the CSS and human-engineered computational systems. These are adaptability, sustainability, energy efficiency, and resilience. We found that the cerebrospinal system reveals some important challenges in the development and evolution of our current computer architectures. In particular, the myriad ways in which the CSS is integrated with other systems/processes (circulatory, respiratory, etc.) offer useful insights on how human-engineered computational systems could be made more sustainable, energy-efficient, resilient, and adaptable. In our paper, we highlight the energy consumption differences between the CSS and our current computational designs. Apart from the obvious differences in the materials used between the two, the systemic nature of how the CSS functions provides clues to enhance the life-cycles of our current computational systems. The rapid formation and changes in the physiology of dendritic spines and their synaptic plasticity causing memory changes (e.g., long-term potentiation and long-term depression) allowed us to formulate differences in the adaptability and resilience of the CSS. In addition, the CSS is sustained by the integrative functions of various organs, and its robustness comes from its interdependence with the circulatory system. The paper documents and analyzes quantifiable differences between the two in terms of the four measures. Our analyses point out the possibilities in the development of computational systems that are more adaptable, sustainable, energy efficient, and resilient. It concludes with potential approaches for technological advancement through the creation of more interconnected and interdependent systems to replicate the effective operation of the cerebrospinal system.
Keywords: cerebrospinal system, computer architecture, adaptability, sustainability, resilience, energy efficiency
Procedia PDF Downloads 96
4480 High Catalytic Activity and Stability of Ginger Peroxidase Immobilized on Amino Functionalized Silica Coated Titanium Dioxide Nanocomposite: A Promising Tool for Bioremediation
Authors: Misha Ali, Qayyum Husain, Nida Alam, Masood Ahmad
Abstract:
Improving the activity and stability of an enzyme is an important aspect of bioremediation processes. Immobilization of the enzyme is an efficient approach to tailor the properties of the biocatalyst required during wastewater treatment. The present study was done to immobilize partially purified ginger peroxidase on an amino functionalized silica coated titanium dioxide nanocomposite. Interestingly, there was an enhancement in enzyme activity after immobilization on the nanosupport, which was evident from the effectiveness factor (η) value of 1.76. The immobilized enzyme was characterized by transmission electron microscopy, scanning electron microscopy, and Fourier transform infrared spectroscopy. Immobilized peroxidase exhibited higher activity over a broad range of pH and temperature as compared to the free enzyme. Also, the thermostability of peroxidase was strikingly improved upon immobilization. After six repeated uses, the immobilized peroxidase retained around 62% of its dye decolorization activity. There was a 4-fold increase in the Vmax of the immobilized peroxidase as compared to the free enzyme. Circular dichroism spectroscopy demonstrated conformational changes in the secondary structure of the enzyme, a possible reason for the enhanced enzyme activity after immobilization. Immobilized peroxidase was highly efficient in the removal of acid yellow 42 dye in a stirred batch process. Our study shows that this bio-remediating system has remarkable potential for the treatment of aromatic pollutants present in wastewater.
Keywords: acid yellow 42, decolorization, ginger peroxidase, immobilization
Procedia PDF Downloads 247
4479 Miracle Fruit Application in Sour Beverages: Effect of Different Concentrations on the Temporal Sensory Profile and Overall Liking
Authors: Jéssica F. Rodrigues, Amanda C. Andrade, Sabrina C. Bastos, Sandra B. Coelho, Ana Carla M. Pinheiro
Abstract:
Currently, there is a great demand for the use of natural sweeteners due to the harmful health effects of high sugar and artificial sweetener consumption. Miracle fruit, which is known for its unique ability to modify a sour taste into a sweet taste, has been shown to be a good alternative sweetener. However, it has a high production cost, making it important to optimize the lowest effective content. Thus, the aim of this study was to assess the effect of different miracle fruit contents on the temporal (Time-intensity - TI and Temporal Dominance of Sensations - TDS) sensory profile and overall liking of lemonade, to determine the best content to be used as a natural sweetener in sour beverages. TI and TDS results showed that concentrations of 150 mg, 300 mg, and 600 mg miracle fruit were effective in reducing the acidity and promoting the sweet perception in lemonade. Furthermore, the concentrations of 300 mg and 600 mg yielded similar profiles. Through the acceptance test, the concentration of 300 mg miracle fruit was shown to be an efficient substitute for sucrose and sucralose in lemonade, since they had similar hedonic values between 'I liked it slightly' and 'I liked it moderately'. Therefore, 300 mg of miracle fruit is an adequate content to be used as a natural sweetener in lemonade. The results of this work will help the food industry in the efficient application of a new natural sweetener, miracle fruit extract, in sour beverages, reducing costs and providing a product that meets consumer desires.
Keywords: acceptance, natural sweetener, temporal dominance of sensations, time-intensity
Procedia PDF Downloads 248
4478 Nucleophile Mediated Addition-Fragmentation Generation of Aryl Radicals from Aryl Diazonium Salts
Authors: Elene Tatunashvili, Bun Chan, Philippe E. Nashar, Christopher S. P. McErlean
Abstract:
The reduction of aryl diazonium salts is one of the most efficient ways to generate aryl radicals for use in a wide range of transformations, including Sandmeyer-type reactions, Meerwein arylations of olefins, and Gomberg-Bachmann-Hey arylations of heteroaromatic systems. The aryl diazonium species can be reduced electrochemically, by UV irradiation, by inner-sphere and outer-sphere single electron transfer (SET) processes from metal salts, by SET from photo-excited organic catalysts, or by fragmentation of adducts with weak bases (acetate, hydroxide, etc.). This paper details an approach for the metal-free reduction of aryl diazonium salts, which facilitates the efficient synthesis of various aromatic compounds under exceedingly mild reaction conditions. By measuring the oxidation potential of a number of organic molecules, a series of nucleophiles were identified that reduce aryl diazonium salts via the addition-fragmentation mechanism. This approach leads to unprecedented operational simplicity: the reactions are very rapid and proceed in the open air; there is no need for external irradiation or heating, and the process is compatible with a large number of radical reactions. We illustrate these advantages by using the addition-fragmentation strategy to regioselectively arylate a series of heterocyclic compounds, to synthesize ketones by arylation of silyl enol ethers, and to synthesize benzothiophene and phenanthrene derivatives by radical annulation reactions.
Keywords: diazonium salts, Hantzsch esters, oxygen, radical reactions, synthetic methods
Procedia PDF Downloads 148
4477 Buildings Founded on Thermal Insulation Layer Subjected to Earthquake Load
Authors: David Koren, Vojko Kilar
Abstract:
Modern energy-efficient houses are often founded on a thermal insulation (TI) layer placed under the building's RC foundation slab. The purpose of the paper is to identify the potential problems of buildings founded on a TI layer from the seismic point of view. The two main goals of the study were to assess the seismic behavior of such buildings, and to search for the critical structural parameters affecting the response of the superstructure as well as of the extruded polystyrene (XPS) layer. As a test building, a multi-storeyed RC frame structure with and without the XPS layer under the foundation slab has been investigated utilizing nonlinear dynamic (time-history) and static (pushover) analyses. The structural response has been investigated with reference to the following performance parameters: i) Building's lateral roof displacements, ii) Edge compressive and shear strains of the XPS, iii) Horizontal accelerations of the superstructure, iv) Plastic hinge patterns of the superstructure, v) Part of the foundation in compression, and vi) Deformations of the underlying soil and vertical displacements of the foundation slab (i.e. identifying the potential uplift). The results have shown that in the case of taller and stiffer structures lying on firm soil, the use of XPS under the foundation slab might induce amplified structural peak responses compared to the building models without XPS under the foundation slab. The analysis has revealed that the superstructure as well as the XPS response is substantially affected by the stiffness of the foundation slab.
Keywords: extruded polystyrene (XPS), foundation on thermal insulation, energy-efficient buildings, nonlinear seismic analysis, seismic response, soil–structure interaction
Procedia PDF Downloads 300
4476 Aircraft Components, Manufacturing and Design: Opportunities, Bottlenecks, and Challenges
Authors: Ionel Botef
Abstract:
Aerospace products operate in very aggressive environments characterized by high temperature, high pressure, large stresses on individual components, the presence of an oxidizing and corroding atmosphere, as well as internally created or externally ingested particulate materials that induce erosion and impact damage. Consequently, during operation, the materials of individual components degrade. In addition, the impact of maintenance costs for both civil and military aircraft was estimated to be at least two to three times greater than initial purchase values, and this trend is expected to increase. As a result, for viable product realisation and maintenance, a spectrum of issues regarding novel processing technologies, innovation of new materials, performance, costs, and environmental impact must constantly be addressed. One of these technologies, namely the cold-gas dynamic-spray process, has enabled a broad range of coatings and applications, including many that have not been previously possible or commercially practical, hence its potential for new aerospace applications. Therefore, the purpose of this paper is to summarise the state of the art of this technology alongside its theoretical and experimental studies, and explore how the cold-gas dynamic-spray process could be integrated within a framework that finally could lead to more efficient aircraft maintenance. Based on the paper's qualitative findings, supported by authorities, evidence, and logic, it is argued that the cold-gas dynamic-spray manufacturing process should not be viewed in isolation, but as a component of a broad framework that finally leads to more efficient aerospace operations.
Keywords: aerospace, aging aircraft, cold spray, materials
Procedia PDF Downloads 117
4475 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective
Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli
Abstract:
In this paper, an energy-aware method is presented, integrating energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by utilizing data correlation patterns. The paper introduces a game theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing correlation-aware routing with energy limitations. By creating an algorithm that blends energy-aware route selection strategies with best response dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing the best routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.
Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks
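The abstract does not detail the algorithm, so the following is a generic sketch of best response dynamics for route selection under stated assumptions: the payoff function and candidate route sets are illustrative stand-ins for the paper's throughput/energy metric.

```python
# Rough sketch of best response dynamics for route selection: each source, in
# turn, switches to the route maximizing its own payoff given the others' fixed
# choices, until no source wants to deviate (a local equilibrium). The payoff
# function and candidate route sets are illustrative assumptions.
def best_response_dynamics(sources, routes, payoff, max_rounds=100):
    # sources: list of source ids; routes[s]: candidate routes for source s
    # payoff(s, choice): utility of source s under the joint choice dict
    choice = {s: routes[s][0] for s in sources}       # arbitrary initial profile
    for _ in range(max_rounds):
        stable = True
        for s in sources:
            best = max(routes[s], key=lambda r: payoff(s, {**choice, s: r}))
            if best != choice[s]:
                choice[s], stable = best, False       # s deviates to best response
        if stable:                                    # no one deviated: equilibrium
            return choice
    return choice
```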
Procedia PDF Downloads 81
4474 Photocatalytic Disintegration of Naphthalene and Naphthalene-Similar Compounds in Indoor Air
Authors: Tobias Schnabel
Abstract:
Naphthalene and naphthalene-similar compounds are a common problem in the indoor air of buildings from the 1960s and 1970s in Germany. Tar-containing roof felt was often used under the concrete floor to prevent humidity from coming through the floor. This tar-containing roof felt has high concentrations of PAHs (polycyclic aromatic hydrocarbons) and naphthalene. Naphthalene easily evaporates and contaminates the indoor air. Especially after renovations and the energetic modernization of the buildings, the naphthalene concentration rises because no forced air exchange can happen. Because of this problem, it is often necessary to change the floors after the renovation of the buildings. The MFPA Weimar (materials research and testing facility) developed a project in cooperation with LEJ GmbH and Reichmann Gebäudetechnik GmbH. It is a technical solution for the disintegration of naphthalene and naphthalene-similar compounds in indoor air with photocatalytic reforming. Photocatalytic systems produce active oxygen species (hydroxyl radicals) by irradiating semiconductors at the wavelength of their bandgap. The light energy separates the charges in the semiconductor and produces free electrons in the conduction band and holes (defect electrons). The holes can react with hydroxide ions to form hydroxyl radicals. The produced hydroxyl radicals are a strong oxidation agent and can oxidize organic matter to carbon dioxide and water. During the research, new titanium dioxide catalyst surface coatings were developed. This coating technology allows the production of a very porous titanium dioxide layer on temperature-stable carrier materials. The porosity allows the naphthalene to be easily absorbed by the surface coating, which accelerates the reaction of the heterogeneous photocatalysis. The photocatalytic reaction is induced by high-power, high-efficiency UV-A (ultraviolet) LEDs with a wavelength of 365 nm. Various tests in emission chambers and on the reformer itself show that a reduction of naphthalene at relevant concentrations between 2 and 250 µg/m³ is possible. The disintegration rate was at least 80%. To reduce the concentration of naphthalene from 30 µg/m³ to a level below 5 µg/m³ in a typical 50 m² classroom, an energy of 6 kWh is needed. The benefit of photocatalytic indoor air treatment is that every organic compound in the air can be disintegrated and reduced. The use of new photocatalytic materials in combination with highly efficient UV LEDs makes a safe and energy-efficient reduction of organic compounds in indoor air possible. At the moment, the air cleaning systems are taking the step from the prototype stage into use in real buildings.
Keywords: naphthalene, titanium dioxide, indoor air, photocatalysis
Procedia PDF Downloads 142
4473 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning
Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar
Abstract:
As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, distributed computing may suffer from resource waste and high costs. Resource scheduling is usually an NP-hard problem, so no general solution exists, although optimization algorithms such as genetic algorithms and ant colony optimization are available. The large scale of distributed systems makes these traditional optimization algorithms challenging to apply, so heuristic and machine learning algorithms are usually used in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using machine learning methods, we try to find important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize the resource scheduling, and discusses the challenges and improvement directions for DRL-based resource scheduling algorithms.
Keywords: resource scheduling, deep reinforcement learning, distributed system, artificial intelligence
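As a rough illustration of the proposed direction, the sketch below shows a recurrent policy network that picks a machine for each incoming task and is trained with a simple REINFORCE update. The state encoding, reward (e.g., negative makespan), and network sizes are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: a GRU policy scores which machine should receive the next
# task, trained with the REINFORCE gradient estimator. Feature dimensions and
# the reward signal are illustrative assumptions.
import torch
import torch.nn as nn

class SchedulerPolicy(nn.Module):
    def __init__(self, feat_dim: int, n_machines: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_machines)

    def forward(self, task_seq):                  # (1, T, feat_dim)
        out, _ = self.rnn(task_seq)               # hidden state summarizes history
        return torch.distributions.Categorical(logits=self.head(out[:, -1]))

policy = SchedulerPolicy(feat_dim=8, n_machines=4)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def train_step(task_seq, env_step):
    # env_step(machine_index) -> reward, e.g. negative makespan of the schedule
    dist = policy(task_seq)
    action = dist.sample()                        # machine chosen for next task
    reward = env_step(action.item())
    loss = -(dist.log_prob(action) * reward).mean()  # REINFORCE update
    opt.zero_grad()
    loss.backward()
    opt.step()
```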
Procedia PDF Downloads 110
4472 Hybrid Graphene Based Nanomaterial as Highly Efficient Catalyst for the Electrochemical Determination of Ciprofloxacin
Authors: Tien S. H. Pham, Peter J. Mahon, Aimin Yu
Abstract:
The detection of drug molecules by voltammetry has attracted great interest over the past years. However, many drug molecules exhibit poor electrochemical signals at common electrodes, which results in low detection sensitivity. An efficient way to overcome this problem is to modify electrodes with functional materials. Since its discovery in 2004, graphene (or reduced graphene oxide) has emerged as one of the most studied two-dimensional carbon materials in condensed matter physics, electrochemistry, and related fields due to its exceptional physicochemical properties. Additionally, the continuous development of technology has opened a new window for the successful fabrication of many novel graphene-based nanomaterials to serve in electrochemical analysis. This research aims to synthesize and characterize gold nanoparticle coated beta-cyclodextrin functionalized reduced graphene oxide (Au NP–β-CD–RGO) nanocomposites with highly conductive and strongly electro-catalytic properties as well as excellent supramolecular recognition abilities for the modification of electrodes. The electrochemical response of ciprofloxacin at the as-prepared nanocomposite-modified electrode was effectively amplified and was much higher in comparison with that at the bare electrode. The linear concentration range was from 0.01 to 120 µM, with a detection limit of 2.7 nM using differential pulse voltammetry. Thus, the Au NP–β-CD–RGO nanocomposite has great potential as an ideal material to construct sensitive sensors for the electrochemical determination of ciprofloxacin or similar antibacterial drugs in the future, based on its excellent stability, selectivity, and reproducibility.
Keywords: Au nanoparticles, β-CD, ciprofloxacin, electrochemical determination, graphene based nanomaterials
Procedia PDF Downloads 186
4471 Emergence of Information Centric Networking and Web Content Mining: A Future Efficient Internet Architecture
Authors: Sajjad Akbar, Rabia Bashir
Abstract:
With the growing number of users, Internet usage has evolved, and thanks to its key design principles, the Internet has expanded incredibly in size. This tremendous growth of the Internet has brought new applications (mobile video and cloud computing) as well as new user requirements, i.e., a content distribution environment, mobility, ubiquity, security, and trust. Users are more interested in content than in the peer nodes they communicate with. The current Internet architecture is a host-centric networking approach, which is not suitable for these types of applications. With the growing use of multiple interactive applications, the host-centric approach is considered less efficient as it depends on physical location. For this reason, Information Centric Networking (ICN) is considered the potential future Internet architecture. It is an approach that introduces uniquely named data as a core Internet principle. It uses a receiver-oriented rather than a sender-oriented approach, and introduces a name-based information system at the network layer. Although ICN is considered a future Internet architecture, it has drawn much criticism, mainly concerning how ICN will manage the most relevant content. Here, Web Content Mining (WCM) approaches can help with the appropriate data management of ICN. To address this issue, this paper contributes by (i) discussing multiple ICN approaches, (ii) analyzing different Web Content Mining approaches, and (iii) creating a new Internet architecture by merging ICN and WCM to solve the data management issues of ICN. From ICN, Content-Centric Networking (CCN) is selected for the new architecture, whereas the agent-based approach from Web Content Mining is selected to find the most appropriate data.
Keywords: agent based web content mining, content centric networking, information centric networking
Procedia PDF Downloads 473
4470 Smartphone Application for Social Inclusion of Deaf Parents and Children About Sphincter Training
Authors: Júlia Alarcon Pinto, Carlos João Schaffhausser, Gustavo Alarcon Pinto
Abstract:
Introduction: Deaf people in Brazil communicate through Brazilian Sign Language (LIBRAS), which is known only to this minority and to people who have received training. However, there is a lack of professionals in the health system prepared to deal with these patients. Therefore, effective communication, health education, and the quality of support and assistance are compromised. It is of utmost importance to develop measures that ensure the inclusion of deaf parents and children, since there are frequent doubts about sphincter training and an absence of tools to promote effective communication between doctors and their patients. Objective: Use of an efficient, rapid, and cheap communication method to promote social inclusion and patient education of deaf parents and children during pediatric appointments. Results: The application demonstrates how to express phrases and symptoms within seconds, which allows patients to fully understand the information provided during the appointment and to evaluate the signs of readiness, learn the correct approaches with the child, the adequate instruments, the possible obstacles, and the importance of following medical guidance in order to achieve success in the process. Consequently, patients feel more satisfied, secure, and embraced by professionals in the health care system. Conclusion: It is of utmost importance to use efficient and cheap methods that support patient care and education in order to promote health and social inclusion.
Keywords: application, deaf patients, social inclusion, sphincter training
Procedia PDF Downloads 119
4469 Building Information Modelling (BIM) and Unmanned Aerial Vehicles (UAV) Technologies in Road Construction Project Monitoring and Management: Case Study of a Project in Cyprus
Authors: Yiannis Vacanas, Kyriacos Themistocleous, Athos Agapiou, Diofantos Hadjimitsis
Abstract:
Building Information Modelling (BIM) technology is considered by construction professionals to be a very valuable process in modern design, procurement, and project management. Construction professionals of all disciplines can use the single 3D model which BIM technology provides to design a project accurately and, furthermore, to monitor the progress of construction works effectively and efficiently. Unmanned Aerial Vehicles (UAVs), a technology initially developed for military applications, are now readily accessible and have already been used by commercial industries, including the construction industry. UAV technology has mainly been used for the collection of images that allow visual monitoring of building and civil engineering project conditions in various circumstances. UAVs, nevertheless, have undergone significant advances in equipment capabilities and now have the capacity to acquire high-resolution imagery from many angles in a cost-effective manner, and by using photogrammetry methods, one can determine characteristics such as distances, angles, areas, volumes, and elevations of an area within overlapping images. In order to examine the potential of using a combination of BIM and UAV technologies in construction project management, this paper presents the results of a case study of a typical road construction project where the two technologies were combined to achieve efficient and accurate as-built data collection of the works progress, with outcomes such as volumes and the production of sections and 3D models, information necessary for project progress monitoring and efficient project management.
Keywords: BIM, project management, project monitoring, UAV
Procedia PDF Downloads 301
4468 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further data processing and analysis. These ever-increasing data demands not only require more and more capacity in the transmission channels but also result in resource over-provisioning to meet the resilience requirements, and thus unavoidable waste because of the data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges from these data transmissions pose serious issues for the environment we live in. Therefore, to overcome the issues of intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient and carbon-reducing approach that utilizes the daily mobility of the existing vehicles as an alternative communications channel to accommodate the data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take the city of Auckland in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region must be delivered to the control centres located in the city centre. The numerical results show that our proposed approach can provide up to 5 times lower delay when transferring the large volume of data by utilizing the existing daily vehicles' mobility than the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach.
Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
Procedia PDF Downloads 140
4467 iCount: An Automated Swine Detection and Production Monitoring System Based on Sobel Filter and Ellipse Fitting Model
Authors: Jocelyn B. Barbosa, Angeli L. Magbaril, Mariel T. Sabanal, John Paul T. Galario, Mikka P. Baldovino
Abstract:
The use of technology has become ubiquitous in different areas of business today. With the advent of digital imaging and database technology, business owners have been motivated to integrate technology into their business operations, ranging from small and medium to large enterprises. Hog or swine raising, for example, is a very popular enterprise in the Philippines whose challenges in production monitoring can be addressed through technology integration. Swine production monitoring can become a tedious task as the enterprise grows larger. Specifically, problems like delayed and inconsistent reports are most likely to happen if the counting of swine per pen in each building is done manually. In this study, we present iCount, which aims to ensure efficient swine detection and counting that hastens the swine production monitoring task. We develop a system that automatically detects and counts swine based on the Sobel filter and an ellipse fitting model, given still photos of the group of swine captured in a pen. We improve the Sobel filter detection result through an 8-neighborhood rule implementation. An ellipse fitting technique is then employed for proper swine detection. Furthermore, the system can generate periodic production reports and can identify the specific consumables to be served to the swine according to schedules. Experiments reveal that our algorithm provides an efficient way of detecting swine, thereby providing a significant amount of accuracy in production monitoring.
Keywords: automatic swine counting, swine detection, swine production monitoring, ellipse fitting model, sobel filter
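Below is a minimal sketch of the pipeline named in the title (Sobel edge extraction followed by ellipse fitting), using OpenCV. The thresholds, area filter, and elongation test are illustrative assumptions, and the paper's 8-neighborhood refinement step is omitted.

```python
# Minimal sketch: Sobel gradient magnitude -> binary edges -> contours ->
# ellipse fitting, counting elongated blobs as swine. All numeric thresholds
# are illustrative assumptions.
import cv2

def count_swine(image_path: str, min_area: float = 2000.0) -> int:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradients
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradients
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    _, binary = cv2.threshold(edges, 60, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    count = 0
    for c in contours:
        if len(c) >= 5 and cv2.contourArea(c) >= min_area:
            (w, h) = cv2.fitEllipse(c)[1]             # ellipse axis lengths
            ratio = max(w, h) / max(min(w, h), 1e-6)
            if 1.2 <= ratio <= 4.0:                   # elongated blob ~ one swine
                count += 1
    return count

print(count_swine("pen_photo.jpg"))                   # hypothetical input image
```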
Procedia PDF Downloads 311
4466 Establishment of an Efficient Platform for Genome Editing in Grapevine
Authors: S. Najafi, E. Bertini, M. Pezzotti, G.B. Tornielli, S. Zenoni
Abstract:
Grapevine is an important agricultural fruit crop consumed worldwide, with a key role in the global economy. Grapevine is strongly affected by both biotic and abiotic stresses, which impact grape growth at different stages, such as during plant and berry development and pre- and post-harvest, consequently causing significant economic losses. Recently, global warming has brought forward the onset of berry ripening, reducing grape color and increasing the volatilization of aroma compounds. Climate change could negatively alter the physiological characteristics of the grape and affect berry and wine quality. Modern plant breeding can provide tools such as genome editing for improving grape resilience traits while keeping the viticultural and oenological quality characteristics of the genotype intact. This study aims at developing a platform for the application of genome editing in grapevine plants, with the final goal of improving berry quality and biotic and abiotic resilience traits. We chose to directly deliver ribonucleoproteins (RNPs, preassembled Cas protein and guide RNA) into plant protoplasts and, from these cell structures, regenerate grapevine plants edited in specific selected genes controlling traits of interest. Edited plants regenerated by somatic embryogenesis from protoplasts will then be sequenced and molecularly characterized. Embryogenic calli of the Sultana and Shiraz cultivars were initiated from unopened leaves of in-vitro shoot tip cultures and from stamens, respectively. Leaves were placed on NB2 medium, while stamens were placed on callus initiation (PIV) medium, and both were incubated in the dark at 28 °C for three months. Viable protoplasts, tested by FDA staining and isolated from embryogenic calli, were cultured by the disc method at 1×10⁵ protoplasts/ml. Mature, well-shaped somatic embryos developed directly in the protoplast culture medium two months later and were transferred, in the light, into shooting medium for further growth. Regenerated plants were then transferred to the greenhouse; no phenotypic alterations were observed when compared to non in-vitro cultured plants. The experiments performed allowed us to establish an efficient protocol of embryogenic callus production, protoplast isolation, and regeneration of the whole plant through somatic embryogenesis in both Sultana and Shiraz. Plants regenerated through direct somatic embryogenesis deriving from a single cell avoid the risk of chimerism during the regeneration process, thereby improving the genome editing process. As a prerequisite for genome editing, an efficient method for the transfection of protoplasts with yellow fluorescent protein (YFP) marker genes was also established, and experiments on the direct delivery of CRISPR-Cas9 ribonucleoproteins (RNPs) into protoplasts to achieve efficient DNA-free targeted mutations are in progress.
Keywords: CRISPR-Cas9, plant regeneration, protoplast isolation, Vitis vinifera
Procedia PDF Downloads 147
4465 Efficient Human Motion Detection Feature Set by Using Local Phase Quantization Method
Authors: Arwa Alzughaibi
Abstract:
Human motion detection is a challenging task due to a number of factors, including variable appearance, posture, and a wide range of illumination conditions and backgrounds. The first requirement of such a model is therefore a reliable feature set that can discriminate between a human and a non-human form with a fair amount of confidence, even under difficult conditions. By having richer representations, the classification task becomes easier and improved results can be achieved. The aim of this paper is to investigate reliable and accurate human motion detection models that are able to detect human motion accurately under varying illumination levels and backgrounds. Different sets of features are tried and tested, including Histogram of Oriented Gradients (HOG), Deformable Parts Model (DPM), Local Decorrelated Channel Features (LDCF), and Aggregate Channel Features (ACF). We propose an efficient and reliable human motion detection approach by combining the histogram of oriented gradients (HOG) and local phase quantization (LPQ) as the feature set, and implementing a search pruning algorithm based on optical flow to reduce the number of false positives. Experimental results show that combining the local phase quantization descriptor with the histogram of oriented gradients performs well over a large range of illumination conditions and backgrounds compared with state-of-the-art human detectors. The area under the ROC curve (AUC) of the proposed method reached 0.781 for the UCF dataset and 0.826 for the CDW dataset, which indicates that it performs better than the HOG, DPM, LDCF, and ACF methods.
Keywords: human motion detection, histograms of oriented gradient, local phase quantization
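A minimal sketch of the LPQ descriptor is given below: a windowed Fourier transform at four low frequencies whose phase is quantized to the signs of the real and imaginary parts, yielding an 8-bit code per pixel and a 256-bin histogram. The window size is an assumption, and the usual decorrelation step is omitted for brevity.

```python
# Minimal LPQ sketch in NumPy/SciPy: separable STFT kernels at four low
# frequencies; each pixel gets 8 sign bits (Re/Im of 4 coefficients), and the
# image is summarized as a normalized 256-bin code histogram.
import numpy as np
from scipy.signal import convolve2d

def lpq_histogram(img: np.ndarray, win: int = 7) -> np.ndarray:
    x = np.arange(win) - (win - 1) / 2
    w0 = np.ones(win)                          # DC kernel
    w1 = np.exp(-2j * np.pi * x / win)         # lowest non-zero frequency
    w2 = np.conj(w1)

    def conv(image, row_k, col_k):             # separable 2-D convolution
        tmp = convolve2d(image, row_k.reshape(-1, 1), mode="valid")
        return convolve2d(tmp, col_k.reshape(1, -1), mode="valid")

    freqs = [conv(img, w0, w1), conv(img, w1, w0),
             conv(img, w1, w1), conv(img, w1, w2)]
    code = np.zeros(freqs[0].shape, dtype=np.int32)
    for k, f in enumerate(freqs):              # 2 bits (Re, Im signs) per freq
        code += (f.real > 0).astype(np.int32) << (2 * k)
        code += (f.imag > 0).astype(np.int32) << (2 * k + 1)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                   # normalized 256-bin descriptor
```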
Procedia PDF Downloads 256
4464 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems
Authors: Riadh Zorgati, Thomas Triboulet
Abstract:
In quite diverse application areas such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e. existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number plays the role of an amplifier of uncertainties on data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where using interior point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on a preconditioning of the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian positive semi-definite matrices and then generalized to any complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition requested in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some pathological well-known test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the known classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix
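The paper's own generalized-inverse construction is not reproduced in the abstract; for reference, the following is a minimal sketch of the classical Kaczmarz iteration cited above as a comparison point, applied to a small ill-conditioned Hilbert system.

```python
# Minimal sketch of the classical Kaczmarz method: cycle through the rows of A,
# projecting the iterate onto each row's hyperplane {x : a_i . x = b_i}.
import numpy as np

def kaczmarz(A: np.ndarray, b: np.ndarray, sweeps: int = 100) -> np.ndarray:
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a   # orthogonal projection step
    return x

# Tiny demo on an ill-conditioned Hilbert system (cond ~ 1.5e4 for n = 4)
n = 4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
x_hat = kaczmarz(A, A @ x_true, sweeps=5000)
print(np.linalg.norm(x_hat - x_true))           # residual error of the iterate
```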
Procedia PDF Downloads 131
4463 Sustainable and Efficient Recovery of Polyhydroxyalkanoate Polymer from Cupriavidus necator Using Environment Friendly Solvents
Authors: Geeta Gahlawat, Sanjeev Kumar Soni
Abstract:
The imprudent use of environmentally hazardous petrochemical-based plastics and the limited availability of fossil fuels have provoked research interest in the production of biodegradable plastics - polyhydroxyalkanoates (PHAs). However, the industrial application of PHA-based products is primarily restricted by the high cost of their recovery and extraction protocols. Moreover, the solvents used for extraction and purification are toxic and volatile, which causes adverse environmental effects. The development of efficient downstream recovery strategies along with the utilization of non-toxic solvents will accelerate their commercialization. In this study, various extraction strategies were designed for the sustainable and cost-effective recovery of PHAs from Cupriavidus necator using non-toxic, environment-friendly solvents, viz. 1,2-propylene carbonate, ethyl acetate, isoamyl alcohol, and butyl acetate. The effect of incubation time (10, 30, and 50 min) and temperature (60, 80, 100, and 120°C) was tested to identify the most suitable solvent. PHA extraction using a recyclable solvent, 1,2-propylene carbonate, showed the highest recovery yield (90%) and purity (93%) at 120°C and 30 min incubation. Ethyl acetate showed a better capacity to recover PHAs from cells than butyl acetate. Extraction with ethyl acetate exhibited a high recovery yield and purity of 96% and 92%, respectively, at 100°C. The effect of a non-toxic surfactant such as linear alkylbenzene sulfonic acid (LAS) was also studied at 40, 60, and 80°C, and in a detergent pH range of 3.0, 5.0, 7.0, and 9.0, for the extraction of PHAs from the cells. LAS gave the highest yield of 86% and purity of 88% at a temperature of 80°C and a pH of 5.0.
Keywords: polyhydroxyalkanoates, Cupriavidus necator, extraction, recovery yield
Procedia PDF Downloads 509
4462 The Use of Sustainability Criteria on Infrastructure Design to Encourage Sustainable Engineering Solutions on Infrastructure Projects
Authors: Shian Saroop, Dhiren Allopi
Abstract:
In order to stay competitive and to meet upcoming stricter environmental regulations and customer requirements, designers have a key role in designing civil infrastructure so that it is environmentally sustainable. There is an urgent need for engineers to apply technologies and methods that deliver better and more sustainable performance of civil infrastructure, as well as a need to establish a standard of measurement for greener infrastructure, rather than merely using traditional solutions. However, there are no systems in place at the design stage that assess the environmental impact of design decisions on township infrastructure projects. This paper identifies alternative eco-efficient civil infrastructure design solutions and develops sustainability criteria and a toolkit to analyse the eco-efficiency of infrastructure projects. The proposed toolkit is aimed at promoting high-performance, eco-efficient, economical and environmentally friendly design decisions on stormwater, roads, water and sanitation related to township infrastructure projects. These green solutions would bring a whole new class of eco-friendly solutions to current infrastructure problems, while at the same time adding a fresh perspective to the traditional infrastructure design process. A variety of projects were evaluated using the green infrastructure toolkit, and their results are compared to each other to assess the results of using greener infrastructure versus the traditional method of designing infrastructure. The application of 'green technology' would ensure a sustainable design of township infrastructure services, helping designs consider alternative resources, the environmental impacts of design decisions, ecological sensitivity issues, innovation, maintenance and materials at the design stage of a project.
Keywords: eco-efficiency, green infrastructure, infrastructure design, sustainable development
Procedia PDF Downloads 226
4461 Influence of Humidity on Environmental Sustainability, Air Quality and Occupant Health
Authors: E. Cintura, M. I. Gomes
Abstract:
Nowadays, sustainable development issues have a key role in the planning of the man-made environment. Ensuring this development means limiting the impact of human activity on nature. It is essential to secure healthy places and good living conditions. For these reasons, indoor air quality and building materials play a fundamental role in sustainable architectural projects. These factors significantly affect human health: they can radically change the quality of the internal environment and energy consumption. The use of natural materials such as earth has many beneficial aspects for comfort and indoor air quality. As well as reducing the environmental impact of construction, they ensure low energy consumption: since they are already present in nature, their production and use do not require high energy inputs. Furthermore, they have a high thermo-hygrometric capacity, being able to absorb moisture and thus contributing positively to indoor conditions. Indoor air quality is closely related to relative humidity. For these reasons, it can be affirmed that the use of earth materials guarantees sustainable development and at the same time improves the health of building users. This paper summarizes several studies that demonstrate the importance of indoor air quality for human health and how strictly it depends on the building materials used. Eco-efficient plasters are also considered: earth and ash mortars. The bibliography consulted has the objective of supporting future experimental and laboratory analyses. It is necessary to carry on with research through simulations and testing to confirm the hygrothermal properties of eco-efficient plasters and therefore their ability to improve indoor air quality.
Keywords: hygroscopicity, hygrothermal comfort, mortar, plaster
Procedia PDF Downloads 138
4460 Agarose Amplification Based Sequencing (AG-seq) Characterization of Cell-free RNA in Preimplantation Spent Embryo Medium
Authors: Huajuan Shi
Abstract:
Background: Biopsy of the preimplantation embryo may increase potential risks and concerns about embryo viability. Clinically discarded spent embryo medium (SEM) has entered the view of researchers, sparking an interest in noninvasive embryo screening. However, one of the major restrictions is the extremely low quantity of cf-RNA, which is difficult to amplify efficiently and without bias using traditional methods. Hence, there is an urgent need for an efficient, low-bias amplification method which can comprehensively and accurately obtain cf-RNA information to truly reveal the state of SEM cf-RNA. Result: In the present study, we established an agarose PCR amplification system, which significantly improved amplification sensitivity and efficiency by ~90-fold and 9.29%, respectively. We applied agarose to sequencing library preparation (named AG-seq) to quantify and characterize cf-RNA in SEM. The number of detected cf-RNAs (3533 vs 598) and the coverage of the 3' end were significantly increased, and the noise of low-abundance gene detection was reduced. An increased percentage of 5' end adenine and alternative splicing (AS) events in short fragments (< 400 bp) were discovered by AG-seq. Further, the profiles and characterizations of cf-RNA in spent cleavage medium (SCM) and spent blastocyst medium (SBM) indicated that 4-mer end motifs of cf-RNA fragments could remarkably differentiate different embryo development stages. Significance: This study established an efficient and low-cost SEM amplification and library preparation method. Moreover, we successfully described the characteristics of SEM cf-RNA of the preimplantation embryo by using AG-seq, including abundance features and fragment lengths. AG-seq facilitates the study of cf-RNA as a noninvasive embryo screening biomarker and opens up potential clinical utilities of trace samples.
Keywords: cell-free RNA, agarose, spent embryo medium, RNA sequencing, non-invasive detection
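As an illustration of the 4-mer end-motif analysis mentioned above, the sketch below counts the first four bases of each fragment and normalizes them into a 256-dimensional frequency profile that could be compared between media (e.g., SCM vs SBM); reading fragments from a FASTA file is an assumed input format.

```python
# Minimal sketch: build a 4-mer end-motif frequency profile from cf-RNA
# fragment sequences. The FASTA input path is an illustrative assumption.
from collections import Counter
from itertools import product

def end_motif_profile(fragments):
    counts = Counter(f[:4].upper() for f in fragments if len(f) >= 4)
    total = sum(counts.values()) or 1
    # fixed ordering over all 256 possible 4-mers so profiles are comparable
    return {m: counts.get(m, 0) / total
            for m in ("".join(p) for p in product("ACGT", repeat=4))}

def read_fasta(path):
    seq = []
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                    seq = []
            else:
                seq.append(line.strip())
    if seq:
        yield "".join(seq)

profile = end_motif_profile(read_fasta("scm_fragments.fasta"))  # assumed input
```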
Procedia PDF Downloads 90
4459 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization
Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang
Abstract:
Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided. When model mismatches are significant, the performance of the real-time optimization will be impaired and hence the expected energy saving will be reduced. In this paper, the model mismatches for a chiller plant in real-time optimization are considered. In the real-time optimization of a chiller plant, a simplified semi-physical or grey-box model of the chiller is typically used, which has to be identified using available operation data. To overcome the model mismatches associated with the chiller model, a hybrid Genetic Algorithms (HGAs) method is used for online real-time training of the chiller model. HGAs combine the Genetic Algorithms (GAs) method (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional trial-and-error process of GAs. The identification of model parameters is formulated as an optimization problem whose objective function is the least-square error between the output of the model and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It has been shown that the proposed approach is able to provide reliability in decision making, enhance the robustness of the real-time optimization strategy, and improve energy performance.
Keywords: energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning
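Below is a minimal sketch of the hybrid scheme under stated assumptions: a small genetic algorithm searches the parameter space of an illustrative two-parameter chiller model globally, and the best candidate is then refined with a local optimizer on the least-square error.

```python
# Minimal hybrid-GA sketch: GA global search over model parameters, then local
# refinement of the best candidate. The two-parameter "chiller model" and all
# GA settings are illustrative assumptions, not the paper's actual model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def model(params, load):                       # assumed grey-box chiller model
    a, b = params
    return a * load + b * load**2              # e.g., power vs. cooling load

def lse(params, load, measured):               # least-square error objective
    return np.sum((model(params, load) - measured) ** 2)

def hybrid_ga(load, measured, pop=30, gens=40, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):                      # global GA phase
        fit = np.array([lse(p, load, measured) for p in P])
        parents = P[np.argsort(fit)][: pop // 2]
        children = parents + rng.normal(0, 0.1, size=parents.shape)  # mutation
        P = np.vstack([parents, children])
    best = P[np.argmin([lse(p, load, measured) for p in P])]
    return minimize(lse, best, args=(load, measured)).x   # local refinement

load = np.linspace(0.2, 1.0, 50)
measured = model([1.4, 0.6], load) + rng.normal(0, 0.01, 50)
print(hybrid_ga(load, measured))               # recovers ~[1.4, 0.6]
```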
Procedia PDF Downloads 415
4458 Preparation of Wireless Networks and Security; Challenges in Efficient Accession of Encrypted Data in Healthcare
Authors: M. Zayoud, S. Oueida, S. Ionescu, P. AbiChar
Abstract:
Background: Wireless sensor networks encompass diversified information technology tools and are widely applied in a range of domains, including military surveillance, weather forecasting, and earthquake forecasting. Although stronger foundations are continually being developed for wireless sensor networks, security issues usually emerge during practical application. Thus, essential technological tools must be assessed for the secure aggregation of data. Moreover, such practices have to be incorporated into healthcare practice so as to serve the mutual interest of all parties. Objective: The aggregation of encrypted data has been assessed through a homomorphic stream cipher to assure its effectiveness along with providing optimal solutions to the field of healthcare. Methods: An experimental design was employed, which utilized a newly developed cipher along with CPU-constrained devices. Modular additions were also employed to evaluate the nature of the aggregated data. The processes of the homomorphic stream cipher were demonstrated through different sensors and modular additions. Results: The homomorphic stream cipher was shown to be a simple and secure process that allows efficient aggregation of encrypted data. In addition, its application has paved the way for the improvement of healthcare practices. Statistical values can be easily computed through aggregation on the basis of the selected cipher; the mean, variance, and standard deviation of sensed data were also computed with the selected tool. Conclusion: It can be concluded that the homomorphic stream cipher can be an ideal tool for the appropriate aggregation of data and can also provide effective solutions to the healthcare sector.
Keywords: aggregation, cipher, homomorphic stream, encryption
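The abstract names the cipher only generically; the sketch below shows a common additively homomorphic stream-cipher construction (in the style of Castelluccia et al.) consistent with the modular additions described: each sensor adds a keystream value modulo M, the aggregator sums ciphertexts without decrypting, and the sink subtracts the summed keystream. The HMAC-based key derivation is an illustrative choice.

```python
# Minimal sketch of an additively homomorphic stream cipher: encryption is
# modular addition of a per-epoch keystream, so ciphertexts can be summed by
# an untrusted aggregator. Keys, nonce scheme, and modulus are assumptions.
import hmac, hashlib

M = 2**32                                        # modulus > max possible sum

def keystream(key: bytes, nonce: bytes) -> int:
    return int.from_bytes(hmac.new(key, nonce, hashlib.sha256).digest()[:4],
                          "big") % M

def encrypt(reading: int, key: bytes, nonce: bytes) -> int:
    return (reading + keystream(key, nonce)) % M     # modular addition

def aggregate(ciphertexts) -> int:
    return sum(ciphertexts) % M                      # done on encrypted data

def decrypt_sum(agg: int, keys, nonce: bytes) -> int:
    return (agg - sum(keystream(k, nonce) for k in keys)) % M

keys = [b"sensor-1-key", b"sensor-2-key", b"sensor-3-key"]
nonce = b"epoch-42"
readings = [70, 82, 65]                              # e.g., heart-rate samples
cts = [encrypt(r, k, nonce) for r, k in zip(readings, keys)]
assert decrypt_sum(aggregate(cts), keys, nonce) == sum(readings)
```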
Procedia PDF Downloads 259