Search results for: edge computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1791

291 Uniform and Controlled Cooling of a Steel Block by Multiple Jet Impingement and Airflow

Authors: E. K. K. Agyeman, P. Mousseau, A. Sarda, D. Edelin

Abstract:

During the cooling of hot metals by the circulation of water in canals formed by boring holes in the metal, the rapid phase change of the water due to the high initial temperature of the metal leads to a non-homogeneous distribution of the phases within the canals. The liquid phase dominates towards the entrance of the canal while the gaseous phase dominates towards the exit. As a result of the different thermal properties of both phases, the metal is not uniformly cooled. This poses a problem during the cooling of moulds, where a uniform temperature distribution is needed in order to ensure the integrity of the part being formed. In this study, the simultaneous use of multiple water jets and an airflow for the uniform and controlled cooling of a steel block is investigated. A circular hole is bored at the centre of the steel block along its length and a perforated steel pipe is inserted along the central axis of the hole. Water jets that impact the internal surface of the steel block are generated from the perforations in the steel pipe when the water within it is put under pressure. These jets are oriented in the opposite direction to that of gravity. An intermittent airflow is imposed in the annular space between the steel pipe and the surface of the hole bored in the steel block. The evolution of the temperature of the external surface of the block with respect to time is measured with the help of thermocouples and an infrared camera. Due to the high initial temperature of the steel block (350 °C), the water changes phase when it impacts the internal surface of the block, which leads to high heat fluxes. The strategy used to control the cooling speed of the block is the intermittent impingement of its internal surface by the jets. The intervals of impingement and of non-impingement are varied in order to achieve the desired result. An airflow is used during the non-impingement periods as an additional regulator of the cooling speed and to improve the temperature homogeneity of the impinged surface. After testing different jet positions, jet speeds and impingement intervals, it is observed that the external surface of the steel block has a uniform temperature distribution along its length. However, the temperature distribution along its width is not uniform, with the maximum temperature difference occurring between the centre of the block and its edge. Changing the positions of the jets has no significant effect on the temperature distribution on the external surface of the steel block. It is also observed that reducing the jet impingement interval and increasing the non-impingement interval slows down the cooling of the block and improves the temperature homogeneity of its external surface, while increasing the duration of jet impingement speeds up the cooling process.
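
The control strategy described above, alternating impingement and airflow intervals, can be pictured with a minimal lumped-capacitance sketch. The block mass, cooled area, heat-transfer coefficients and interval durations below are illustrative assumptions, not values from the study.

```python
# Minimal lumped-capacitance sketch of intermittent jet-impingement cooling.
# All numbers (mass, area, heat-transfer coefficients, intervals) are
# illustrative assumptions, not values measured in the study.

m, cp, A = 20.0, 490.0, 0.06        # block mass [kg], steel cp [J/kg.K], cooled area [m^2]
T, T_water, T_air = 350.0, 20.0, 25.0
h_jet, h_air = 5000.0, 60.0         # effective coefficients during impingement / airflow [W/m^2.K]
t_on, t_off, dt = 5.0, 15.0, 0.1    # impingement and airflow intervals [s], time step [s]

t = 0.0
while T > 60.0:
    in_jet_phase = (t % (t_on + t_off)) < t_on
    h, T_inf = (h_jet, T_water) if in_jet_phase else (h_air, T_air)
    T += -h * A * (T - T_inf) / (m * cp) * dt    # dT/dt = -h*A*(T - T_inf)/(m*cp)
    t += dt

print(f"Cooled to 60 degC in {t/60:.1f} min with duty cycle {t_on}/{t_on + t_off} s")
```

Lengthening the non-impingement interval in this toy model slows the cooling in the same qualitative way the abstract reports.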

Keywords: cooling speed, homogeneous cooling, jet impingement, phase change

Procedia PDF Downloads 126
290 Exploring the Concept of Fashion Waste: Hanging by a Thread

Authors: Timothy Adam Boleratzky

Abstract:

The goal of this transformative endeavour lies in the repurposing of textile scraps, heralding a renaissance in the creation of wearable art. Through a judicious fusion of Life Cycle Assessment (LCA) methodologies and cutting-edge techniques, this research embarks upon a voyage of exploration, unraveling the intricate tapestry of environmental implications woven into the fabric of textile waste. Delving deep into the annals of empirical evidence and scholarly discourse, the study not only elucidates the urgent imperative for waste reduction strategies but also unveils the transformative potential inherent in embracing circular economy principles within the hallowed halls of fashion. As the research unfurls its sails, guided by the compass of sustainability, it traverses uncharted territories, charting a course toward a more enlightened and responsible fashion ecosystem. The canvas upon which this journey unfolds is richly adorned with insights gleaned from the crucible of experimentation, laying bare the myriad pathways toward waste minimisation and resource optimisation. From the adoption of recycling strategies to the cultivation of eco-friendly production techniques, the research endeavours to sculpt a blueprint for a more sustainable future, one stitch at a time. In this unfolding narrative, the role of wearable art emerges as a potent catalyst for change, transcending the boundaries of conventional fashion to embrace a more holistic ethos of sustainability. Through the alchemy of creativity and craftsmanship, discarded textile scraps are imbued with new life, morphing into exquisite creations that serve as both a testament to human ingenuity and a rallying cry for environmental preservation. Each thread, each stitch, becomes a silent harbinger of change, weaving together a tapestry of hope in a world besieged by ecological uncertainty. As the research journey culminates, its echoes resonate far beyond the confines of academia, reverberating through the corridors of industry and beyond. In its wake, it leaves a legacy of empowerment and enlightenment, inspiring a generation of designers, entrepreneurs, and consumers to embrace a more sustainable vision of fashion. For in the intricate interplay of threads and textiles lies the promise of a brighter, more resilient future, where beauty coexists harmoniously with responsibility and where fashion becomes not merely an expression of style but a celebration of sustainability.

Keywords: fabric-manipulation, sustainability, textiles, waste, wearable-art

Procedia PDF Downloads 47
289 The Evolution of the Israel Defence Forces’ Information Operations: A Case Study of the Israel Defence Forces' Activities in the Information Domain 2006–2014

Authors: Teemu Saressalo

Abstract:

This article examines the evolution of the Israel Defence Forces’ information operation activities during an eight-year timespan from the 2006 war with Hezbollah to more recent operations such as Pillar of Defence and Protective Edge. To this end, the case study will show a change in the Israel Defence Forces’ activities in the information domain. In the 2006 war with Hezbollah in Lebanon, Israel inflicted enormous damage on the Lebanese infrastructure, leaving more than 1,200 people dead and 4,400 injured. Casualties among Hezbollah, Israel’s main adversary, were estimated to range from 250 to 700 fighters. Damage to the Lebanese infrastructure was estimated at over USD 2.5bn, with almost 2,000 houses and buildings damaged and destroyed. Even this amount of destruction did not force Hezbollah to yield and while both sides were claiming victory in the war, Israel paid a heavier price in political backlashes and loss of reputation, mainly due to failures in the media and the way in which the war was portrayed and perceived in Israel and abroad. Much of this can be credited to Hezbollah’s efficient use of the media, and Israel’s failure to do so. Israel managed the next conflict it was engaged in completely differently – it had learnt its lessons and built up new ways to counter its adversary’s propaganda and media operations. In Operation Cast Lead at the turn of 2009, Hamas, Israel’s adversary and Gaza’s dominating faction, was not able to utilize the media in the same way that Hezbollah had. By creating a virtual and physical barrier around the Gaza Strip, Israel almost totally denied its adversary access to the worldwide media, and by restricting the movement of journalists in the area, Israel could let its voice be heard above all. The operation Cast Lead began with a deception operation, which caught Hamas totally off guard. The 21-day campaign left the Gaza Strip devastated, but did not cause as much protest in Israel during the operation as the 2006 war did, mainly due to almost total Israeli dominance in the information dimension. The most important outcome from the Israeli perspective was the fact that Operation Cast Lead was assessed to be a success and the operation enjoyed domestic support along with support from many western nations, which had condemned Israeli actions in the 2006 war. Later conflicts have shown the same tendency towards virtually total dominance in the information domain, which has had an impact on target audiences across the world. Thus, it is clear that well-planned and conducted information operations are able to shape public opinion and influence decision-makers, although Israel might have been outpaced by its rivals.

Keywords: Hamas, Hezbollah, information operations, Israel Defence Forces

Procedia PDF Downloads 241
288 Voting Representation in Social Networks Using Rough Set Techniques

Authors: Yasser F. Hassan

Abstract:

Social networking involves the use of online platforms or websites that enable people to communicate, usually for a social purpose, through a variety of web-based services that offer opportunities for interaction over the internet, e.g. via e-mail and instant messaging. This work analyzes the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model where elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person – one decision – one vote), a new voting system is designed in which agents with opposed preferences are endowed with a given number of votes to distribute freely among a set of issues. The paper uses ideas from machine learning, artificial intelligence and soft computing to provide a model of the development of voting-system response in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior.
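
The cumulative-voting idea above, agents freely distributing a fixed budget of votes over several issues, can be illustrated with a toy agent-based sketch. The random preference model and the proportional allocation rule are assumptions made for illustration only; they are not the rough-set model of the paper.

```python
import random

# Toy cumulative-voting simulation: each agent holds a fixed vote budget and
# distributes it freely over the issues in proportion to its preferences.
# The preference model and allocation rule are illustrative assumptions.
random.seed(0)
N_AGENTS, N_ISSUES, BUDGET = 100, 5, 10

def random_preferences(n_issues):
    w = [random.random() for _ in range(n_issues)]
    s = sum(w)
    return [x / s for x in w]

tally = [0] * N_ISSUES
for _ in range(N_AGENTS):
    prefs = random_preferences(N_ISSUES)
    # allocate the budget proportionally (rounded down), then give any
    # leftover votes to the agent's most preferred issue
    votes = [int(BUDGET * p) for p in prefs]
    votes[prefs.index(max(prefs))] += BUDGET - sum(votes)
    tally = [t + v for t, v in zip(tally, votes)]

print("votes per issue:", tally)
```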

Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices

Procedia PDF Downloads 395
287 A Comparison between TM: TM Co Doped and TM: RE Co Doped ZnO Based Advanced Materials for Spintronics Applications; Structural, Optical and Magnetic Property Analysis

Authors: V. V. Srinivasu, Jayashree Das

Abstract:

Owing to their industrial and technological importance, transition metal (TM) doped ZnO materials have been widely chosen for many practical applications in electronics and optoelectronics. In addition, although still a controversial issue, the reported room-temperature ferromagnetism in transition metal doped ZnO has added to its importance in current semiconductor research for prospective applications in spintronics. Anticipating non-controversial and improved optical and magnetic properties, we adopted a co-doping method to synthesise polycrystalline Mn:TM (Fe, Ni) and Mn:RE (Gd, Sm) co-doped ZnO samples by a solid-state sintering route, with compositions Zn1-x(Mn:Fe/Ni)xO and Zn1-x(Mn:Gd/Sm)xO, sintered at two different temperatures. The structural, compositional and optical changes induced in ZnO by co-doping and sintering were investigated by XRD, FTIR, UV, PL and ESR studies. X-ray peak profile analysis (XPPA) and Williamson-Hall analysis show changes in the values of stress, strain, FWHM and crystallite size in both co-doped systems. FTIR spectra also show the effect of both types of co-doping on the stretching and bending bonds of the ZnO compound. The UV-Vis study demonstrates changes in the absorption band edge as well as a significant change in the optical band gap due to exchange interactions inside the system after co-doping. PL studies reveal the effect of co-doping on the UV and visible emission bands in the co-doped systems at the two sintering temperatures, indicating the existence of defects in the form of oxygen vacancies. While the TM:TM co-doped ZnO samples exhibit ferromagnetism at room temperature, the TM:RE co-doped samples show paramagnetic behaviour. The observed magnetic behaviours are supported by results from the electron spin resonance (ESR) study, which shows sharp resonance peaks with considerable line width (∆H) and g values greater than 2. Such values are usually attributed to an internal field inside the system, which shifts the resonance field towards lower fields. The g values in this range are assigned to unpaired electrons trapped in oxygen vacancies. The TM:TM co-doped ZnO samples exhibit low-field absorption peaks in their ESR spectra, which is a new and interesting observation. We emphasize that the observations reported in this paper may be considered for improved future applications of ZnO-based materials.
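
The Williamson-Hall analysis mentioned above separates crystallite-size and microstrain broadening with the relation β·cosθ = Kλ/D + 4ε·sinθ. A minimal fitting sketch is shown below; the peak positions and FWHM values are placeholders, not data from the study.

```python
import numpy as np

# Minimal Williamson-Hall sketch: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
# Peak angles and FWHM values below are placeholders, not data from the study.
wavelength = 1.5406e-10           # Cu K-alpha [m]
K = 0.9                           # Scherrer constant
two_theta_deg = np.array([31.8, 34.4, 36.3, 47.5, 56.6])   # example ZnO reflections
fwhm_deg = np.array([0.20, 0.21, 0.22, 0.26, 0.30])        # placeholder, instrument-corrected FWHM

theta = np.radians(two_theta_deg) / 2
beta = np.radians(fwhm_deg)
x = 4 * np.sin(theta)             # strain term
y = beta * np.cos(theta)          # broadening term

slope, intercept = np.polyfit(x, y, 1)   # slope = microstrain, intercept = K*lambda/D
print(f"microstrain ~ {slope:.2e}, crystallite size ~ {K * wavelength / intercept * 1e9:.1f} nm")
```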

Keywords: co-doping, electron spin resonance, microwave absorption, spintronics

Procedia PDF Downloads 339
286 Spatial Architecture Impact in Mediation Open Circuit Voltage Control of Quantum Solar Cell Recovery Systems

Authors: Moustafa Osman Mohammed

Abstract:

Photocurrent generation underpins ultra-high-efficiency solar cells based on self-assembled quantum dot (QD) nanostructures. Nanocrystal quantum dots (QDs) can greatly enhance solar cell efficiencies through the use of quantum confinement to tune absorbance across the solar spectrum and to enable multi-exciton generation. Based on theoretical predictions, QDs have the potential to raise system efficiency above approximately 50% under regular electron excitation intensities. In solar cell devices, an intermediate band is formed by the electron levels of the quantum dot system. The spatial architecture explores how a solar cell can be integrated to produce not only a high open-circuit voltage (> 1.7 eV) but also large short-circuit currents due to the efficient absorption of sub-bandgap photons. In the proposed QD system, the structure allows the barrier material to absorb wavelengths below 700 nm, while multi-photon processes in the quantum dots absorb wavelengths up to 2 µm. The electronic model is flexible enough to represent the atomic and molecular structure and the material properties, so that the energy bandgaps of the barrier and the quantum dots can be tuned to their respective optimum values. In terms of energy conversion, the efficiency and cost of the structure are evaluated against a pair of multi-junction solar cells in rigorous tests that quantify the errors. The milestone toward achieving the claimed high-efficiency solar cell device is controlling the energy bandgap offset between the barrier material and the quantum dot system within the design limits. Despite this remarkable potential for high photocurrent generation, the achievable open-circuit voltage (Voc) is fundamentally limited by non-radiative recombination processes in QD solar cells. The voltage recovery system is compared theoretically with the experimental Voc variation, with the upper limit obtained from a one-diode model of cells with different bandgaps (Eg), as classified in the proposed spatial architecture. The opportunity for improving Voc is estimated at approximately 1 V or more by using smaller QDs in the QD solar cell recovery system, compared with other micro- and nano-scale operating states.
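
The one-diode estimate of the open-circuit voltage referred to above follows Voc ≈ (n·kT/q)·ln(I_L/I_0 + 1). The sketch below evaluates this expression for assumed photocurrent and saturation-current values, purely to illustrate how Voc scales with the dark current (and hence with the effective bandgap); it is not the paper's model.

```python
import math

# One-diode open-circuit voltage sketch: Voc = n*k*T/q * ln(I_L/I_0 + 1).
# The photocurrent and saturation currents below are illustrative assumptions.
k, q, T, n = 1.380649e-23, 1.602176634e-19, 300.0, 1.0

def voc(i_photo, i_sat):
    return n * k * T / q * math.log(i_photo / i_sat + 1.0)

i_photo = 30e-3                        # A, assumed photogenerated current
for i_sat in (1e-15, 1e-12, 1e-9):     # lower I0 (wider effective bandgap) -> higher Voc
    print(f"I0 = {i_sat:.0e} A  ->  Voc = {voc(i_photo, i_sat):.2f} V")
```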

Keywords: nanotechnology, photovoltaic solar cell, quantum systems, renewable energy, environmental modeling

Procedia PDF Downloads 158
285 Creativity and Innovation in Postgraduate Supervision

Authors: Rajendra Chetty

Abstract:

The paper aims to address two aspects of postgraduate studies: interdisciplinary research and creative models of supervision. Interdisciplinary research can be viewed as a key imperative to solve complex problems. While excellent research requires a context of disciplinary strength, the cutting edge is often found at the intersection between disciplines. Interdisciplinary research foregrounds a team approach and information, methodologies, designs, and theories from different disciplines are integrated to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline. Our aim should also be to generate research that transcends the original disciplines i.e. transdisciplinary research. Complexity is characteristic of the knowledge economy, hence, postgraduate research and engaged scholarship should be viewed by universities as primary vehicles through which knowledge can be generated to have a meaningful impact on society. There are far too many ‘ordinary’ studies that fall into the realm of credentialism and certification as opposed to significant studies that generate new knowledge and provide a trajectory for further academic discourse. Secondly, the paper will look at models of supervision that are different to the dominant ‘apprentice’ or individual approach. A reflective practitioner approach would be used to discuss a range of supervision models that resonate well with the principles of interdisciplinarity, growth in the postgraduate sector and a commitment to engaged scholarship. The global demand for postgraduate education has resulted in increased intake and new demands to limited supervision capacity at institutions. Team supervision lodged within large-scale research projects, working with a cohort of students within a research theme, the journal article route of doctoral studies and the professional PhD are some of the models that provide an alternative to the traditional approach. International cooperation should be encouraged in the production of high-impact research and institutions should be committed to stimulating international linkages which would result in co-supervision and mobility of postgraduate students and global significance of postgraduate research. International linkages are also valuable in increasing the capacity for supervision at new and developing universities. Innovative co-supervision and joint-degree options with global partners should be explored within strategic planning for innovative postgraduate programmes. Co-supervision of PhD students is probably the strongest driver (besides funding) for collaborative research as it provides the glue of shared interest, advantage and commitment between supervisors. The students’ field serves and informs the co-supervisors own research agendas and helps to shape over-arching research themes through shared research findings.

Keywords: interdisciplinarity, internationalisation, postgraduate, supervision

Procedia PDF Downloads 238
284 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation

Authors: Mohammad Abu-Shaira, Weishi Shi

Abstract:

Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension for online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM (Long Short-Term Memory Stream Cruise Control Method), a drift adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, and their performance is assessed across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drifts within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after significant degradation to model performance caused by drifts. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data. Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models to address concept drift.
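
The adaptation-as-a-service idea can be pictured as a detect–quantify–recalibrate loop wrapped around an incremental regressor. The sketch below uses a simple error-threshold detector and an SGD regressor as stand-ins; it is an illustration of the loop, not the LSTM-SCCM algorithm itself, and the drift point, threshold and window sizes are assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Sketch of a detect -> quantify -> recalibrate loop for online regression.
# The error-threshold detector and SGDRegressor are simple stand-ins; they are
# not the LSTM-SCCM components described in the paper.
rng = np.random.default_rng(0)
model = SGDRegressor(learning_rate="constant", eta0=0.01)
window, threshold = [], 2.0

def stream(n=2000):
    for t in range(n):
        x = rng.normal(size=3)
        w = np.array([1.0, -2.0, 0.5]) if t < 1000 else np.array([-1.5, 0.5, 2.0])  # abrupt drift at t=1000
        yield x, float(w @ x + rng.normal(scale=0.1))

for t, (x, y) in enumerate(stream()):
    x = x.reshape(1, -1)
    if t > 0:
        err = abs(float(model.predict(x)[0]) - y)
        window = (window + [err])[-50:]
        if len(window) == 50 and np.mean(window[-10:]) > threshold * np.mean(window[:40]):
            print(f"drift detected at t={t}, magnitude ~ {np.mean(window[-10:]):.2f}; recalibrating")
            model = SGDRegressor(learning_rate="constant", eta0=0.01)   # short-term recalibration
            window = []
    model.partial_fit(x, [y])       # incremental update on every example
```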

Keywords: automated drift detection and adaptation, concept drift, hyperparameters optimization, online and adaptive learning, regression

Procedia PDF Downloads 17
283 Chaotic Electronic System with Lambda Diode

Authors: George Mahalu

Abstract:

The Chua diode has been configured over time in various ways, using electronic structures such as operational amplifiers (OAs) or devices based on gases or semiconductors. When semiconductor devices are used, tunnel diodes (Esaki diodes) are most often considered and, more recently, transistorized configurations such as lambda diodes. The work proposed here models a lambda-diode-type configuration consisting of two junction field-effect transistors (JFETs). The original scheme is created in the MULTISIM electronic simulation environment and is analyzed in order to identify the conditions for the appearance of the evolutionary unpredictability specific to nonlinear dynamic systems with chaotic behavior. The deterministic chaotic oscillator is of the autonomous type, a fact that places it in the class of Chua-type oscillators, the only significant difference being the presence of a nonlinear device such as the structure mentioned above. The chaotic behavior is identified both by means of strange-attractor-type trajectories visible during the simulation and by highlighting the hypersensitivity of the system to small variations of one of the input parameters. The results obtained through simulation and the conclusions drawn are useful for further research into ways of implementing such electronic solutions in theoretical and practical applications related to modern small-signal amplification structures, to systems for encoding and decoding messages through various modern means of communication, as well as to new structures that can be imagined both in modern neural networks and in the physical implementation of requirements imposed by current research, with the aim of obtaining practically usable solutions in quantum computing and quantum computers.
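
The chaos indicators mentioned above, strange attractors and hypersensitivity to small parameter or initial-condition changes, can be reproduced with any Chua-type model. The sketch below integrates the classic dimensionless Chua equations with a piecewise-linear nonlinearity standing in for the lambda-diode characteristic; the parameter values are the standard double-scroll set and are an assumption for illustration, not the MULTISIM circuit of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless Chua-type oscillator; the piecewise-linear nonlinearity is used
# here as a stand-in for the lambda-diode characteristic (illustrative assumption).
alpha, beta, m0, m1 = 15.6, 28.0, -1.143, -0.714

def chua(t, s):
    x, y, z = s
    fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))   # piecewise-linear "diode"
    return [alpha * (y - x - fx), x - y + z, -beta * y]

t_span = (0.0, 100.0)
t_eval = np.linspace(*t_span, 20000)
sol_a = solve_ivp(chua, t_span, [0.1, 0.0, 0.0], t_eval=t_eval, rtol=1e-8)
sol_b = solve_ivp(chua, t_span, [0.1 + 1e-9, 0.0, 0.0], t_eval=t_eval, rtol=1e-8)

# A 1e-9 difference in the initial condition grows to order one: the
# hypersensitivity used in the paper to identify chaotic behaviour.
print("max separation:", np.max(np.abs(sol_a.y[0] - sol_b.y[0])))
```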

Keywords: chaos, lambda diode, strange attractor, nonlinear system

Procedia PDF Downloads 89
282 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution

Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques

Abstract:

The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. The radiotherapy planning of patients is performed by complex software packages focused on the analysis of tumor regions, the protection of organs at risk (OARs) and the calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally do not have the financial means to acquire radiotherapy planning workstations, resulting in waiting queues for the treatment of new patients. The SIPRAD project is composed of a set of integrated and interoperable software modules that are able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present a computationally feasible image processing technique that is able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices and is extended to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. In addition, comparing the SIPRAD-Body technique with existing solutions, a contour similarity of at least 78% is reached. Four criteria are used for this comparison: contour area, contour length, the difference between the mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP). The exams came from patients of different ethnicities, ages, tumor severities and body regions. Even in the case of services that already have workstations, it is possible to run SIPRAD alongside them on PCs, because the two systems interoperate through the DICOM protocol, which increases workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible, because of its degree of similarity, both for new radiotherapy planning services and for existing ones.
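
The contour extraction and the similarity criteria named above (a Canny-based body boundary and, for evaluation, the Jaccard index, contour area and contour length) can be sketched with OpenCV and NumPy as follows. The file names are placeholders, and the pipeline is deliberately simplified relative to SIPRAD-Body.

```python
import cv2
import numpy as np

# Sketch of Canny-based body-contour extraction on a grayscale CT slice and a
# Jaccard comparison against a manual mask. File names are placeholders and the
# processing is simplified relative to the SIPRAD-Body pipeline.
slice_img = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)
manual_mask = cv2.imread("manual_mask.png", cv2.IMREAD_GRAYSCALE) > 0

edges = cv2.Canny(slice_img, 30, 90)                       # body silhouette edges
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
body = max(contours, key=cv2.contourArea)                   # keep the largest (outer) contour
auto_mask = np.zeros_like(slice_img)
cv2.drawContours(auto_mask, [body], -1, 255, thickness=cv2.FILLED)
auto_mask = auto_mask > 0

jaccard = np.logical_and(auto_mask, manual_mask).sum() / np.logical_or(auto_mask, manual_mask).sum()
print(f"area={cv2.contourArea(body):.0f} px, length={cv2.arcLength(body, True):.0f} px, Jaccard={jaccard:.2f}")
```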

Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)

Procedia PDF Downloads 298
281 Detection, Isolation, and Raman Spectroscopic Characterization of Acute and Chronic Staphylococcus aureus Infection in an Endothelial Cell Culture Model

Authors: Astrid Tannert, Anuradha Ramoji, Christina Ebert, Frederike Gladigau, Lorena Tuchscherr, Jürgen Popp, Ute Neugebauer

Abstract:

Staphylococcus aureus is a facultative intracellular pathogen, which by entering host cells may evade immunologic host response as well as antimicrobial treatment. In that way, S. aureus can cause persistent intracellular infections which are difficult to treat. Depending on the strain, S. aureus may persist at different intracellular locations like the phagolysosome. The first barrier invading pathogens from the blood stream that they have to cross are the endothelial cells lining the inner surface of blood and lymphatic vessels. Upon proceeding from an acute to a chronic infection, intracellular pathogens undergo certain biochemical and structural changes including a deceleration of metabolic processes to adopt for long-term intracellular survival and the development of a special phenotype designated as small colony variant. In this study, the endothelial cell line Ea.hy 926 was used as a model for acute and chronic S. aureus infection. To this end, Ea.hy 926 cells were cultured on QIAscout™ Microraft Arrays, a special graded cell culture substrate that contains around 12,000 microrafts of 200 µm edge length. After attachment to the substrate, the endothelial cells were infected with GFP-expressing S. aureus for 3 weeks. The acute infection and the development of persistent bacteria was followed by confocal laser scanning microscopy, scanning the whole Microraft Array for the presence and for detailed determination of the intracellular location of fluorescent intracellular bacteria every second day. After three weeks of infection representative microrafts containing infected cells, cells with protruded infections and cells that did never show any infection were isolated and fixed for Raman micro-spectroscopic investigation. For comparison, also microrafts with acute infection were isolated. The acquired Raman spectra are correlated with the fluorescence microscopic images to give hints about a) the molecular alterations in endothelial cells during acute and chronic infection compared to non-infected cells, and b) metabolic and structural changes within the pathogen when entering a mode of persistence within host cells. We thank Dr. Ruth Kläver from QIAGEN GmbH for her support regarding QIAscout technology. Financial support by the BMBF via the CSCC (FKZ 01EO1502) and from the DFG via the Jena Biophotonic and Imaging Laboratory (JBIL, FKZ PO 633/29-1, BA 1601/10-1) is highly acknowledged.

Keywords: correlative image analysis, intracellular infection, pathogen-host adaption, Raman micro-spectroscopy

Procedia PDF Downloads 181
280 Revolutionizing Healthcare Facility Maintenance: A Groundbreaking AI, BIM, and IoT Integration Framework

Authors: Mina Sadat Orooje, Mohammad Mehdi Latifi, Behnam Fereydooni Eftekhari

Abstract:

The integration of cutting-edge Internet of Things (IoT) technologies with advanced Artificial Intelligence (AI) systems is revolutionizing healthcare facility management. However, the current landscape of hospital building maintenance suffers from slow, repetitive, and disjointed processes, leading to significant financial, resource, and time losses. Additionally, the potential of Building Information Modeling (BIM) in facility maintenance is hindered by a lack of data within digital models of built environments, necessitating a more streamlined data collection process. This paper presents a robust framework that harmonizes AI with BIM-IoT technology to elevate healthcare Facility Maintenance Management (FMM) and address these pressing challenges. The methodology begins with a thorough literature review and requirements analysis, providing insights into existing technological landscapes and associated obstacles. Extensive data collection and analysis efforts follow to deepen understanding of hospital infrastructure and maintenance records. Critical AI algorithms are identified to address predictive maintenance, anomaly detection, and optimization needs alongside integration strategies for BIM and IoT technologies, enabling real-time data collection and analysis. The framework outlines protocols for data processing, analysis, and decision-making. A prototype implementation is executed to showcase the framework's functionality, followed by a rigorous validation process to evaluate its efficacy and gather user feedback. Refinement and optimization steps are then undertaken based on evaluation outcomes. Emphasis is placed on the scalability of the framework in real-world scenarios and its potential applications across diverse healthcare facility contexts. Finally, the findings are meticulously documented and shared within the healthcare and facility management communities. This framework aims to significantly boost maintenance efficiency, cut costs, provide decision support, enable real-time monitoring, offer data-driven insights, and ultimately enhance patient safety and satisfaction. By tackling current challenges in healthcare facility maintenance management it paves the way for the adoption of smarter and more efficient maintenance practices in healthcare facilities.
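
One of the AI components named above, anomaly detection on IoT sensor streams for predictive maintenance, can be sketched with a standard isolation-forest detector. The simulated sensor features and the expected anomaly rate below are placeholders, not part of the proposed framework.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Sketch of anomaly detection on IoT maintenance data (e.g. an air-handling unit).
# The simulated features (vibration, temperature, power draw) and the 2% expected
# anomaly rate are illustrative placeholders, not values from the framework.
rng = np.random.default_rng(1)
normal = rng.normal(loc=[2.0, 45.0, 5.0], scale=[0.2, 1.5, 0.4], size=(500, 3))
faulty = rng.normal(loc=[4.5, 60.0, 7.5], scale=[0.3, 2.0, 0.5], size=(10, 3))
readings = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)
flags = detector.predict(readings)            # -1 = anomaly, 1 = normal
print(f"{np.sum(flags == -1)} readings flagged for maintenance follow-up")
```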

Keywords: artificial intelligence, building information modeling, healthcare facility maintenance, internet of things integration, maintenance efficiency

Procedia PDF Downloads 62
279 Peculiarities of Internal Friction and Shear Modulus in 60Co γ-Rays Irradiated Monocrystalline SiGe Alloys

Authors: I. Kurashvili, G. Darsavelidze, T. Kimeridze, G. Chubinidze, I. Tabatadze

Abstract:

At present, a number of modern semiconductor devices based on SiGe alloys have been created in which the latest achievements of high technology are used. These devices might cause significant changes in networking, computing, and space technology. In the near future, new materials based on SiGe will be able to challenge the A3B5 and Si technologies and firmly establish themselves in medium-frequency electronics. Effective realization of these prospects requires the ability to predict and control the structural state and the dynamic physical-mechanical properties of new SiGe materials. Based on these circumstances, a complex investigation of structural defects and structure-sensitive dynamic mechanical characteristics of SiGe alloys under different external impacts (deformation, radiation, thermal cycling) acquires great importance. The temperature and amplitude dependences of the internal friction (IF) and shear modulus of monocrystalline boron-doped Si1-xGex (x≤0.05) alloys grown by the Czochralski technique are studied in the initial and 60Co gamma-irradiated states. In the initial samples, a set of relaxation processes of dislocation origin and the accompanying modulus defects are revealed in the temperature interval of 400-800 ⁰C. It is shown that after gamma-irradiation the intensity of the relaxation internal friction in the vicinity of 280 ⁰C increases and, simultaneously, the activation parameters of the high-temperature relaxation processes rise clearly. It is proposed that these changes in the dynamic mechanical characteristics might be caused by a decrease of the dislocation mobility in the Cottrell atmosphere enriched by radiation defects.

Keywords: internal friction, shear modulus, gamma-irradiation, SiGe alloys

Procedia PDF Downloads 144
278 Multiscale Hub: An Open-Source Framework for Practical Atomistic-To-Continuum Coupling

Authors: Masoud Safdari, Jacob Fish

Abstract:

Despite a vast amount of existing theoretical knowledge, the implementation of a universal multiscale modeling, analysis, and simulation software framework remains challenging. Existing multiscale software and solutions are often domain-specific, closed-source, and demand a high level of experience and skill in both multiscale analysis and programming. Furthermore, the tools currently available for Atomistic-to-Continuum (AtC) multiscaling are developed under assumptions such as user access to high-performance computing facilities. These issues, along with many other challenges, have reduced the adoption of multiscale methods in academia and especially in industry. In the current work, we introduce Multiscale Hub (MsHub), an effort towards making AtC more accessible through cloud services. As a joint effort between academia and industry, MsHub provides a universal web-enabled framework for practical multiscaling. Developed on top of the widely used scientific programming language Python, the package currently provides an open-source, comprehensive, easy-to-use framework for AtC coupling. MsHub offers an easy-to-use interface to prominent molecular dynamics and multiphysics continuum mechanics packages such as LAMMPS and MFEM (a free, lightweight, scalable C++ library for finite element methods). In this work, we first report on the design philosophy of MsHub and on the challenges and issues faced in its implementation. MsHub takes advantage of a comprehensive set of tools and algorithms developed for AtC that can be used for a variety of governing physics. We then briefly report key AtC algorithms implemented in MsHub. Finally, we conclude with a few examples illustrating the capabilities of the package and its future directions.

Keywords: atomistic, continuum, coupling, multiscale

Procedia PDF Downloads 178
277 Bismuth Telluride Topological Insulator: Physical Vapor Transport vs Molecular Beam Epitaxy

Authors: Omar Concepcion, Osvaldo De Melo, Arturo Escobosa

Abstract:

Topological insulator (TI) materials are insulating in the bulk and conducting at the surface. The unique electronic properties associated with these surface states make them strong candidates for exploring innovative quantum phenomena and for practical applications in quantum computing, spintronics and nanodevices. Many materials, including Bi₂Te₃, have been proposed as TIs and, in some cases, this has been demonstrated experimentally by angle-resolved photoemission spectroscopy (ARPES), scanning tunneling spectroscopy (STM) and/or magnetotransport measurements. A clean surface is necessary in order to make any of these measurements. Several techniques have been used to produce films and different kinds of nanostructures. Growth and characterization in situ is usually the best option, although cleaving the films can be an alternative way to obtain a suitable surface. In the present work, we report a comparison of Bi₂Te₃ grown by physical vapor transport (PVT) and molecular beam epitaxy (MBE). The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS) and ARPES. The Bi₂Te₃ samples grown by PVT were cleaved in ultra-high vacuum in order to obtain a surface free of contaminants. In both cases, XRD shows a c-axis orientation, and the pole diagrams prove the epitaxial relationship between film and substrate. The ARPES images show the linear dispersion characteristic of the surface states of TI materials. The samples grown by PVT, a relatively simple and cost-effective technique, show the same high quality and TI properties as those grown by MBE.

Keywords: Bismuth telluride, molecular beam epitaxy, physical vapor transport, topological insulator

Procedia PDF Downloads 193
276 Microwave Single Photon Source Using Landau-Zener Transitions

Authors: Siddhi Khaire, Samarth Hawaldar, Baladitya Suri

Abstract:

As efforts towards quantum communication advance, the need for single photon sources becomes pressing. Due to the extremely low energy of a single microwave photon, efforts to build single photon sources and detectors in the microwave range are relatively recent. We plan to use a Cooper pair box (CPB) that has a ‘sweet spot’ where the two energy levels have minimal separation. Moreover, these qubits have a fairly large anharmonicity, making them close to ideal two-level systems. If the external gate voltage of these qubits is varied rapidly while passing through the sweet spot, the qubit can be excited almost deterministically due to the Landau-Zener effect. The rapid change of the gate control voltage through the sweet spot induces a non-adiabatic population transfer from the ground to the excited state. The qubit eventually decays into the emission line, emitting a single photon. The advantage of this setup is that the qubit can be excited without any coherent microwave excitation, thereby effectively increasing the usable source efficiency due to the absence of control-pulse microwave photons. Since the probability of a Landau-Zener transition can be made close to unity by an appropriate design of parameters, this source behaves as an on-demand source of single microwave photons. The large anharmonicity of the CPB also ensures that only one excited state is involved in the transition and that multiple-photon output is highly improbable. Such a system has so far not been implemented and would find many applications in the areas of quantum optics, quantum computation and quantum communication.
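
The near-deterministic excitation relies on the Landau-Zener formula: for a two-level system H = (ε(t)/2)σz + Δσx swept through the avoided crossing, the probability of the non-adiabatic passage that leaves the qubit excited is P = exp(−2πΔ²/(ħ|dε/dt|)), which approaches unity for fast sweeps. The sketch below evaluates this expression; the coupling Δ and the sweep rates are assumed illustrative values, not device parameters from the paper.

```python
import math

# Landau-Zener sketch: H = (eps(t)/2)*sigma_z + Delta*sigma_x.
# A fast sweep through the avoided crossing leaves the qubit in the excited
# state with probability P = exp(-2*pi*Delta^2 / (hbar * |d(eps)/dt|)).
# Delta and the sweep rates are assumed illustrative values.
hbar = 1.054571817e-34
h = 6.62607015e-34
Delta = h * 5e6                      # coupling ~ 5 MHz, expressed in energy units [J]

def p_excite(sweep_rate_j_per_s):
    return math.exp(-2 * math.pi * Delta**2 / (hbar * sweep_rate_j_per_s))

for rate_ghz_per_ns in (0.001, 0.01, 0.1, 1.0):        # d(eps)/dt in frequency units
    rate = h * rate_ghz_per_ns * 1e9 / 1e-9            # convert GHz/ns to J/s
    print(f"sweep {rate_ghz_per_ns:>6} GHz/ns -> excitation probability {p_excite(rate):.3f}")
```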

Keywords: quantum computing, quantum communication, quantum optics, superconducting qubits, flux qubit, charge qubit, microwave single photon source, quantum information processing

Procedia PDF Downloads 101
275 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines

Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso

Abstract:

Advances in aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal road and roof boundaries is one such use, with applications in disaster risk reduction, mitigation and development. The study uses low-density LiDAR data and high-resolution aerial imagery in an object-oriented approach, applying a machine learning algorithm to minimize the constraints of feature extraction. Since classes must be separated into distinct regions of a multi-dimensional feature space, non-trivial computations for fitting the distributions were implemented to formulate the learned optimal hyperplane. Customized hybrid features were generated and then used to improve the classifier results. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. The methodology offers several advantages in terms of simplicity, applicability, and process transferability. The algorithm was tested at different random locations in Misamis Oriental province in the Philippines, demonstrating robust performance, with an overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital requirement for decision makers, urban planners and even the commercial sector in various assessment processes.
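
The object-oriented SVM step described above amounts to training a margin classifier on per-object feature vectors derived from the fused LiDAR and image data. A minimal stand-in with scikit-learn is shown below; the synthetic feature columns (height, spectral and shape attributes) and class labels are placeholders, not the study's hybrid features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for object-based SVM classification of roofs / roads / other.
# Each row is one image object with placeholder features, e.g.
# [mean nDSM height, mean NIR, NDVI, rectangularity]; labels 0=other, 1=roof, 2=road.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([6.0, 0.3, 0.1, 0.8], 0.5, size=(200, 4)),   # roofs: elevated, rectangular
    rng.normal([0.2, 0.4, 0.1, 0.3], 0.5, size=(200, 4)),   # roads: flat, elongated
    rng.normal([1.0, 0.6, 0.6, 0.4], 0.5, size=(200, 4)),   # vegetation / other
])
y = np.repeat([1, 2, 0], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale")).fit(X_tr, y_tr)
print(f"hold-out overall accuracy: {clf.score(X_te, y_te):.2%}")
```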

Keywords: feature extraction, machine learning, OBIA, remote sensing

Procedia PDF Downloads 363
274 Response of Caldeira De Tróia Saltmarsh to Sea Level Rise, Sado Estuary, Portugal

Authors: A. G. Cunha, M. Inácio, M. C. Freitas, C. Antunes, T. Silva, C. Andrade, V. Lopes

Abstract:

Saltmarshes are essential ecosystems from both an ecological and a biological point of view. Furthermore, they constitute an important social niche, providing valuable economic and protection functions. Thus, understanding their rates and patterns of sedimentation is critical for functional management and rehabilitation, especially under a sea level rise (SLR) scenario. The Sado estuary is located 40 km south of Lisbon. It is a bar-built estuary, separated from the sea by a large sand spit: the Tróia barrier. Caldeira de Tróia is located on the free edge of this barrier and encompasses a salt marsh of ca. 21,000 m². Sediment cores were collected in the high and low marshes and in the mudflat area of the north bank of Caldeira de Tróia. From the low marsh core, fifteen samples were chosen for ²¹⁰Pb and ¹³⁷Cs determination at the University of Geneva. The cores from the high marsh and the mudflat are still being analyzed. A sedimentation rate of 2.96 mm/year was derived from ²¹⁰Pb using the Constant Flux Constant Sedimentation model. The ¹³⁷Cs profile shows a peak in activity (1963) between 15.50 and 18.50 cm, giving a 3.1 mm/year sedimentation rate for the past 53 years. The adopted sea level rise scenario was based on a model with an initial SLR rate of 2.1 mm/year in 2000 and an acceleration of 0.08 mm/year². Based on the harmonic analysis of 2005 data from the Setubal-Tróia tide gauge, the tide model was estimated and used to build tidal tables for the period 2000-2016. With these tables, the average mean water levels were determined for the same time span. A digital terrain model was created from LiDAR scanning with 2 m horizontal resolution (APA-DGT, 2011) and validated with altimetric data obtained with DGPS-RTK. The response model calculates a new elevation for each pixel of the DTM for 2050 and 2100, based on the sedimentation rate specific to each environment. At this stage, theoretical values were chosen for the high marsh and the mudflat (respectively, equal to and double the low marsh rate of 2.92 mm/year). These values will be rectified once sedimentation rates are determined for the other environments. For both projections, the total surface of the marsh decreases: by 2% in 2050 and by 61% in 2100. Additionally, the high marsh coverage diminishes significantly, indicating a regression in terms of maturity.
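
Per pixel, the elevation response reduces to comparing cumulative accretion with cumulative sea-level rise. The sketch below assumes a quadratic form for the SLR curve built from the stated rate (2.1 mm/yr in 2000) and acceleration (0.08 mm/yr²), and uses the abstract's accretion rates; the exact functional form of the paper's model is an assumption here.

```python
# Sketch of the per-pixel elevation response: cumulative accretion minus
# cumulative sea-level rise. The quadratic SLR curve (2.1 mm/yr in 2000,
# 0.08 mm/yr^2 acceleration) is an assumed form; accretion rates follow the
# abstract (low marsh 2.92 mm/yr, high marsh equal to it, mudflat double it).
RATE_2000, ACCEL = 2.1, 0.08                  # mm/yr, mm/yr^2
ACCRETION = {"low marsh": 2.92, "high marsh": 2.92, "mudflat": 5.84}   # mm/yr

def cumulative_slr(year):
    dt = year - 2000
    return RATE_2000 * dt + 0.5 * ACCEL * dt**2           # mm above the 2000 level

for year in (2050, 2100):
    slr = cumulative_slr(year)
    print(f"{year}: cumulative SLR = {slr:.0f} mm")
    for env, rate in ACCRETION.items():
        change = rate * (year - 2000) - slr                # elevation change relative to sea level
        print(f"  {env:>10}: {change:+.0f} mm relative to local sea level")
```

With these assumptions the low and high marsh fall behind sea level in both projection years, consistent with the projected loss of marsh surface.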

Keywords: ¹³⁷Cs, ²¹⁰Pb, saltmarsh, sea level rise, response model

Procedia PDF Downloads 251
273 Computer-Aided Detection of Liver and Spleen from CT Scans using Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years, a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is the semi-automatic segmentation of the liver and spleen, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we seek to determine the region of interest by applying morphological operations to extract the liver and spleen. The second step consists of improving the quality of the image gradient. In this step, we propose a method for improving the image gradient to reduce the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a method for the semi-automatic segmentation of the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with the manual segmentation performed by an expert, and the experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented contours (liver and spleen) and the contours manually traced by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%. Spleen segmentation achieved similarly promising results, with a sensitivity of 95% and a specificity of 99%.
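
The gradient-smoothing plus marker-watershed idea and the sensitivity/specificity evaluation described above can be sketched with scikit-image. The file names, filter sizes and marker strategy below are simplified placeholders, not the full method of the paper.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, io, morphology, segmentation

# Simplified sketch of gradient smoothing + watershed segmentation and of the
# sensitivity/specificity evaluation. File names, filter sizes and the marker
# strategy are placeholders; the full pipeline in the paper is more elaborate.
ct = io.imread("abdominal_slice.png", as_gray=True)
manual = io.imread("expert_liver_mask.png", as_gray=True) > 0

smoothed = filters.median(ct, morphology.disk(3))           # spatial filter to tame the gradient
gradient = filters.sobel(smoothed)                          # image gradient fed to the watershed
markers = np.zeros_like(ct, dtype=int)
markers[ct < 0.1] = 1                                       # background seed (placeholder threshold)
markers[ndi.binary_erosion(manual, iterations=5)] = 2       # organ seed (interactive in practice)
labels = segmentation.watershed(gradient, markers)
auto = labels == 2

tp = np.sum(auto & manual); fn = np.sum(~auto & manual)
tn = np.sum(~auto & ~manual); fp = np.sum(auto & ~manual)
print(f"sensitivity = {tp / (tp + fn):.2%}, specificity = {tn / (tn + fp):.2%}")
```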

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 325
272 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce human operator and machine interaction, so manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, sometimes occurring due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, also described as Automated Fibre Placement (AFP). The process of AFP is limited with respect to the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the ends of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stich method, and as the name suggests requires human input to achieve. This investigation explores three methods for automated splicing, namely, adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent that is already impregnated onto the tape through the application of heat. The stitching method is used as a baseline to compare the new splicing methods to the traditional technique currently in use. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, this meant the parameters of the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters, these were then tested in tension using a tensile testing machine. Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique. It analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; it was found that there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 156
271 Increment of Panel Flutter Margin Using Adaptive Stiffeners

Authors: S. Raja, K. M. Parammasivam, V. Aghilesh

Abstract:

Fluid-structure interaction is a crucial consideration in the design of many engineering systems such as flight vehicles and bridges. Aircraft lifting surfaces and turbine blades can fail due to oscillations caused by fluid-structure interaction; hence, the present research focuses on fluid-structure interaction. First, the free vibration behaviour of the panel is studied. It is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm and a thickness of 2 mm. The effect of the stiffener cross-sectional area and the stiffener location is then studied for the same panel. The stiffener spacing is varied along both the chordwise and the span-wise direction. For the optimal location, the ideal stiffener length is then identified. The effect of stiffener cross-section shape (T, I, Hat, Z) on flutter velocity has also been studied. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which further changes the flow structure over it. With increasing velocity, the deformation keeps increasing, but the stiffness of the system tries to damp the excitation and maintain equilibrium. Beyond a critical velocity, however, the system damping suddenly becomes ineffective and the system loses its equilibrium. This is estimated in NASTRAN using the PK method. The first 10 modal frequencies of a simple panel and a stiffened panel are estimated numerically and are validated against the open literature. A grid independence study is also carried out, and the modal frequency values remain the same for element lengths below 20 mm. The current investigation concludes that span-wise stiffener placement is more effective than chord-wise placement. The maximum flutter velocity achieved for chord-wise placement is 204 m/s, while for a span-wise arrangement it is augmented to 963 m/s for stiffeners located at ¼ and ¾ of the chord from the panel edge (50% of the chord from either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increment in flutter velocity, from 218 m/s to 1024 m/s, is observed for stiffener lengths varying from 50% to 60% of the span. A maximum flutter velocity above Mach 3 is achieved. It is also observed that, for a stiffened panel, the full effect of the stiffener can be achieved only when the stiffener end is clamped. Stiffeners with a Z cross-section increased the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, which is 2.3 times that of the simple panel.

Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-sectional shape

Procedia PDF Downloads 295
270 Angiomotin Regulates Integrin Beta 1-Mediated Endothelial Cell Migration and Angiogenesis

Authors: Yuanyuan Zhang, Yujuan Zheng, Giuseppina Barutello, Sumako Kameishi, Kungchun Chiu, Katharina Hennig, Martial Balland, Federica Cavallo, Lars Holmgren

Abstract:

Angiogenesis describes the process by which new blood vessels sprout and migrate from pre-existing ones to form 3D lumenized structures and undergo remodeling. During directional migration toward a gradient of pro-angiogenic factors, the endothelial cells, especially the tip cells, need filopodia to sense the environment and exert pulling force. Of particular interest are the integrin proteins, which play an essential role in the focal adhesions connecting migrating cells to the extracellular matrix (ECM). Understanding how these biomechanical complexes orchestrate intrinsic and extrinsic forces is important for our understanding of the mechanisms driving angiogenesis. We have previously identified Angiomotin (Amot), a member of the Amot scaffold protein family, as a promoter of endothelial cell migration in vitro and in zebrafish models. Hence, we established inducible endothelial-specific Amot knock-out mice to study normal retinal angiogenesis as well as tumor angiogenesis. We found that the migration ratio of the blood vessel network to the edge was significantly decreased in Amotec- retinas at postnatal day 6 (P6), and almost all Amot-deficient tip cells had lost their migration advantage at P7. Consistent with the dramatic morphological defect of the tip cells, there was a non-autonomous defect in astrocytes, as well as a correspondingly disorganized fibronectin expression pattern in the migration front. Furthermore, the growth of transplanted LLC tumors was inhibited in Amot knockout mice because less vasculature was involved. In the MMTV-PyMT transgenic mouse model, there was a significantly longer period before tumors arose when Amot was specifically knocked out in blood vessels. In vitro evidence showed that Amot binds to beta-actin, integrin beta 1 (ITGB1), fibronectin, FAK and vinculin, which are major focal adhesion molecules, and that ITGB1 and stress fibers were distinctly induced by Amot transfection. Via traction force microscopy, the total energy (a force indicator) was found to be significantly decreased in Amot knockdown cells. Taken together, we propose that Amot is a novel partner of the ITGB1/fibronectin protein complex at focal adhesions and is required for force transmission between endothelial cells and the extracellular matrix.

Keywords: angiogenesis, angiomotin, endothelial cell migration, focal adhesion, integrin beta 1

Procedia PDF Downloads 240
269 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

When performing verification analyses under the static and dynamic loads that composite structures used in aviation are exposed to, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. There are many companies in the world that perform these tests in accordance with aviation standards, but the test costs are very high. In addition, because coupons must be produced, coupon materials are expensive and test times are long, it is necessary to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes that are integrated into the aircraft. Glass-fiber-reinforced and quartz prepregs are used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and to obtain the exact bearing strength value. The finite element model was built with ANSYS (Analysis System). Since a physical failure cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was determined as 10%, which is stated in the literature to give the most realistic approach. There are various theories in this method, which is called progressive failure analysis. Because the Hashin theory does not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage and the resulting force drop points, the maximum damage load values, and the bearing strength value are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters, such as pre-stress, the use of bushings, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions, on the bearing strength of the composite structure were investigated.

Keywords: puck, finite element, bolted joint, composite

Procedia PDF Downloads 103
268 Study and Simulation of a Dynamic System Using Digital Twin

Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli

Abstract:

Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as Cloud Computing, the Internet of Things, Augmented Reality, Artificial Intelligence, and Additive Manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the Digital Twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a Digital Twin. In the first part of the work, the level plant is identified at a fixed operating point using the classical method of least squares. The linearized model is embedded in a Digital Twin using Automation Studio® from Famic Technologies. To validate the use of the Digital Twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the Digital Twin. To develop the nonlinear model on the Digital Twin, the didactic level plant is identified using the method proposed by Hammerstein: different steps are applied to the plant, and from the Hammerstein algorithm the nonlinear model is obtained for all operating ranges of the plant. As in the linear approach, the nonlinear model is embedded in the Digital Twin, and the dynamic response is compared to that of the real system at different operating points. Last but not least, from the practical results obtained, one can conclude that using a Digital Twin to study dynamic systems is extremely useful in the industrial environment, given that it is possible to develop and tune controllers using the virtual model of the real system.
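
The linear identification step can be illustrated with a minimal least-squares (ARX) sketch around a fixed operating point. The simulated first-order "plant", its parameters, and the step excitation are assumptions for illustration only, not the authors' didactic rig or their Automation Studio model.

```python
# Minimal sketch of least-squares (ARX) identification of a first-order
# level plant around a fixed operating point. The plant coefficients and
# noise level are illustrative assumptions.
import numpy as np

# Simulated level plant: y[k] = a*y[k-1] + b*u[k-1] + noise
a_true, b_true = 0.92, 0.35
rng = np.random.default_rng(0)
N = 300
u = np.where(np.arange(N) > 20, 1.0, 0.0)          # step input
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

# Least-squares estimation of the ARX parameters [a, b]
Phi = np.column_stack([y[:-1], u[:-1]])             # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"identified a = {a_hat:.3f}, b = {b_hat:.3f}")

# The identified model can then be embedded in the digital twin and its
# step response compared against the real plant at the operating point.
```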

Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models

Procedia PDF Downloads 152
267 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example

Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang

Abstract:

Background: Recent advances in high-throughput research technologies, such as next-generation sequencing and multi-dimensional liquid chromatography, make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these "big" data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform that lets users with only limited bio-computing knowledge study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system. C-eXpress takes a simple text file containing standard NCBI gene or protein IDs and expression levels (rpkm or fold) as the input file and generates a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyperlinked to a dynamic HTML table that allows users to filter the datasets based on various gene features, and a dynamic summary chart is generated automatically after each filtering step. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and the GO biological process ontology; subcellular localization based on the GO cellular component ontology; and functional classification based on GO molecular function, kinase, peptidase, and transporter categories. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
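
A minimal sketch of the kind of workflow the platform automates — reading a plain-text table of IDs and expression values, filtering rows, and drawing a colour-gradient heatmap — is shown below. The file name, column names, and filtering threshold are assumptions for illustration; they are not C-eXpress's actual interface.

```python
# Minimal sketch: load a tab-separated expression table, filter it, and
# render a heatmap. File name, columns, and threshold are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

# Expected input: tab-separated text with a gene/protein ID column followed
# by one expression column (rpkm or fold change) per sample or cell line.
df = pd.read_csv("expression_table.tsv", sep="\t", index_col="gene_id")

# Filter: keep genes whose expression exceeds a threshold in any sample.
filtered = df[(df > 10.0).any(axis=1)]

# Heatmap organised by colour gradient, analogous to the platform's view.
fig, ax = plt.subplots(figsize=(6, 8))
im = ax.imshow(filtered.values, aspect="auto", cmap="viridis")
ax.set_xticks(range(len(filtered.columns)))
ax.set_xticklabels(filtered.columns, rotation=90)
ax.set_yticks(range(len(filtered.index)))
ax.set_yticklabels(filtered.index, fontsize=6)
fig.colorbar(im, label="expression (rpkm or fold)")
fig.tight_layout()
fig.savefig("expression_heatmap.png", dpi=150)
```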

Keywords: cancer, visualization, database, functional annotation

Procedia PDF Downloads 621
266 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models (DPMs) achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location; the range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and a twofold increase in speed. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon; supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
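
The geometric pruning step can be sketched as follows: given the horizon row in the image, detection hypotheses whose height is implausible for a pedestrian at that vertical position are discarded. The horizon row and the accepted height-to-distance band below are illustrative assumptions standing in for the statistics of labelled bounding boxes mentioned above.

```python
# Minimal sketch of horizon-based hypothesis pruning for pedestrian boxes.
# HORIZON_Y and the ratio band are hypothetical values for illustration.
from dataclasses import dataclass

@dataclass
class Box:
    x: float       # top-left corner, pixels
    y: float
    w: float       # width and height, pixels
    h: float
    score: float

HORIZON_Y = 180.0                 # assumed horizon row in the image
MIN_RATIO, MAX_RATIO = 0.8, 2.4   # accepted height / distance-below-horizon band

def plausible(box: Box) -> bool:
    """Keep only boxes whose feet lie below the horizon and whose height is
    consistent with a pedestrian standing on the ground plane."""
    foot_y = box.y + box.h
    dist_below_horizon = foot_y - HORIZON_Y
    if dist_below_horizon <= 0:   # feet above the horizon: impossible
        return False
    ratio = box.h / dist_below_horizon
    return MIN_RATIO <= ratio <= MAX_RATIO

hypotheses = [
    Box(100, 200, 40, 110, 0.9),  # plausible pedestrian -> kept
    Box(300, 100, 30, 60, 0.7),   # feet above the horizon -> pruned
    Box(50, 60, 60, 150, 0.8),    # far too tall for its position -> pruned
]
kept = [b for b in hypotheses if plausible(b)]
print(f"kept {len(kept)} of {len(hypotheses)} hypotheses")
```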

Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time

Procedia PDF Downloads 282
265 ChaQra: A Cellular Unit of the Indian Quantum Network

Authors: Shashank Gupta, Iteash Agarwal, Vijayalaxmi Mogiligidda, Rajesh Kumar Krishnan, Sruthi Chennuri, Deepika Aggarwal, Anwesha Hoodati, Sheroy Cooper, Ranjan, Mohammad Bilal Sheik, Bhavya K. M., Manasa Hegde, M. Naveen Krishna, Amit Kumar Chauhan, Mallikarjun Korrapati, Sumit Singh, J. B. Singh, Sunil Sud, Sunil Gupta, Sidhartha Pant, Sankar, Neha Agrawal, Ashish Ranjan, Piyush Mohapatra, Roopak T., Arsh Ahmad, Nanjunda M., Dilip Singh

Abstract:

Major research efforts on quantum key distribution (QKD) are primarily focused on increasing (1) point-to-point transmission distance (1,000 km), (2) secure key rate (Mbps), and (3) security of the quantum layer (device independence). It is valuable to push the boundaries on these fronts, but these isolated approaches are neither scalable nor cost-effective because they require specialised hardware and different infrastructure. Current and future QKD networks require addressing a different set of challenges beyond distance, key rate, and quantum security. In this regard, we present ChaQra, a sub-quantum network with the following core features: (1) crypto agility (integration into already deployed telecommunication fibres), (2) software-defined networking (an SDN paradigm for routing between nodes), (3) reliability (addressing denial of service with hybrid quantum-safe cryptography), (4) upgradability (module upgrades based on scientific and technological advancements), and (5) beyond-QKD services (using the QKD network for distributed computing, multi-party computation, etc.). Our results demonstrate a clear path to creating and accelerating a quantum-secure Indian subcontinent under the National Quantum Mission.
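
As a generic illustration of the QKD primitive that links in such a network rely on (not ChaQra's actual protocol stack, which this abstract does not detail), the sketch below simulates the BB84 sifting step, in which sender and receiver keep only the bits where their randomly chosen bases coincide; eavesdropping detection, error correction, and privacy amplification are omitted.

```python
# Toy BB84 sifting simulation over a noiseless channel; illustrative only.
import secrets

N = 64
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(N)]

# In an ideal, noiseless channel Bob measures Alice's bit correctly whenever
# his basis matches hers; otherwise his outcome is random.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases (not bits) and discard mismatches.
sifted = [(a, b)
          for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
          if ab == bb]
key_alice = [a for a, _ in sifted]
key_bob   = [b for _, b in sifted]
assert key_alice == key_bob          # holds only in this noiseless toy model
print(f"sifted key length: {len(key_alice)} of {N} raw bits")
```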

Keywords: quantum network, quantum key distribution, quantum security, quantum information

Procedia PDF Downloads 59
264 Ayurvastra: A Study on the Ancient Indian Textile for Healing

Authors: Reena Aggarwal

Abstract:

The use of textile chemicals in the various pre- and post-manufacturing processes has made the textile industry conscious of its negative contribution to environmental pollution. Popular environmentally friendly fibers such as recycled polyester and organic cotton are now increasingly used by fabric and apparel manufacturers. However, after these textiles or finished garments are manufactured, they still have to be dyed with the same chemical dyes that are harmful and toxic to the environment. Dyeing is a major area of concern for the environment as well as for people with chemical sensitivities, as it may cause nausea, breathing difficulties, seizures, etc. Ayurvastra, or herbal medical textiles, goes one step beyond the organic lifestyle: it supports the core concept of holistic well-being while eliminating the impact of harmful chemicals and pesticides. A wide range of herbs can be used not only for dyeing but also for imparting medicinal properties to textiles, such as antibacterial, antifungal, antiseptic, and antidepressant effects, and for treating insomnia, skin diseases, etc. The concept of herbal dyeing is to manifest the herbal essence in every aspect of clothing, from production to end use, and additionally to eliminate the impact of harmful chemical dyes and auxiliaries, which are known to cause problems such as skin rashes, headaches, trouble concentrating, nausea, diarrhea, fatigue, muscle and joint pain, dizziness, difficulty breathing, irregular heartbeat, and seizures. Herbal dyeing or finishing gives textiles an extra edge by adding an extra function to the fabric. The herbal extracts can be applied to textiles by a simple process such as the pad-dry-cure method and act on the human body mainly through the skin, aiding in the treatment of disease or the management of medical conditions through their herbal properties. This paper therefore delves into producing Ayurvastra, a perfect amalgamation of cloth and wellness. The aim of the paper is to design and create herbal disposable and non-disposable medical textile products that act mainly topically (through the skin) to provide medicinal properties or manage medical conditions. With this in mind, a range of antifungal socks and antibacterial napkins treated with turmeric and aloe vera was developed, recommended for the treatment of fungal and bacterial infections, respectively. Both the herbal antifungal socks and the antibacterial napkins proved effective in managing and treating fungal and bacterial infections of the skin, respectively.

Keywords: ayurvastra, ayurveda, herbal, pandemic, sustainable

Procedia PDF Downloads 132
263 Process Improvement and Redesign of the Immuno Histology (IHC) Lab at MSKCC: A Lean and Ergonomic Study

Authors: Samantha Meyerholz

Abstract:

MSKCC offers patients cutting-edge cancer care with the highest quality standards. However, many patients and industry members do not realize that the operations of the Immunohistology (IHC) Lab are the backbone of this mission. The IHC lab manufactures blocks and slides containing critical tissue samples that are read by a pathologist to diagnose and dictate a patient's treatment course. The lab processes 200 requests daily, generating approximately 2,000 slides and 1,100 blocks each day. Lab material is transported through labeling, cutting, staining, and sorting stations while being managed by multiple techs throughout the space. The quality of the stain, as well as the wait times associated with processing requests, is directly linked to patients receiving rapid treatment and having a wider range of care options. This project aims to improve slide request turnaround time for rush and non-rush cases while increasing the quality of each request filled (no missing slides or poorly stained items). Rush cases are to be filled in less than 24 hours, while standard cases are allotted a 48-hour window. Reducing turnaround times enables patients to communicate sooner with their clinical team regarding their diagnosis, ultimately leading to faster treatment and potentially better outcomes. Additional project goals included streamlining tech and material workflow while reducing waste and increasing efficiency. The project followed a DMAIC structure, with emphasis on lean and ergonomic principles that could be integrated into an evolving lab culture. Load times and batching processes were analyzed using process mapping, FMEA analysis, waste analysis, engineering observation, 5S, and spaghetti diagramming. Reducing lab technician movement and improving body positioning at each workstation were top concerns for pathology leadership. With new equipment being brought into the lab to support the workflow improvements, screen and tool placement was discussed with the techs in focus groups to reduce variation and increase comfort throughout the workspace. 5S analysis was completed in two phases in the IHC lab, helping to drive solutions that reduced rework and tech motion. The IHC lab plans to continue utilizing these techniques to further reduce the time gap between tissue analysis and cancer care.

Keywords: engineering, ergonomics, healthcare, lean

Procedia PDF Downloads 223
262 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks

Authors: Adrian Ionita, Ana-Maria Ghimes

Abstract:

A lack of features, poor design, and the failure to promote an integrated booking application are some of the reasons why most online travel platforms only automate old booking processes, remaining limited to integrating a small number of services without addressing the user experience. This paper presents a practical study of how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their 'friends' in a social network context can be used as input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and deliver more features to users. The paper aims to highlight a broad range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific contribution remains the technical implementation of the neural network solution. The choice of technologies is also motivated by the fact that some online booking providers have publicly stated that they use neural-network-related designs; these companies use similar big-data technologies to provide recommendations for hotels, restaurants, and cinemas with a neural-network-based recommendation engine that builds a user 'DNA profile'. Such a 'profile', implemented as a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.
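
As a minimal sketch of the combination described above, the example below uses a genetic algorithm to evolve the weights of a tiny neural network that scores offers for a single user profile. The synthetic data, network size, and GA settings are all illustrative assumptions, not the authors' production design.

```python
# Toy genetic algorithm evolving the weights of a small neural network
# that predicts whether a user would book an offer. Data and settings
# are synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history: offers described by 4 features (price, rating,
# distance, friend-likes), labelled 1 if the user booked them.
X = rng.random((200, 4))
y = (0.6 * X[:, 1] + 0.4 * X[:, 3] - 0.5 * X[:, 0] > 0.1).astype(float)

def forward(w, X):
    """One hidden layer of 6 tanh units; w is a flat weight vector."""
    W1, b1 = w[:24].reshape(4, 6), w[24:30]
    W2, b2 = w[30:36], w[36]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    p = forward(w, X)
    return -np.mean((p - y) ** 2)          # higher is better

# Simple genetic algorithm: truncation selection plus Gaussian mutation.
POP, DIM, GENS = 60, 37, 150
pop = rng.normal(0, 1, (POP, DIM))
for _ in range(GENS):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-POP // 4:]]               # keep the best quarter
    children = parents[rng.integers(0, len(parents), POP - len(parents))]
    children = children + rng.normal(0, 0.1, children.shape)    # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(f"training accuracy: {np.mean((forward(best, X) > 0.5) == y):.2f}")
```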

Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling

Procedia PDF Downloads 164