Search results for: large array
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7659

7419 Lightweight Hardware Firewall for Embedded System Based on Bus Transactions

Authors: Ziyuan Wu, Yulong Jia, Xiang Zhang, Wanting Zhou, Lei Li

Abstract:

The Internet of Things (IoT) is a rapidly evolving field involving a large number of interconnected embedded devices. In the design of an embedded System-on-Chip (SoC), the key issues are power consumption, performance, and security. However, easy-to-implement software and untrustworthy third-party IP cores may threaten the safety of hardware assets. Since illegal access and malicious attacks against SoC resources pass through the bus that integrates the IPs, we propose a Lightweight Hardware Firewall (LHF) to protect the SoC by monitoring and blocking offending bus transactions based on their physical addresses. Under the LHF architecture, this paper refines two types of firewalls: the Destination Hardware Firewall (DHF) and the Source Hardware Firewall (SHF). The former is oriented to fine-grained detection and configuration, and its core technology is a method of dynamic grading units. The SHF, in turn, is designed around static entries to keep it lightweight. Finally, we evaluate the hardware consumption of the proposed method on both a Field-Programmable Gate Array (FPGA) and an IC. Compared with existing efforts, LHF introduces a bus latency of zero clock cycles for every read or write transaction when implemented on Xilinx Kintex-7 FPGAs. Meanwhile, DC synthesis results based on TSMC 90 nm show that the area is reduced by about 25% compared with the previous method.

Keywords: IoT, security, SoC, bus architecture, lightweight hardware firewall, FPGA

Procedia PDF Downloads 61
7418 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation

Authors: Ekin Nurbaş

Abstract:

One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DOA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing (CS)-based DoA estimation methods have been proposed, and it has been shown that CS-based algorithms achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. The Genetic Algorithm, a solution strategy inspired by natural selection, has likewise been used in sparse representation problems in recent years and provides significant improvements in performance. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for Direction of Arrival (DoA) estimation in the Compressive Sensing (CS) framework. In this method, we form a multi-objective optimization problem by splitting the norm-minimization and reconstruction-loss-minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected using multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
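
As an illustration of the split-objective idea described above, the sketch below sets up a toy on-grid compressive sensing DoA problem, evolves binary support masks with a simple genetic algorithm under Pareto (non-dominated) ranking, and picks a final solution from the Pareto front with an equal-weight scoring step standing in for the MCDM stage. The array geometry, grid, GA settings, and weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy on-grid CS model y = A x: columns of A are steering vectors over a coarse
# angle grid. Array size, grid, and source placement are illustrative only.
M, G = 8, 61                                   # sensors, grid points
angles = np.linspace(-90.0, 90.0, G)
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(np.radians(angles))))
y = A[:, [20, 35]] @ np.array([1.0, 0.8]) \
    + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def score(mask):
    """Split objectives: (support size ~ norm term, reconstruction loss)."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return (0, float(np.linalg.norm(y)))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    return (int(idx.size), float(np.linalg.norm(y - A[:, idx] @ coef)))

def dominates(g, f):
    return g[0] <= f[0] and g[1] <= f[1] and (g[0] < f[0] or g[1] < f[1])

# Simple GA over binary support masks with non-dominated ranking.
pop = rng.random((40, G)) < 0.05
for _ in range(100):
    fits = [score(m) for m in pop]
    rank = [sum(dominates(g, f) for g in fits) for f in fits]   # fewer dominators = better
    parents = pop[np.argsort(rank)[:20]]
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, G)
        child = np.concatenate([a[:cut], b[cut:]])               # one-point crossover
        children.append(np.logical_xor(child, rng.random(G) < 0.01))  # bit-flip mutation
    pop = np.vstack([parents, children])

# Pareto front, then a stand-in MCDM step: equal-weight sum of normalized objectives.
fits = [score(m) for m in pop]
front = [i for i, f in enumerate(fits) if not any(dominates(g, f) for g in fits)]
s = np.array([fits[i] for i in front], dtype=float)
s = (s - s.min(axis=0)) / (np.ptp(s, axis=0) + 1e-12)
best = front[int(np.argmin(s.sum(axis=1)))]
print("Estimated DoAs (deg):", angles[np.flatnonzero(pop[best])])
```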

Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing

Procedia PDF Downloads 146
7417 An Advanced Exponential Model for Seismic Isolators Having Hardening or Softening Behavior at Large Displacements

Authors: Nicolò Vaiana, Giorgio Serino

Abstract:

In this paper, an advanced Nonlinear Exponential Model (NEM) is presented, able to simulate the uniaxial dynamic behavior of seismic isolators that have a continuously decreasing tangent stiffness with increasing displacement in the relatively large displacements range and a hardening or softening behavior at large displacements. The mathematical model is validated by comparing the experimental force-displacement hysteresis loops obtained during cyclic tests, conducted on a helical wire rope isolator and a recycled rubber-fiber reinforced bearing, with those predicted analytically. Good agreement between the experimental and simulated results shows that the proposed model can be an effective numerical tool to predict the force-displacement relationship of seismic isolation devices within the large displacements range. Compared to the widely used Bouc-Wen model, which is unable to simulate the response of seismic isolators at large displacements, the proposed one avoids the numerical solution of a first-order nonlinear ordinary differential equation at each time step of a nonlinear time history analysis, thus reducing the computational effort. Furthermore, the proposed model can simulate the smooth transition of the hysteresis loops from small to large displacements by adopting only one set of five parameters determined from the experimental hysteresis loops having the largest amplitude.

Keywords: base isolation, hardening behavior, nonlinear exponential model, seismic isolators, softening behavior

Procedia PDF Downloads 329
7416 Optimal Analysis of Structures by Large Wing Panel Using FEM

Authors: Byeong-Sam Kim, Kyeongwoo Park

Abstract:

In this study, structural optimization is performed to compare the trade-off between wing weight and induced drag for wing panel extensions, the construction of the wing panel, and winglets. The aerostructural optimization problem consists of a strength condition and two maneuver conditions that account for residual stresses arising in panel production. The kinematic motion analysis is based on a homogenization theory for 3D beams and 3D shells applied to the wing panel; this theory uses a kinematic description of the beam based on normalized displacement moments. The displacement of the wing is a significant design consideration, as large deflections lead to large stresses, and the increased fatigue of components causes residual stresses. The stresses in the wing panel are small compared to the yield stress of the aluminum alloy. This study describes the implementation of a large wing panel and an aerostructural analysis and structural parameter optimization framework that couples a three-dimensional panel method with the structural finite element model.

Keywords: wing panel, aerostructural optimization, FEM, structural analysis

Procedia PDF Downloads 591
7415 Large Strain Compression-Tension Behavior of AZ31B Rolled Sheet in the Rolling Direction

Authors: A. Yazdanmehr, H. Jahed

Abstract:

Magnesium (Mg), the lightest commercially available structural metal, and its alloys are of interest for light-weighting. Expanding their application to different material processing methods requires Mg properties at large strains. Several room-temperature processes such as shot and laser peening and hole cold expansion need compressive large-strain data. Two methods have been proposed in the literature to obtain the stress-strain curve at high strains: 1) anti-buckling guides and 2) small cubic samples. In this paper, an anti-buckling fixture is used with the help of digital image correlation (DIC) to obtain the compression-tension (C-T) response of AZ31B-H24 rolled sheet at large strain values of up to 10.5%. The effect of the anti-buckling fixture on the stress-strain curves is evaluated experimentally by comparing the results with those of compression tests on cubic samples. A new fixture has been designed to increase the accuracy of testing cubic samples with DIC strain measurements. Results show a negligible effect of the anti-buckling fixture on the stress-strain curves, specifically at high strain values.

Keywords: large strain, compression-tension, loading-unloading, Mg alloys

Procedia PDF Downloads 238
7414 A Large Language Model-Driven Method for Automated Building Energy Model Generation

Authors: Yake Zhang, Peng Xu

Abstract:

The development of the building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating building energy models using a large language model and a BEM library, aimed at improving the efficiency of model generation. The method leverages a large language model to parse user-specified requirements for the target building model, extracting key features such as building location, window-to-wall ratio, and the thermal performance of the building envelope. The BEM library is used to retrieve energy models that match the target building’s characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model and allowing the creation of a building energy model that adapts to the user’s modeling requirements. This study enables the automatic creation of building energy models from natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of large language models in the field of building simulation and performance modeling.
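
A minimal sketch of such a pipeline is shown below, assuming a placeholder `call_llm` function, an illustrative feature schema (location, window-to-wall ratio, wall U-value, building type), and a toy BEM library; none of these names or values come from the paper.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM chat call; wire up your own provider."""
    raise NotImplementedError

EXTRACTION_PROMPT = (
    "Extract the following fields from the building description and answer with "
    "JSON only: location, window_to_wall_ratio, wall_u_value, building_type.\n\n"
    "Description: {text}"
)

# Tiny illustrative BEM library: each entry pairs feature values with a model file.
BEM_LIBRARY = [
    {"location": "Shanghai", "window_to_wall_ratio": 0.4,
     "wall_u_value": 0.8, "building_type": "office", "model": "office_shanghai.idf"},
    {"location": "Beijing", "window_to_wall_ratio": 0.3,
     "wall_u_value": 0.5, "building_type": "residential", "model": "res_beijing.idf"},
]

def parse_requirements(text: str) -> dict:
    """Use the LLM to turn a natural-language brief into structured features."""
    return json.loads(call_llm(EXTRACTION_PROMPT.format(text=text)))

def retrieve_reference(features: dict) -> dict:
    """Pick the library entry closest to the requested features."""
    def distance(entry):
        score = abs(entry["window_to_wall_ratio"] - features.get("window_to_wall_ratio", 0.3))
        score += abs(entry["wall_u_value"] - features.get("wall_u_value", 0.6))
        score += 0.0 if entry["location"] == features.get("location") else 1.0
        return score
    return min(BEM_LIBRARY, key=distance)

def generate_bem(text: str) -> dict:
    features = parse_requirements(text)
    reference = retrieve_reference(features)
    # The reference model plus the extracted features would then be handed back
    # to the LLM to draft the final energy model input file.
    return {"features": features, "reference_model": reference["model"]}
```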

Keywords: artificial intelligence, building energy modelling, building simulation, large language model

Procedia PDF Downloads 26
7413 FSO Performance under High Solar Irradiation: Case Study Qatar

Authors: Syed Jawad Hussain, Abir Touati, Farid Touati

Abstract:

Free-Space Optics (FSO) is a wireless technology that enables the optical transmission of data through the air. FSO is emerging as a promising alternative or complementary technology to fiber-optic and wireless radio-frequency (RF) links due to its high bandwidth, robustness to EMI, and operation in unregulated spectrum. These systems are envisioned to be an essential part of future-generation heterogeneous communication networks. Despite the vibrant advantages of FSO technology and the variety of its applications, its widespread adoption has been hampered by rather disappointing link reliability for long-range links due to atmospheric turbulence-induced fading and sensitivity to detrimental climate conditions. Qatar, with modest cloud coverage, high concentrations of airborne dust, and high relative humidity, lies in a virtually rainless sunny belt with a typical daily average solar radiation exceeding 6 kWh/m2 and 80-90% clear skies throughout the year. The specific objective of this work is to study, for the first time in Qatar, the effect of solar irradiation on the deliverability of an FSO link. In order to analyze the transport medium, we ported an embedded Linux kernel onto a Field Programmable Gate Array (FPGA) and designed a network sniffer application that runs on the FPGA. We installed new FSO terminals and configured and aligned them successively. In the reporting period, we carried out measurements and related them to weather conditions.

Keywords: free space optics, solar irradiation, field programmable gate array, FSO outage

Procedia PDF Downloads 360
7412 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett

Abstract:

Cochlear implantation (CI), which has become a routine procedure over the last decades, provides a sense of sound through an electronic device for patients who are severely and profoundly deaf. Today, cochlear implantation technology uses an electrode array (EA) implanted manually into the cochlea. The success of this implantation depends on the electrode technology and on deep insertion techniques. However, this manual insertion procedure may cause mechanical trauma, which can lead to severe destruction of the delicate intracochlear structure. Accordingly, future improvement of cochlear electrode insertion requires reducing the excessive force applied during implantation, which causes tissue damage and trauma. This study examined the tool-tissue interaction of a large prototype-scale digit, embedded with a distributive tactile sensor and based upon a cochlear electrode, inserted into a large prototype-scale cochlea phantom simulating the human cochlea; the findings could inform the requirements of a small-scale digit. The digit, with distributive tactile sensors embedded in a silicon substrate, was inserted into the cochlea phantom to measure digit/phantom interaction and the position of the digit in order to minimize tissue damage and trauma during cochlear electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation, and multiple contacts. The tests demonstrated that even devices of such a relatively simple, low-cost design have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thus controlling the insertion through sensing and control of the tip of the implant. With this approach, the surgeon could minimize the tissue damage, and the potential damage to delicate structures within the cochlea, caused by current manual electrode insertion. The approach can also be applied to other minimally invasive surgery applications as well as diagnosis and path-navigation procedures.

Keywords: cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction

Procedia PDF Downloads 397
7411 Modified InVEST for Whatsapp Messages Forensic Triage and Search through Visualization

Authors: Agria Rhamdhan

Abstract:

WhatsApp, as the most popular mobile messaging app, has been used as evidence in many criminal cases. As the use of mobile messaging generates large amounts of data, forensic investigation faces the challenge of large-data problems. The hardest part of finding this important evidence is that current practice relies on tools and techniques that require manual analysis of every message, so analyzing large sets of mobile messaging data takes a great deal of time and effort. Our work offers a methodology based on forensic triage to reduce large data to manageable sets that are easier to review in detail, and then presents the results through interactive visualization that shows important terms, entities, and relationships via intelligent ranking using Term Frequency-Inverse Document Frequency (TF-IDF) and the Latent Dirichlet Allocation (LDA) model. By implementing this methodology, investigators can improve investigation processing time and the accuracy of results.
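
A minimal sketch of the ranking and topic steps, using scikit-learn's TF-IDF vectorizer and LDA on a handful of placeholder message strings (the message content and parameter choices are illustrative only):

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

messages = [
    "meet me at the warehouse at 9",
    "transfer the money to the usual account",
    "ok see you at the warehouse tonight",
]

# TF-IDF ranking: surface the terms that best separate messages for triage.
tfidf = TfidfVectorizer(stop_words="english")
scores = tfidf.fit_transform(messages)
terms = tfidf.get_feature_names_out()
top = scores.sum(axis=0).A1.argsort()[::-1][:10]
print("Top terms:", [terms[i] for i in top])

# LDA topics: group messages into themes an investigator can review per topic.
counts = CountVectorizer(stop_words="english").fit(messages)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts.transform(messages))
vocab = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"topic {k}:", [vocab[i] for i in comp.argsort()[::-1][:5]])
```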

Keywords: forensics, triage, visualization, WhatsApp

Procedia PDF Downloads 168
7410 Tailoring Quantum Oscillations of Excitonic Schrodinger’s Cats as Qubits

Authors: Amit Bhunia, Mohit Kumar Singh, Maryam Al Huwayz, Mohamed Henini, Shouvik Datta

Abstract:

We report [https://arxiv.org/abs/2107.13518] the experimental detection and control of a Schrodinger's-cat-like, macroscopically large, quantum coherent state of a two-component Bose-Einstein condensate of spatially indirect electron-hole pairs, or excitons, using a resonant tunneling diode of III-V semiconductors. This provides access to millions of excitons as qubits to allow efficient, fault-tolerant quantum computation. In this work, we measure phase-coherent periodic oscillations in photo-generated capacitance as a function of applied voltage bias and light intensity over a macroscopically large area. The periodic presence and absence of splitting of excitonic peaks in the optical spectra measured by photocapacitance point towards tunneling-induced variations in capacitive coupling between the quantum well and quantum dots. Observation of negative ‘quantum capacitance’, due to screening of charge carriers by the quantum well, indicates Coulomb correlations of interacting excitons in the plane of the sample. We also establish that coherent resonant tunneling in this well-dot heterostructure restricts the available momentum space of the charge carriers within this quantum well. Consequently, the electric polarization vectors of the associated indirect excitons collectively orient along the direction of the applied bias, and these excitons undergo Bose-Einstein condensation below ~100 K. Generation of interference beats in the photocapacitance oscillation even with incoherent white light further confirms the presence of stable, long-range spatial correlation among these indirect excitons. We finally demonstrate collective Rabi oscillations of these macroscopically large, ‘multipartite’, two-level, coupled and uncoupled quantum states of the excitonic condensate as qubits. Therefore, our study not only brings the physics and technology of Bose-Einstein condensation within reach of semiconductor chips but also opens up experimental investigations of the fundamentals of quantum physics using similar techniques. The operational temperature of such a two-component excitonic BEC can be raised further with a more densely packed, ordered array of QDs and/or by using materials with larger excitonic binding energies. However, fabrication of single crystals of 0D-2D heterostructures using 2D materials (e.g., transition metal dichalcogenides, oxides, perovskites, etc.) having higher excitonic binding energies is still an open challenge for semiconductor optoelectronics. As of now, these 0D-2D heterostructures can already be scaled up for mass production of miniaturized, portable quantum optoelectronic devices using the existing III-V and/or nitride-based semiconductor fabrication technologies.

Keywords: exciton, Bose-Einstein condensation, quantum computation, heterostructures, semiconductor physics, quantum fluids, Schrodinger's Cat

Procedia PDF Downloads 180
7409 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography

Authors: Y. Kumru, K. Enhos, H. Köymen

Abstract:

In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal-to-noise ratio for complementary Golay coded signals. To improve the success in detecting the microcalcifications, orthogonal complementary Golay sequences, whose low cross-correlation minimizes interference, are used as coded signals and compared to a tone-burst pulse of equal energy in terms of resolution under weak signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS) with 256 channels, and a phased-array transducer with 7.5 MHz center frequency; the results obtained through experiments are validated by the Field-II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with attenuation of 0.5 dB/(MHz·cm) is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that it is possible to differentiate closely positioned small targets with increased success by using coded excitation in very weak signal conditions.
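
For reference, a complementary Golay pair can be generated by the standard recursive concatenation construction, and the property exploited for the SNR gain, namely that the sum of the pair's aperiodic autocorrelations is an ideal delta, can be verified numerically. The sketch below is illustrative and is not the authors' 7.5 MHz excitation code.

```python
import numpy as np

def golay_pair(n_doublings):
    """Recursive construction (a, b) -> (a||b, a||-b); returns codes of length 2**n."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    """Aperiodic autocorrelation."""
    return np.correlate(x, x, mode="full")

a, b = golay_pair(5)                 # length-32 complementary pair
sidelobes = acorr(a) + acorr(b)
print(sidelobes)                     # 2N at zero lag, zero at every other lag

# On receive, each echo is compressed with the matched filter of its own code and
# the two results are summed, cancelling range sidelobes while keeping the 2N
# processing gain that improves SNR for weak microcalcification echoes.
```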

Keywords: coded excitation, complementary Golay codes, DiPhAS, medical ultrasound

Procedia PDF Downloads 263
7408 Healthcare Big Data Analytics Using Hadoop

Authors: Chellammal Surianarayanan

Abstract:

The healthcare industry generates large amounts of data driven by various needs such as record keeping, physicians’ prescriptions, medical imaging, sensor data, Electronic Patient Records (EPR), laboratory, pharmacy, etc. Healthcare data is so big and complex that it cannot be managed by conventional hardware and software. The complexity of healthcare big data arises from the large volume of data, the velocity with which the data is accumulated, and its different varieties, such as structured, semi-structured, and unstructured data. Despite this complexity, if the trends and patterns that exist within the big data are uncovered and analyzed, higher-quality healthcare can be provided at lower cost. Hadoop is an open source software framework for distributed processing of large data sets across clusters of commodity hardware using a simple programming model. The core components of Hadoop include the Hadoop Distributed File System, which offers a way to store large amounts of data across multiple machines, and MapReduce, which offers a way to process large data sets with a parallel, distributed algorithm on a cluster. The Hadoop ecosystem also includes various other tools such as Hive (a SQL-like query language), Pig (a higher-level query language for MapReduce), HBase (a columnar data store), etc. In this paper, an analysis is presented of how healthcare big data can be processed and analyzed using the Hadoop ecosystem.
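
A minimal Hadoop-Streaming-style example of the MapReduce model mentioned above: a mapper/reducer pair that counts diagnosis codes across patient records. The record layout, field positions, and file paths are illustrative assumptions.

```python
# mapper.py -- Hadoop Streaming mapper: emit (diagnosis_code, 1) per record.
# Assumes CSV records of the form: patient_id,visit_date,diagnosis_code,cost
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split(",")
    if len(fields) >= 3:
        print(f"{fields[2]}\t1")
```

```python
# reducer.py -- Hadoop Streaming reducer: sum counts per diagnosis code.
# Hadoop delivers mapper output sorted by key, so equal keys arrive adjacent.
import sys

current, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = key, 0
    total += int(value)
if current is not None:
    print(f"{current}\t{total}")
```

Such a job would typically be submitted with the Hadoop Streaming jar, for example `hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input <hdfs-input> -output <hdfs-output>`, with the input records stored on HDFS (paths here are placeholders).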

Keywords: big data analytics, Hadoop, healthcare data, towards quality healthcare

Procedia PDF Downloads 413
7407 Study of Virus/es Threatening Large Cardamom Cultivation in Sikkim and Darjeeling Hills of Northeast India

Authors: Dharmendra Pratap

Abstract:

Large cardamom (Amomum subulatum), family Zingiberaceae, is an aromatic spice crop with rich medicinal value. Large cardamom is as synonymous with Sikkim as tea is with Darjeeling: Sikkim alone contributes up to 88% of India's large cardamom production, and India is the world leader, producing over 50% of the global yield. However, the production of large cardamom has declined almost by half over the last two decades. The economic losses have been attributed to two viral diseases, namely chirke and foorkey. Chirke disease is characterized by light and dark green streaks on leaves; the affected leaves exhibit streak mosaic, which gradually coalesces, turns brown, and eventually dries up. Foorkey disease is characterized by excessive sprouting and the formation of bushy dwarf clumps at the base of mother plants, which gradually die. In our surveys in the Sikkim–Darjeeling hill area during 2012-14, 40-45% of plants were found to be affected by foorkey disease and 10-15% by chirke. Mechanical and aphid transmission studies showed banana to be an alternate host for both diseases. For molecular identification, total genomic DNA and RNA were isolated from infected leaf tissues and subjected to rolling circle amplification (RCA) and RT-PCR, respectively. The DNA concatamers produced in the RCA reaction were monomerized with different restriction enzymes, and the bands corresponding to ~1 kb genomes were purified and cloned into the respective sites. The nucleotide sequencing results revealed the association of a nanovirus with the foorkey disease of large cardamom. DNA1 showed 74% identity with the replicase gene of FBNYV, DNA2 showed 77% identity with the NSP gene of BBTV, and DNA3 showed 74% identity with the CP gene of BBTV. The findings suggest the presence of a new nanovirus species associated with foorkey disease of large cardamom in the Sikkim and Darjeeling hills. Details of the epidemiology and other factors will be discussed.

Keywords: RCA, nanovirus, large cardamom, molecular virology and microbiology

Procedia PDF Downloads 492
7406 Large Panel Technology Apartments of Yesterday and Today: Quality Aspects

Authors: Barbara Gronostajska

Abstract:

The housing conditions of buildings constructed in large panel technology are currently deteriorating. The article presents modernization solutions implemented through a variety of architectural interventions (adding balconies and staircases, connecting apartments) which guarantee very intriguing results that meet the needs and expectations of modern society.

Keywords: housing estate, apartments, flats, modernization, plate blocks

Procedia PDF Downloads 480
7405 Transcriptomic Analysis of Non-Alcoholic Fatty Liver Disease in Cafeteria Diet Induced Obese Rats

Authors: Mohammad Jamal

Abstract:

Non-alcoholic fatty liver disease (NAFLD) has become one of the most common chronic liver diseases, prevalent among people with morbid obesity. Most cases of NAFLD do not progress to clinically significant liver disease; however, cirrhosis and liver cancer develop in a subset of patients, and currently there are no approved therapies for the treatment of NAFLD. This study aimed to understand the key genes involved in the mechanism of NAFLD, which can be valuable for developing diagnostic and predictive biomarkers based on the histologic stage of the liver. The study was conducted on 16 male Sprague Dawley rats. The animals were divided into two groups: a control group (n=8) fed ad libitum on normal chow and regular water, and a cafeteria group (CAF) (n=8) fed a high-fat/carbohydrate diet. The animals received their respective diets from 4 weeks of age until 25 weeks. The liver was extracted, and an RT² Profiler PCR Array was used to assess NAFLD-related genes. Histological evaluation was performed using H&E staining of liver tissue sections. Our PCR array results showed that genes involved in anti-inflammatory activity (Ifng, IL10), fatty acid uptake/oxidation (Fabp5), apoptosis (Fas), lipogenesis (Gck and Srebf1), insulin signalling (Igfbp1), and metabolic pathways (Pdk4) were upregulated in the liver of cafeteria-fed obese rats. Bloated hepatocytes, displaced nuclei, and higher lipid content were seen in the liver of cafeteria-fed obese rats. Although liver biopsies remain the gold standard in evaluating NAFLD, an approach based on non-invasive markers could be used in understanding the physiology, the therapeutic potential, and the targets to combat NAFLD.

Keywords: biomarkers, cafeteria diet, obesity, NAFLD

Procedia PDF Downloads 143
7404 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for the high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs that have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5-meter-scale panchromatic images over a 5 km swath of the Moon. HiRISE, carried by the MRO, and the HRSC, carried by MEX, are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image-space coordinates in two or more images to the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we develop a generic push-broom sensor model to process imagery acquired by linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy systems with the developed model. We start by defining an image reference coordinate system to unify the image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by polynomial interpolation in time (t), where (t) is the time at a certain epoch measured from a certain orbit position. Depending on the type of observation, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, while the implementation process is more cost- and effort-consuming.
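
The collinearity condition and time-dependent exterior orientation described above can be written in the standard photogrammetric form below (generic notation, not necessarily the authors' exact parameterization):

```latex
% Collinearity condition for a line imaged at epoch t: the image point, the
% exposure centre S(t), and the ground point P = (X, Y, Z) lie on one line.
\begin{aligned}
0 &= -f\,
  \frac{r_{11}(t)\,(X - X_S(t)) + r_{12}(t)\,(Y - Y_S(t)) + r_{13}(t)\,(Z - Z_S(t))}
       {r_{31}(t)\,(X - X_S(t)) + r_{32}(t)\,(Y - Y_S(t)) + r_{33}(t)\,(Z - Z_S(t))},\\[4pt]
y &= -f\,
  \frac{r_{21}(t)\,(X - X_S(t)) + r_{22}(t)\,(Y - Y_S(t)) + r_{23}(t)\,(Z - Z_S(t))}
       {r_{31}(t)\,(X - X_S(t)) + r_{32}(t)\,(Y - Y_S(t)) + r_{33}(t)\,(Z - Z_S(t))},\\[6pt]
X_S(t) &= a_0 + a_1 t + a_2 t^2, \qquad
\omega(t) = c_0 + c_1 t + c_2 t^2, \qquad \text{(and similarly for the other EO parameters).}
\end{aligned}
```

Here the first equation, with the along-track image coordinate equal to zero, ties each ground point to the epoch t at which its line was exposed; y is the coordinate along the CCD line, f is the focal length, r_ij(t) are the elements of the rotation matrix formed from the time-dependent attitude angles, (X_S, Y_S, Z_S)(t) is the exposure centre, and low-order polynomials in t are one common choice for interpolating the exterior orientation parameters estimated in the bundle adjustment.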

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
7403 REDUCER: An Architectural Design Pattern for Reducing Large and Noisy Data Sets

Authors: Apkar Salatian

Abstract:

To relieve the burden of reasoning on a point-to-point basis, many domains need to reduce large and noisy data sets into trends for qualitative reasoning. In this paper we propose and describe a new architectural design pattern called REDUCER for reducing large and noisy data sets, which can be tailored to particular situations. REDUCER consists of two consecutive processes: Filter, which takes the original data and removes outliers, inconsistencies, or noise; and Compression, which takes the filtered data and derives trends in the data. In this seminal article, we also show how REDUCER has been successfully applied to three different case studies.
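
A minimal sketch of the two-stage pattern, assuming a median filter for the Filter stage and piecewise-linear trend segments for the Compression stage; the window size and error tolerance are illustrative choices, not values from the paper.

```python
import statistics

def filter_stage(samples, window=5):
    """Filter: replace each point by the median of its neighbourhood to remove outliers."""
    half = window // 2
    return [statistics.median(samples[max(0, i - half): i + half + 1])
            for i in range(len(samples))]

def compression_stage(samples, tolerance=0.5):
    """Compression: merge consecutive points into (start, end, slope) trend segments
    while the linear-fit error stays within the tolerance."""
    trends, start = [], 0
    for i in range(1, len(samples)):
        slope = (samples[i] - samples[start]) / (i - start)
        errors = [abs(samples[start] + slope * (j - start) - samples[j])
                  for j in range(start, i + 1)]
        if max(errors) > tolerance:
            span = max(i - 1 - start, 1)
            trends.append((start, i - 1, (samples[i - 1] - samples[start]) / span))
            start = i - 1
    span = max(len(samples) - 1 - start, 1)
    trends.append((start, len(samples) - 1, (samples[-1] - samples[start]) / span))
    return trends

def reducer(samples):
    """REDUCER: Filter then Compression, turning raw noisy data into trends."""
    return compression_stage(filter_stage(samples))
```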

Keywords: design pattern, filtering, compression, architectural design

Procedia PDF Downloads 212
7402 Comprehensive Evaluation of COVID-19 Through Chest Images

Authors: Parisa Mansour

Abstract:

The coronavirus disease 2019 (COVID-19) was discovered at the end of 2019 and rapidly spread to countries around the world. Computed tomography (CT) images have been used as an important alternative to the time-consuming RT-PCR test. However, manual segmentation of CT images alone is a major challenge as the number of suspected cases increases. Thus, accurate and automatic segmentation of COVID-19 infections is urgently needed. Because the imaging features of COVID-19 infection are varied and similar to the background, existing medical image segmentation methods cannot achieve satisfactory performance. In this work, we build a deep convolutional neural network adapted for the segmentation of chest CT images with COVID-19 infections. First, we maintain a large and novel chest CT image database containing 165,667 annotated chest CT images from 861 patients with confirmed COVID-19. Inspired by the observation that the boundary of an infected lung can be improved by global intensity adjustment, we introduce a feature variation block into the proposed deep CNN, which adjusts the global properties of the features used to segment the COVID-19 infection. The proposed block can effectively and adaptively improve feature performance in different cases. We combine features of different scales by proposing a progressive atrous spatial pyramid fusion scheme to deal with infection regions of various appearances and shapes. We conducted experiments on data collected in China and Germany and show that the proposed deep CNN can produce impressive performance.
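
To illustrate the multi-scale fusion idea, the sketch below shows a generic atrous spatial pyramid block in PyTorch: parallel dilated convolutions at several rates whose outputs are concatenated and fused. The channel counts and dilation rates are illustrative and do not reproduce the paper's progressive design.

```python
import torch
import torch.nn as nn

class AtrousSpatialPyramid(nn.Module):
    """Parallel dilated 3x3 convolutions capture infection regions of different
    sizes; their outputs are concatenated and fused with a 1x1 convolution."""

    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

# Example: a 256-channel encoder feature map at reduced resolution.
x = torch.randn(1, 256, 64, 64)
print(AtrousSpatialPyramid()(x).shape)   # torch.Size([1, 256, 64, 64])
```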

Keywords: chest, COVID-19, chest Image, coronavirus, CT image, chest CT

Procedia PDF Downloads 57
7401 Effect of Constant and Variable Temperature on the Morphology of TiO₂ Nanotubes Prepared by Two-Step Anodization Method

Authors: Tayyaba Ghani, Mazhar Mehmood, Mohammad Mujahid

Abstract:

TiO₂ nanotubes are attracting immense interest in the field of dye-sensitized solar cells due to their well-defined nanostructure, efficient electron transport, and large surface area compared to other one-dimensional structures. In the present work, we have investigated the influence of temperature on the morphology of anodically produced self-organized titanium oxide nanotubes (TiNTs). TiNTs are synthesized by a two-step anodization method in an ethylene glycol based electrolyte containing ammonium fluoride. Experiments are performed at constant anodization voltage for two hours. Investigation of the SEM images reveals that if the temperature is kept constant during the anodizing experiment, variation in the average tube diameter is significantly reduced. However, if the temperature is not controlled, then, due to the exothermic nature of the reactions that form TiNTs, the temperature of the electrolyte keeps increasing. This variation in electrolyte bath temperature introduces strong variations in tube diameter (20 nm to 160 nm) along the length of the tubes. Current profiles recorded during the anodization experiment reflect the effect of constant and varying experimental temperatures as well. In both cases, XRD results show a complete anatase crystal structure of the nanotubes upon annealing at 450 °C. The present work highlights the importance of constant temperature during anodization experiments in order to develop an ordered array of nanotubes with a uniform tube diameter.

Keywords: anodization, ordering, temperature, TiO₂ nanotubes

Procedia PDF Downloads 171
7400 Public Transportation Demand and Policy in Kabul, Afghanistan

Authors: Ahmad Samim Ranjbar, Shoshi Mizokami

Abstract:

Kabul is the heart of political, commercial, cultural, educational, and social life in Afghanistan, and it is the fifth fastest growing city in the world. Since the establishment of the new government in 2001, the lack of adequate employment opportunities and basic utility services in remote provinces has prompted people to move to Kabul and other urban areas. From 2001 to the present, the population has increased rapidly, and because of low incomes most residents tend to use public transport, especially buses. However, no proper bus system exists in Kabul city: during the wars from 1992 to 2001, Kabul suffered damage and destruction of its transportation facilities, including pavements, sidewalks, traffic circles, drainage systems, traffic signs and signals, trolleybuses, and almost all of the public transit buses (e.g., Millie Bus). This research is a primary and very important step into Kabul city transportation, and especially an initial step toward using large buses in Kabul city. The main purpose of this research is to determine the demand of Kabul city residents for public transport (large buses) and compare it with the actual supply from the government. The findings show that the demand of Kabul city residents for public transport (large buses) exceeds the supply from the government, meaning that the current public transportation (large-bus) service is not sufficient to serve the people of Kabul city. It is worth mentioning that, according to this research, there is no need to build a new road or an exclusive busway; instead, this research proposes that the government invest in public transportation and increase the number of large buses so that the current demand for public transport can be handled.

Keywords: transportation, planning, public transport, large bus, Kabul, Afghanistan

Procedia PDF Downloads 297
7399 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users but is mostly in an unreadable format that needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the period that the processing takes place, which decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU, storage, and processing-time performance.
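
A minimal sketch of the three-step variant, using Python lists and sqlite3 as stand-ins (table and column names are illustrative): the database is queried once, the slow decoding runs on an in-memory list, and the results are written back in one short transaction, so the server is not held locked during processing.

```python
import sqlite3

def decode(raw: str) -> str:
    """Stand-in for the device-protocol decoding step."""
    return raw.upper()

def pull_process_push_three_step(db_path="telemetry.db"):
    conn = sqlite3.connect(db_path)

    # Step 1 -- PULL: read the raw rows quickly, then release the query.
    rows = conn.execute("SELECT id, raw_payload FROM raw_messages").fetchall()

    # Step 2 -- PROCESS: work on an in-memory list, so the database is not
    # kept busy while the (slow) decoding runs.
    processed = [(decode(raw), row_id) for row_id, raw in rows]

    # Step 3 -- PUSH: write the results back in one short transaction.
    with conn:
        conn.executemany(
            "UPDATE raw_messages SET decoded_payload = ? WHERE id = ?", processed)
    conn.close()
```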

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 244
7398 Frequent Item Set Mining for Big Data Using MapReduce Framework

Authors: Tamanna Jethava, Rahul Joshi

Abstract:

Frequent item sets play an essential role in many data mining tasks that try to find interesting patterns in databases. Typically, a frequent item set refers to a set of items that frequently appear together in a transaction dataset. Several mining algorithms are used for frequent item set mining, yet most do not scale to the type of data we are presented with today, so-called "big data". Big data is a collection of large data sets. Our approach is to perform frequent item set mining over a large dataset in a scalable and speedy way, using MapReduce along with HDFS to find frequent item sets from big data on a large cluster. This paper focuses on using a pre-processing and mining algorithm as a hybrid approach for big data on the Hadoop platform.
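
As an illustration of the counting phase on MapReduce, a mapper can emit every candidate itemset found in a transaction; the reducer is then the same key-summing pattern shown in the Hadoop example earlier in these results, optionally filtered by a minimum support threshold. The transaction format and itemset size below are illustrative assumptions.

```python
# mapper.py -- emit every 2-itemset present in a transaction line such as "milk,bread,eggs"
import sys
from itertools import combinations

for line in sys.stdin:
    items = sorted(set(line.strip().split(",")))
    for itemset in combinations(items, 2):
        print(f"{','.join(itemset)}\t1")
```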

Keywords: frequent item set mining, big data, Hadoop, MapReduce

Procedia PDF Downloads 435
7391 Domain-Specific Ontology-Based Knowledge Extraction Using R-GNN and Large Language Models

Authors: Andrey Khalov

Abstract:

The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured text. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
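
A sketch of the entity-extraction step using the Hugging Face transformers token-classification pipeline is shown below. The checkpoint name is a hypothetical placeholder for a DeBERTa-v3 model fine-tuned for domain NER (the stock deberta-v3-large is not a ready-made NER model), and the co-occurrence relation heuristic is illustrative only; such candidate edges would then be refined by the R-GNN.

```python
from transformers import pipeline

# Placeholder checkpoint: assumes a DeBERTa-v3 model fine-tuned for token
# classification on the IT-service domain (hypothetical name, not a real repo).
ner = pipeline("token-classification",
               model="your-org/deberta-v3-large-itsm-ner",
               aggregation_strategy="simple")

text = "The incident INC-1042 on the Exchange cluster was escalated to the DBA team."
entities = ner(text)
for ent in entities:
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))

# Naive relation heuristic for the knowledge graph: link entities that co-occur
# in the same sentence; the R-GNN would score and refine these candidate edges.
edges = [(a["word"], "co_occurs_with", b["word"])
         for i, a in enumerate(entities) for b in entities[i + 1:]]
```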

Keywords: ontology mapping, R-GNN, knowledge extraction, large language models, NER, knowledge graph

Procedia PDF Downloads 16
7396 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel

Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn

Abstract:

Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing a reliable machinery database to ensure optimum production. Research on the welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint. The metal active gas (MAG) welding parameters are among the most important factors affecting the quality, productivity, and cost of welding in many industrial operations. The aim of this study is to investigate the optimum process parameters for metal active gas welding of a 60x60x5 mm dead mild steel plate workpiece, using the Taguchi method to formulate the statistical experimental design for a semi-automatic welding machine. The experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. This study examines the influence of four welding parameters (control factors), namely welding voltage (V), welding current (A), wire speed (m/min), and gas (CO2) flow rate (L/min), each at three levels, on the variability of weld hardness. The objective function was chosen in relation to the MAG welding parameters, i.e., the weld hardness of the final product. Nine experimental runs based on an L9 orthogonal array of the Taguchi method were performed. The orthogonal array, the signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) are employed to investigate the welding characteristics of the dead mild steel plate and to obtain the optimum level of every input parameter at the 95% confidence level. The optimal parameter setting was found to be a welding voltage of 22 V, a welding current of 125 A, a wire speed of 2.15 m/min, and a gas flow rate of 19 L/min, within the constraints of the production process. Finally, six confirmation welds were carried out; comparing the predicted values with the experimental values confirms the effectiveness of the analysis of weld hardness (quality) in the final product. It is found that welding current has the major influence on the quality of welded joints. The experimental result at the optimum setting gave better weld hardness than the initial setting. This study is valuable for welding plates of different materials and thicknesses in Ethiopian industries.
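
For reference, the larger-is-better signal-to-noise ratio used in such a Taguchi analysis, S/N = -10·log10(mean(1/y²)), can be computed per run and averaged per factor level to pick the optimum level of each parameter. The sketch below uses the standard L9(3⁴) array with made-up hardness values, not the paper's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: levels (1..3) for voltage, current, wire speed, gas flow.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

# Placeholder hardness measurements (two replicates per run) -- illustrative only.
hardness = np.array([
    [180, 182], [190, 188], [175, 177],
    [200, 198], [185, 186], [178, 180],
    [195, 196], [188, 190], [183, 184],
], dtype=float)

# Larger-is-better signal-to-noise ratio per run: S/N = -10 log10(mean(1/y^2)).
sn = -10.0 * np.log10(np.mean(1.0 / hardness**2, axis=1))

factors = ["voltage", "current", "wire speed", "gas flow"]
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == level].mean() for level in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: mean S/N per level = {np.round(means, 2)}, best level = {best}")
```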

Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method

Procedia PDF Downloads 480
7395 A New Approach for Assertions Processing during Assertion-Based Software Testing

Authors: Ali M. Alakeel

Abstract:

Assertion-based software testing has been shown to be a promising tool for generating test cases that reveal program faults. Because the number of assertions may be very large for industry-size programs, one of the main concerns about the applicability of assertion-based testing is the amount of search time required to explore a large number of assertions. This paper presents a new approach for exploring assertions during the process of assertion-based software testing. Our initial experiments with the proposed approach show that the performance of assertion-based testing may be improved, therefore making this approach more efficient when applied to programs with a large number of assertions.

Keywords: software testing, assertion-based testing, program assertions, generating test

Procedia PDF Downloads 460
7394 Monoallelic and Biallelic Deletions of 13q14 in a Group of 36 CLL Patients Investigated by CGH Haematological Cancer and SNP Array (8x60K)

Authors: B. Grygalewicz, R. Woroniecka, J. Rygier, K. Borkowska, A. Labak, B. Nowakowska, B. Pienkowska-Grela

Abstract:

Introduction: Chronic lymphocytic leukemia (CLL) is the most common form of adult leukemia in the Western world. Hemizygous and/or homozygous loss at 13q14 occurs in more than half of cases and constitutes the most frequent chromosomal abnormality in CLL; it is believed that 13q14 deletions play a role in CLL pathogenesis. Two microRNA genes, miR-15a and miR-16-1, are targets of 13q14 deletions and play a tumor suppressor role by targeting the antiapoptotic BCL2 gene. Deletion size, as a single change detected in FISH analysis, has prognostic significance. Patients with small deletions, without RB1 gene involvement, have the best prognosis and the longest overall survival time (OS 133 months). In patients with a larger deletion region containing the RB1 gene, the prognosis drops to intermediate, as in patients with a normal karyotype and without changes in FISH, with an overall survival of 111 months. Aim: Precise delineation of 13q14 deletion regions in two groups of CLL patients, with mono- and biallelic deletions, and qualification of their prognostic significance. Methods: Detection of 13q14 deletions was performed by FISH analysis with a CLL probe panel (D13S319, LAMP1, TP53, ATM, CEP-12). Accurate deletion size detection was performed by CGH Haematological Cancer and SNP array (8x60K). Results: Our investigated group of CLL patients with the 13q14 deletion detected by FISH analysis comprised two groups: 18 patients with monoallelic deletions and 18 patients with biallelic deletions. In FISH analysis, the proportion of cells with the deletion ranged from 43% to 97% in the monoallelic group and from 11% to 94% in the biallelic group. Microarray analysis revealed the precise deletion regions. In the monoallelic group, the deletion size ranged from 348.12 kb to 34.82 Mb, with a median of 7.93 Mb. In the biallelic group, the total deletion size ranged from 135.27 kb to 33.33 Mb, with a median of 2.52 Mb; the median size of the smaller deletion region on one copy of chromosome 13 was 1.08 Mb, while the average size of the bigger deletion on the second chromosome 13 was 4.04 Mb. In the monoallelic group, the deletion region covered the RB1 gene in 8/18 cases. In the biallelic group, 4/18 cases showed a deletion of RB1 on one of the two deleted regions, and 2/18 showed a deletion of the RB1 gene on both deleted 13q14 regions. All minimal deleted regions included the miR-15a and miR-16-1 genes. The genetic results will be correlated with clinical data. Conclusions: Application of the CGH microarray technique in CLL allows accurate delineation of the size of 13q14 deletion regions, which has prognostic value. All deleted regions included miR-15a and miR-16-1, which confirms the essential role of these genes in CLL pathogenesis. In our investigated groups of CLL patients with mono- and biallelic 13q14 deletions, patients with biallelic deletions presented smaller deletion sizes (2.52 Mb vs. 7.93 Mb), which is associated with a better prognosis.

Keywords: CLL, deletion 13q14, CGH microarrays, SNP array

Procedia PDF Downloads 255
7393 Model Order Reduction of Continuous LTI Large Descriptor System Using LRCF-ADI and Square Root Balanced Truncation

Authors: Mohammad Sahadet Hossain, Shamsil Arifeen, Mehrab Hossian Likhon

Abstract:

In this paper, we analyze a linear time-invariant (LTI) descriptor system of large dimension. Since such systems are difficult to simulate, compute, and store, we reduce this large system using Low-Rank Cholesky Factor Alternating Directions Implicit (LRCF-ADI) iteration followed by square-root balanced truncation. LRCF-ADI solves the dual Lyapunov equations of the large system and gives low-rank Cholesky factors of the gramians as the solution. Using these Cholesky factors, we compute the Hankel singular values via singular value decomposition. Then, by applying square-root balanced truncation, the reduced system is obtained. The Bode plots of the original and lower-order systems are used to show that the magnitude and phase responses are the same for both systems.
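
A dense-solver sketch of square-root balanced truncation on a small standard LTI system is shown below: the dual Lyapunov equations are solved, the Hankel singular values come from the SVD of the product of the Cholesky factors, and the balancing projections truncate the system. For the large sparse descriptor systems addressed in the paper, the dense Lyapunov solves are exactly what LRCF-ADI replaces with low-rank factors; the matrices here are random stand-ins.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(1)
n, m, p, r = 50, 1, 1, 6                               # state, input, output, reduced order
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))     # stable stand-in system
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Controllability and observability gramians from the dual Lyapunov equations.
# (For large descriptor systems, LRCF-ADI would return low-rank factors instead.)
P = solve_continuous_lyapunov(A, -B @ B.T)             # A P + P A^T = -B B^T
Q = solve_continuous_lyapunov(A.T, -C.T @ C)           # A^T Q + Q A = -C^T C
R = cholesky(P, lower=True)                            # P = R R^T
L = cholesky(Q, lower=True)                            # Q = L L^T

# Hankel singular values and balancing transformation via the SVD of L^T R.
U, s, Vt = svd(L.T @ R)
print("Hankel singular values:", np.round(s[:r], 4))
S_r = np.diag(s[:r] ** -0.5)
T = R @ Vt[:r].T @ S_r                                 # right projection
W = L @ U[:, :r] @ S_r                                 # left projection

# Reduced-order model obtained by balanced truncation.
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T
```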

Keywords: low-rank Cholesky factor alternating directions implicit iteration, LTI descriptor system, Lyapunov equations, square-root balanced truncation

Procedia PDF Downloads 418
7392 Association of Leptin Gene T3469C Polymorphism on Reproductive Performance of Purebred Sows

Authors: Mariedel Autriz, Angel Lambio, Renato Vega, Severino Capitan, Rita Laude

Abstract:

The study was conducted to associate the leptin gene T3469C polymorphism with reproductive performance in purebred sows. DNA was isolated from hair follicles of 29 Landrace and 24 Large White sows. Amplification of the leptin gene was carried out, followed by HinfI digestion to determine the base at the T3469C site. Electrophoresis of the digestion products revealed that there were 25 Landrace and 15 Large White sows with the TT genotype, while there were 3 Landrace and 6 Large White sows with the TC genotype. There was 1 CC genotype for Landrace and 3 for Large White. Significant genotype associations were observed for total litter size born and total born alive. Significant breed differences, on the other hand, were observed for gestation length and average birth weight. A significant breed-by-genotype interaction was observed for litter size total born and litter size born alive.

Keywords: genetic polymorphism, leptin, swine, T3469C

Procedia PDF Downloads 419
7391 Thermal Network Model for a Large Scale AC Induction Motor

Authors: Sushil Kumar, M. Dakshina Murty

Abstract:

Thermal network modelling has proven to be an important tool for the thermal analysis of electrical machines. This article investigates a numerical thermal network model and the experimental performance of a large-scale AC motor. Experimental temperatures were measured using RTDs in the stator and compared with the numerical data. The thermal network model fairly predicts the temperature of the various components inside the large-scale AC motor. The stator winding temperatures are compared with the experimental results and are in close agreement, with an accuracy of 6-10%. This method of predicting hot spots within AC motors can be readily used by motor designers for estimating the thermal hot spots of the machine.
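
A minimal lumped-parameter illustration of a thermal network: winding, stator core, and frame nodes connected to ambient by thermal conductances, with losses injected at the winding and core, and steady-state temperature rises obtained from the nodal equations G·T = P. The conductance and loss values are illustrative placeholders, not the motor data of this study.

```python
import numpy as np

# Nodes: 0 = stator winding, 1 = stator core, 2 = frame; ambient is the reference.
# Thermal conductances between nodes (W/K) -- illustrative values only.
g_wc, g_cf, g_fa = 25.0, 60.0, 40.0       # winding-core, core-frame, frame-ambient
losses = np.array([900.0, 400.0, 0.0])    # injected losses per node (W)

# Nodal conductance matrix for G @ T = P (T = temperature rises above ambient).
G = np.array([
    [ g_wc,       -g_wc,         0.0       ],
    [-g_wc,  g_wc + g_cf,       -g_cf      ],
    [ 0.0,        -g_cf,   g_cf + g_fa     ],
])
T = np.linalg.solve(G, losses)
print("Temperature rise (K): winding %.1f, core %.1f, frame %.1f" % tuple(T))
```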

Keywords: AC motor, thermal network, heat transfer, modelling

Procedia PDF Downloads 326
7390 A Theoretical Model for Pattern Extraction in Large Datasets

Authors: Muhammad Usman

Abstract:

Pattern extraction has long been used to extract hidden and interesting patterns from large datasets. Recently, these techniques have been advanced by providing the ability for multi-level mining, effective dimension reduction, and advanced evaluation and visualization support. This paper reviews the current techniques in the literature on the basis of these parameters. The literature review suggests that most techniques which provide multi-level mining and dimension reduction do not handle mixed-type data during the process. Patterns are not extracted using advanced algorithms for large datasets. Moreover, the evaluation of patterns is not done using advanced measures suited to high-dimensional data. Techniques that provide visualization support are unable to handle a large number of rules in a small space. We present a theoretical model to handle these issues. The implementation of the model is beyond the scope of this paper.

Keywords: association rule mining, data mining, data warehouses, visualization of association rules

Procedia PDF Downloads 223