Search results for: soft computing techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8443

7303 Efficacy of a Wiener Filter Based Technique for Speech Enhancement in Hearing Aids

Authors: Ajish K. Abraham

Abstract:

The hearing aid is the most fundamental technology employed in the rehabilitation of persons with sensorineural hearing impairment. Hearing in noise is still a matter of major concern for many hearing aid users and thus continues to be a challenging issue for hearing aid designers. Several techniques are currently used to enhance the speech at the hearing aid output. Most of these techniques, when implemented, reduce the intelligibility of the speech signal, so many hearing aid users remain dissatisfied with their ability to comprehend the desired speech amidst noise. The multichannel Wiener filter is widely implemented in binaural hearing aid technology for noise reduction. In this study, a Wiener filter based noise reduction approach is evaluated for a single-microphone hearing aid setup. The method checks the status of the input speech signal in each frequency band and then selects the relevant noise reduction procedure. Results showed that the Wiener filter based algorithm is capable of enhancing speech even when the input acoustic signal has a very low Signal to Noise Ratio (SNR). Performance of the algorithm was compared with similar algorithms on the basis of improvement in intelligibility and SNR of the output at different SNR levels of the input speech. The Wiener filter based algorithm provided significant improvement in SNR and intelligibility compared to the other techniques.
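A minimal single-channel sketch of the frequency-band Wiener gain described above is given below. The frame length, the assumption that the leading frames are noise-only, and the gain floor are illustrative choices, not details taken from the paper.

```python
import numpy as np

def wiener_enhance(noisy, frame_len=512, hop=256, noise_frames=10, gain_floor=0.1):
    """Illustrative single-channel Wiener-filter speech enhancement.

    The noise power spectrum is estimated from the first `noise_frames`
    frames, which are assumed to contain noise only.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop

    # Estimate the noise power spectrum from the leading (speech-free) frames.
    noise_psd = np.zeros(frame_len // 2 + 1)
    for i in range(min(noise_frames, n_frames)):
        seg = noisy[i * hop:i * hop + frame_len] * window
        noise_psd += np.abs(np.fft.rfft(seg)) ** 2
    noise_psd /= max(1, min(noise_frames, n_frames))

    enhanced = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for i in range(n_frames):
        seg = noisy[i * hop:i * hop + frame_len] * window
        spec = np.fft.rfft(seg)
        psd = np.abs(spec) ** 2
        # Wiener gain per frequency band, H = S/(S+N), floored to limit musical noise.
        gain = np.maximum(1.0 - noise_psd / np.maximum(psd, 1e-12), gain_floor)
        out = np.fft.irfft(gain * spec, n=frame_len)
        enhanced[i * hop:i * hop + frame_len] += out * window
        norm[i * hop:i * hop + frame_len] += window ** 2
    return enhanced / np.maximum(norm, 1e-12)
```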

Keywords: hearing aid output speech, noise reduction, SNR improvement, Wiener filter, speech enhancement

Procedia PDF Downloads 247
7302 Observing Sustainability: Case Studies of Chandigarh Boutiques and Their Textile Waste Reuse

Authors: Prabhdip Brar

Abstract:

Since ancient times, recycling, reusing, and upcycling have been strongly practiced in India. Previously, however, reprocessing was common because of a lack of resources and the availability of free time, especially among women who were homemakers. An emerging strand of design philosophy within the broader drift toward sustainability is sustainable fashion, also termed eco fashion, whose aspiration is to craft a system that can be supported indefinitely in terms of environmentalism and social responsibility. This approach to sustaining fashion is part of the larger trend of sustainable design, in which a product is created and produced with consideration of its social and environmental impact. The purpose of this qualitative research paper is to find out whether the apparel design boutiques in Chandigarh (an educated, fashion-conscious city) are making conscious efforts to reuse environmentally responsive materials and to rethink eco-conscious traditional techniques and socially responsible approaches to production. Observation and case studies of ten renowned boutiques of Chandigarh were conducted to examine the creativity of their waste management and their social contribution. Owners were interviewed with open-ended questions to find out their understanding of sustainability. This paper concludes that many sustainable ideas existing within India from olden times can be incorporated into modern manufacturing techniques. The results showed that all the designers are aware of sustainability as a concept. For all practical purposes, patches of fabric are used for bindings or layered one over the other as surface ornamentation techniques. Plain fabrics and traditional prints are valued more by the owners for use on other garments. A few of them sort their leftover pieces according to basic colors, and a few boutique owners prefer donating them to non-governmental organizations. Still, they have enough waste that is not utilized because of a lack of time and labor. This paper discusses how traditional Indian techniques still inform design and making, keeping India among the countries contributing to the sustainability of fashion and textiles.

Keywords: eco-fashion textile, sustainable textiles, sustainability in India, waste management

Procedia PDF Downloads 107
7301 GC-MS-Based Untargeted Metabolomics to Study the Metabolism of Pectobacterium Strains

Authors: Magdalena Smoktunowicz, Renata Wawrzyniak, Malgorzata Waleron, Krzysztof Waleron

Abstract:

Pectobacterium spp. were previously classified in the genus Erwinia, founded in 1917 to unite all Gram-negative, fermentative, non-sporulating, and peritrichously flagellated plant pathogenic bacteria known at that time. Following the work of Waldee (1945), the Approved Lists of Bacterial Names, and the bacteriology manuals of 1980, they were described under either the genus Erwinia or Pectobacterium. The genus Pectobacterium was formally described in 1998 on the basis of 265 Pectobacterium strains. Currently, there are 21 species of Pectobacterium, including Pectobacterium betavasculorum, described in 2003, which causes soft rot on sugar beet tubers. Biochemical experiments have shown that these bacteria are Gram-negative, catalase-positive, oxidase-negative, and facultatively anaerobic, liquefy gelatin, and cause symptoms of soft rot on potato and sugar beet tubers. The very fact of growth on sugar beet may indicate a metabolism characteristic of this species alone. Metabolomics, broadly defined as the study of metabolic systems, allows comprehensive measurements of metabolites. Metabolomics and genomics are complementary tools for the identification of metabolites and their reactions, and thus for the reconstruction of metabolic networks. The aim of this study was to apply GC-MS-based untargeted metabolomics to study the metabolism of P. betavasculorum under different growing conditions. The metabolomic profiles of biomass and growth media were determined. For sample preparation, the following protocol was used: 900 µl of a methanol:chloroform:water mixture (10:3:1, v:v) was added to 900 µl of biomass taken from the bottom of the tube and to 900 µl of nutrient medium separated from the bacterial biomass. After centrifugation (13,000 x g, 15 min, 4 °C), 300 µl of the obtained supernatants was concentrated in a rotary vacuum concentrator and evaporated to dryness. Afterwards, a two-step derivatization procedure was performed before the GC-MS analyses. The obtained results were subjected to statistical calculations using both uni- and multivariate tests and were evaluated with the KEGG database to assess which metabolic pathways are activated, and which genes are responsible for them, during the metabolism of given substrates contained in the growth environment. The observed metabolic changes, combined with biochemical and physiological tests, may enable pathway discovery, regulatory inference, and understanding of the homeostatic abilities of P. betavasculorum.

Keywords: GC-MS chromatography, metabolomics, metabolism, Pectobacterium strains, Pectobacterium betavasculorum

Procedia PDF Downloads 79
7300 Comparison of the Distillation Curve Obtained Experimentally with the Curve Extrapolated by a Commercial Simulator

Authors: Lívia B. Meirelles, Erika C. A. N. Chrisman, Flávia B. de Andrade, Lilian C. M. de Oliveira

Abstract:

True Boiling Point (TBP) distillation is one of the most common experimental techniques for the determination of petroleum properties. The curve provides information about the performance of the petroleum in terms of its cuts. The experiment takes several days to perform. Simulation techniques are therefore used to determine these properties faster, with software that extrapolates the distillation curve when only limited information about the crude oil is known. In order to evaluate the accuracy of distillation curve prediction, eight points of the TBP curve and the specific gravity curve (348 K and 523 K) were inserted into the HYSYS Oil Manager, and the extended curve was evaluated up to 748 K. The methods were able to predict the curve with errors of 0.6%-9.2% (software vs. ASTM) and 0.2%-5.1% (software vs. Spaltrohr).
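As an illustration of extrapolating a distillation curve from a few measured points, the sketch below fits a Weibull-type characterization to eight hypothetical (volume fraction, temperature) pairs and evaluates it beyond the measured range. The data points, the functional form, and the fitting choices are assumptions for demonstration and do not reproduce the HYSYS Oil Manager procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical example data: eight (volume fraction distilled, boiling temperature K) points.
vol_frac = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70])
temp_K   = np.array([340., 365., 400., 430., 458., 487., 518., 552.])

def tbp_model(v, t0, eta, beta):
    # Weibull-type characterization of a TBP curve: T(v) = T0 + eta * (-ln(1 - v))**(1/beta)
    return t0 + eta * (-np.log(1.0 - v)) ** (1.0 / beta)

params, _ = curve_fit(tbp_model, vol_frac, temp_K, p0=[300.0, 200.0, 1.5], maxfev=10000)

# Extrapolate the fitted curve beyond the measured range; the extended values would then
# be compared against reference data (analogous to the ASTM / Spaltrohr comparison above).
v_ext = np.linspace(0.05, 0.95, 19)
for v, t in zip(v_ext, tbp_model(v_ext, *params)):
    print(f"{v:4.2f}  {t:7.1f} K")
```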

Keywords: distillation curve, petroleum distillation, simulation, true boiling point curve

Procedia PDF Downloads 442
7299 Engineering the Human Mind: Social Engineering Attack Using Kali Linux

Authors: Joy Winston James, Abdul Kadher Jilani

Abstract:

This review article provides a comprehensive overview of social engineering attacks, specifically those executed through the Kali Linux operating system. It aims to present an in-depth analysis of the background and importance of social engineering in cybersecurity, the tools and techniques used in these attacks, real-world case studies that demonstrate their effectiveness, and the ethical considerations that need to be taken into account while using them. The article highlights the Kali Linux tools that are commonly used in social engineering attacks, including SET, Metasploit, and BeEF, and discusses techniques such as phishing, pretexting, and baiting that are crucial in conducting successful social engineering attacks. It further explores real-world case studies that demonstrate the effectiveness of these techniques and show how easily individuals and organizations can fall prey to such attacks, emphasizing the importance of implementing effective countermeasures to reduce the risk of successful social engineering attacks. Moreover, the article sheds light on the ethical considerations that need to be taken into account while using social engineering tools, emphasizing the need for responsible and legal use. Finally, the article provides potential countermeasures such as two-factor authentication, strong password policies, and regular security audits to help individuals and organizations better protect themselves against this growing threat. By understanding the tools and techniques used in social engineering attacks and implementing appropriate countermeasures, individuals and organizations can minimize the risk of successful attacks and improve their cybersecurity posture.

Keywords: pen testing, hacking, Kali Linux, social engineering

Procedia PDF Downloads 100
7298 Schematic Study of Groundwater Potential Zones in Granitic Terrain Using Remote Sensing and GIS Techniques in Miyapur and Bollaram Areas of Hyderabad, India

Authors: Ishrath, Tapas Kumar Chatterjee

Abstract:

The present study aims at developing interpretation and evaluation methods to integrate various data types for the management of existing water resources for sustainable use. The study is guided by the geomorphology of the area. Thematic maps such as lithology, base map, land use/land cover, geomorphology, drainage, and lineament maps were prepared using the area toposheet and IRS-P6 LISS-III satellite imagery. These thematic layers were finally integrated using ArcGIS and ArcView software to prepare a groundwater potential zone map of the study area. In this study, an integrated approach involving remote sensing and GIS techniques has successfully been used to identify groundwater potential zones in the study area and classify them as good, moderate, or poor. It was observed that Pediplain shallow (PPS) has good recharge, Pediplain moderate (PPM) has moderately good recharge, Pediment Inselberg complex (PIC) has poor recharge, and Inselberg (I) has no recharge. The study concludes that remote sensing and GIS techniques are very efficient and useful for identifying groundwater potential zones.
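The integration step amounts to a weighted overlay of the reclassified thematic layers followed by a classification of the composite index. A minimal sketch of that idea is shown below; the layer scores, weights, and class breaks are illustrative assumptions rather than values from the study.

```python
import numpy as np

# Hypothetical reclassified thematic rasters (scores 1-4), e.g. arrays exported from a GIS:
# geomorphology, lithology, land use/land cover, drainage density, lineament density.
shape = (200, 200)
rng = np.random.default_rng(0)
geomorph  = rng.integers(1, 5, shape)
lithology = rng.integers(1, 5, shape)
lulc      = rng.integers(1, 5, shape)
drainage  = rng.integers(1, 5, shape)
lineament = rng.integers(1, 5, shape)

# Illustrative layer weights (assumed, e.g. from expert judgement or an AHP exercise).
w = {"geomorph": 0.35, "lithology": 0.20, "lulc": 0.15, "drainage": 0.15, "lineament": 0.15}

composite = (w["geomorph"] * geomorph + w["lithology"] * lithology +
             w["lulc"] * lulc + w["drainage"] * drainage + w["lineament"] * lineament)

# Classify the composite index into groundwater potential zones.
zones = np.digitize(composite, bins=[2.0, 3.0])   # 0 = poor, 1 = moderate, 2 = good
for code, name in {0: "poor", 1: "moderate", 2: "good"}.items():
    print(name, "potential cells:", int(np.sum(zones == code)))
```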

Keywords: satellite remote sensing, GIS, ground water potential zones, Miyapur

Procedia PDF Downloads 445
7297 Inversion of Electrical Resistivity Data: A Review

Authors: Shrey Sharma, Gunjan Kumar Verma

Abstract:

High-density electrical prospecting has been widely used in groundwater investigation, civil engineering, and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must all be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing, and inversion of electrical resistivity data, based on a compilation of the academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
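The linearized least-squares update mentioned above is sketched below on a toy one-dimensional problem. The linear smoothing kernel stands in for a real resistivity forward solver, and the model, noise level, and damping weight are illustrative assumptions.

```python
import numpy as np

# Toy smoothness-constrained (Occam-style) least-squares inversion sketch.
rng = np.random.default_rng(1)
n_model, n_data = 40, 25

depth = np.arange(n_model)
# Made-up linear "forward operator" G standing in for a real 2-D resistivity solver.
G = np.exp(-0.5 * ((depth[None, :] - np.linspace(0, n_model, n_data)[:, None]) / 4.0) ** 2)
G /= G.sum(axis=1, keepdims=True)

m_true = 2.0 + 1.0 * (depth > 20)                  # log10 resistivity of a two-layer model
d_obs = G @ m_true + 0.01 * rng.standard_normal(n_data)

C = np.diff(np.eye(n_model), axis=0)               # first-difference roughness operator
m = np.full(n_model, 2.0)                          # homogeneous starting model
lam = 1.0                                          # damping / smoothness weight

for it in range(5):
    r = d_obs - G @ m                              # data residual
    # Linearized least-squares (Gauss-Newton) update with smoothness constraint.
    dm = np.linalg.solve(G.T @ G + lam * C.T @ C, G.T @ r)
    m = m + dm
    print(f"iteration {it}: RMS misfit = {np.sqrt(np.mean(r ** 2)):.4f}")
```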

Keywords: inversion, limitations, optimization, resistivity

Procedia PDF Downloads 365
7296 System for Detecting Fake Profiles on Online Social Networks Using Machine Learning and Bio-Inspired Algorithms

Authors: Sekkal Nawel, Mahammed Nadir

Abstract:

The proliferation of online activities on Online Social Networks (OSNs) has captured significant user attention. However, this growth has been hindered by the emergence of fraudulent accounts that do not represent real individuals and violate privacy regulations within social network communities. Consequently, it is imperative to identify and remove these profiles to enhance the security of OSN users. In recent years, researchers have turned to machine learning (ML) to develop strategies and methods to tackle this issue. Numerous studies have been conducted in this field to compare various ML-based techniques. However, the existing literature still lacks a comprehensive examination, especially one considering different OSN platforms. Additionally, the utilization of bio-inspired algorithms has been largely overlooked. Our study conducts an extensive comparative analysis of various fake profile detection techniques in online social networks. The results indicate that both supervised and unsupervised machine learning models are effective for detecting false profiles on social media. To achieve optimal results, we have incorporated six bio-inspired algorithms to enhance the performance of fake profile identification.
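One common way to combine an ML classifier with a bio-inspired algorithm is to let a genetic algorithm select the profile features fed to the classifier. The sketch below illustrates that pattern on synthetic data with scikit-learn; the features, classifier, and GA settings are assumptions for demonstration, not the specific algorithms evaluated in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical profile features (followers/friends ratio, posting rate, account age, ...).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(12, X.shape[1]))      # population of binary feature masks

for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(scores)[::-1]]              # elitist (truncation) selection
    children = []
    while len(children) < len(pop) // 2:
        p1, p2 = pop[rng.integers(0, 6)], pop[rng.integers(0, 6)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([p1[:cut], p2[cut:]]) # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05         # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop[len(pop) // 2:] = np.array(children)
    print(f"generation {gen}: best accuracy = {scores.max():.3f}")
```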

Keywords: machine learning, bio-inspired algorithm, detection, fake profile, system, social network

Procedia PDF Downloads 67
7295 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world’s top five gold and rare earth mining organizations by production, reserves, and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing by resources collectively stored within a data lake. Being involved in the digital transformation has necessitated standardization of the software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 112
7294 Computer-Aided Detection of Liver and Spleen from CT Scans Using Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we determine the region of interest by applying morphological operations to extract the liver and spleen. The second step improves the quality of the image gradient: we propose a method for improving the image gradient to reduce the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a method for semi-automatic segmentation of the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we tested it on several images. Our segmentation approach is evaluated by comparing our results with a manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the manual contours traced by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%; spleen segmentation achieved similar, promising results, with a sensitivity of 95% and a specificity of 99%.
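A minimal marker-based watershed sketch in the spirit of the pipeline above is given below, using scikit-image. The Gaussian smoothing (in place of the paper's anisotropic diffusion), the seed-based marker construction, and the structuring element sizes are illustrative assumptions.

```python
import numpy as np
from skimage import filters, morphology, segmentation

def segment_organ(ct_slice, organ_seed, background_seed):
    """Semi-automatic watershed segmentation sketch for a single CT slice.

    `organ_seed` and `background_seed` are (row, col) points supplied by the user,
    standing in for the morphological marker-extraction step of the paper.
    """
    # 1) Smooth to reduce noise before computing the gradient (anisotropic diffusion
    #    in the paper; a Gaussian filter is used here as a simple stand-in).
    smoothed = filters.gaussian(ct_slice, sigma=2)

    # 2) Improve the gradient image: a morphological closing suppresses spurious
    #    minima that cause over-segmentation.
    gradient = filters.sobel(smoothed)
    gradient = morphology.closing(gradient, morphology.disk(3))

    # 3) Build markers from the user-selected seeds and run the watershed.
    markers = np.zeros(ct_slice.shape, dtype=int)
    markers[organ_seed] = 1          # inside the organ (liver or spleen)
    markers[background_seed] = 2     # background
    labels = segmentation.watershed(gradient, markers)
    return labels == 1

def dice(seg, reference):
    """Overlap with an expert manual contour (Dice coefficient)."""
    return 2.0 * np.logical_and(seg, reference).sum() / (seg.sum() + reference.sum())
```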

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 325
7293 Intelligent and Optimized Placement for CPLD Devices

Authors: Abdelkader Hadjoudja, Hajar Bouazza

Abstract:

PLD/CPLD devices have been widely used for logic synthesis for several decades. Based on a sum-of-product-terms (PTs) architecture, PLDs/CPLDs offer a high degree of flexibility to support various application requirements. They are suitable for large combinational logic and finite state machines as well as I/O-intensive designs. CPLDs offer very predictable timing characteristics and are therefore ideal for critical control applications. This paper describes how logic synthesis techniques such as (1) XOR detection, (2) logic doubling, and (3) taking the complement of a Boolean function are combined and applied to optimize CPLD architectures based on PAL-like macrocells. Our goal is to use these techniques to minimize the number of macrocells required to implement a circuit and to minimize the delay of the mapped circuit.
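To make the first and third techniques concrete, the short truth-table sketch below detects whether a function is a pure XOR (so it can map onto a macrocell's XOR gate) and compares the minterm counts of a function and its complement (so the cheaper of the two can be implemented and inverted at the output). It is a toy illustration, not the paper's synthesis flow.

```python
from itertools import product

def truth_table(f, n):
    """Evaluate a Boolean function over all 2^n input combinations."""
    return [f(*bits) for bits in product((0, 1), repeat=n)]

def is_xor(f, n):
    """XOR detection: true if f equals the parity (XOR) of its inputs."""
    return truth_table(f, n) == [sum(bits) % 2 for bits in product((0, 1), repeat=n)]

def minterm_count(f, n):
    """Number of minterms, a crude upper bound on the product terms a PAL macrocell needs."""
    return sum(truth_table(f, n))

# Example: a 3-input parity function.
f = lambda a, b, c: a ^ b ^ c
print("XOR detected:", is_xor(f, 3))                        # True: map to the macrocell XOR gate
print("minterms of f :", minterm_count(f, 3))               # cost as a plain sum-of-products
print("minterms of f':", minterm_count(lambda a, b, c: 1 - f(a, b, c), 3))
# If f' needs fewer product terms than f, implement f' and use the macrocell's programmable
# output inversion (the "complement of a Boolean function" technique).
```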

Keywords: CPLD, doubling, optimization, XOR

Procedia PDF Downloads 282
7292 Advanced Techniques in Robotic Mitral Valve Repair

Authors: Abraham J. Rizkalla, Tristan D. Yan

Abstract:

Purpose: Durable mitral valve repair is preferred to a replacement, avoiding the need for anticoagulation or re-intervention, with a reduced risk of endocarditis. Robotic mitral repair has been gaining favour globally as a safe, effective, and reproducible method of minimally invasive valve repair. In this work, we showcase the use of the Davinci© Xi robotic platform to perform several advanced techniques, working synergistically to achieve successful mitral repair in advanced mitral disease. Techniques: We present the case of a Barlow type mitral valve disease with a tall and redundant posterior leaflet resulting in severe mitral regurgitation and systolic anterior motion. Firstly, quadrangular resection of P2 is performed to remove the excess and redundant leaflet. Secondly, a sliding leaflet plasty of P1 and P3 is used to reconstruct the posterior leaflet. To anchor the newly formed posterior leaflet to the papillary muscle, CV-4 Goretex neochordae are fashioned using the innovative string, ruler, and bulldog technique. Finally, mitral valve annuloplasty and closure of a patent foramen ovale complete the repair. Results: There was no significant residual mitral regurgitation and complete resolution of the systolic anterior motion of the mitral valve on postoperative transoesophageal echocardiography. Conclusion: This work highlights the robotic approach to complex repair techniques for advanced mitral valve disease. Familiarity with resection and sliding plasty, neochord implantation, and annuloplasty allows the modern cardiac surgeon to achieve a minimally invasive and durable mitral valve repair when faced with complex mitral valve pathology.

Keywords: robotic mitral valve repair, Barlow's valve, sliding plasty, neochord, annuloplasty, quadrangular resection

Procedia PDF Downloads 86
7291 Stock Movement Prediction Using Price Factor and Deep Learning

Authors: Hy Dang, Bo Mei

Abstract:

The development of machine learning methods and techniques has opened doors for investigation in many areas such as medicine, economics, and finance. One active research area involving machine learning is stock market prediction. This research paper considers multiple techniques and methods for stock movement prediction using historical prices or price factors. The paper explores the effectiveness of several deep learning frameworks for forecasting stocks. Moreover, an architecture (TimeStock) is proposed which takes the representation of time into account apart from the price information itself. Our model achieves a promising result that shows a potential approach to the stock movement prediction problem.
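The sketch below shows one way a time representation can sit alongside price features in a small PyTorch classifier for up/down movement. The synthetic prices, the day-of-week embedding, and the network shape are illustrative assumptions and do not reproduce the TimeStock architecture itself.

```python
import numpy as np
import torch
import torch.nn as nn

# Synthetic daily closing prices; the task is to predict whether the next day's close is higher.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.standard_normal(1200))

window = 20
X, t_feat, y = [], [], []
for i in range(window, len(prices) - 1):
    returns = np.diff(prices[i - window:i + 1]) / prices[i - window:i]
    X.append(returns)
    t_feat.append(i % 5)                        # illustrative "time representation": weekday index
    y.append(float(prices[i + 1] > prices[i]))

X = torch.tensor(np.array(X), dtype=torch.float32)
t_feat = torch.tensor(t_feat, dtype=torch.long)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

class TimeAwareClassifier(nn.Module):
    def __init__(self, window, n_time=5, emb=4):
        super().__init__()
        self.time_emb = nn.Embedding(n_time, emb)           # learned time representation
        self.net = nn.Sequential(nn.Linear(window + emb, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x, t):
        return self.net(torch.cat([x, self.time_emb(t)], dim=1))

model = TimeAwareClassifier(window)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(X, t_feat), y)
    loss.backward()
    opt.step()

acc = ((torch.sigmoid(model(X, t_feat)) > 0.5).float() == y).float().mean()
print(f"training accuracy: {acc.item():.3f}")
```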

Keywords: classification, machine learning, time representation, stock prediction

Procedia PDF Downloads 147
7290 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual efforts, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and faster delivery without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing the need for repetitive work and manual efforts. Implementing scalable CI/CD for development using cloud services like ECS (Container Management Service), AWS Fargate, ECR (to store Docker images with all dependencies), Serverless Computing (serverless virtual machines), Cloud Log (for monitoring errors and logs), Security Groups (for inside/outside access to the application), Docker Containerization (Docker-based images and container techniques), Jenkins (CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and are capable of accommodating dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure as it scales based on the need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, Serverless Computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions, performance issues, and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by only paying for the resources as they are used and increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
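The scalable-automation-testing idea of fanning test cases out to parallel workers can be sketched locally in a few lines of Python. In the setup described above, each worker would be a container task (e.g., on ECS Fargate) rather than a local process, so the code below only illustrates the orchestration pattern; the test stub and timings are assumptions.

```python
import time
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_test_case(case_id):
    """Stand-in for one automated UI/API test case; in the described architecture each of
    these would run inside its own container task rather than a local process."""
    time.sleep(0.2)                     # simulated test execution time
    return case_id, "PASS"

def run_suite(case_ids, max_workers=8):
    results = {}
    start = time.time()
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_test_case, c): c for c in case_ids}
        for fut in as_completed(futures):
            case_id, status = fut.result()
            results[case_id] = status
    print(f"{len(results)} cases finished in {time.time() - start:.1f}s "
          f"with {max_workers} parallel workers")
    return results

if __name__ == "__main__":
    run_suite(range(100), max_workers=16)
```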

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 44
7289 Factors Affecting Sustainability of a 3D Printed Object

Authors: Kadrefi Athanasia, Fronimaki Evgenia, Mavri Maria

Abstract:

3D Printing (3DP) is a distinct, disruptive technology that belongs to a wider group of manufacturing technologies, Additive Manufacturing (AM). In 3DP, a custom digital file turns into a solid object using a single computer and a 3D printer. Among multiple advantages, 3DP offers production with fewer steps compared to conventional manufacturing, lower production costs, and customizable designs. 3DP can be performed by several techniques, while the most common is Fused Deposition Modeling (FDM). FDM belongs to a wider group of AM techniques, material extrusion, where a digital file converts into a solid object using raw material (called filament) melted in high temperatures. As in most manufacturing procedures, environmental issues have been raised here, too. This study aims to review the literature on issues that determine technical and mechanical factors that affect the sustainability and resilience of a final 3D-printed object. The research focuses on the collection of papers that deal with 3D printing techniques and use keywords or phrases like ‘3D printed objects’, ‘factors of 3DP sustainability’, ‘waste materials,’ ‘infill patterns,’ and ‘support structures.’ After determining factors, a pilot survey will be conducted at the 3D Printing Lab in order to define the significance of each factor in the final 3D printed object.

Keywords: additive manufacturing, 3D printing, sustainable manufacturing, sustainable production

Procedia PDF Downloads 65
7288 Principal Component Analysis in Drug-Excipient Interactions

Authors: Farzad Khajavi

Abstract:

Studies of the interaction between active pharmaceutical ingredients (APIs) and excipients are important in the pre-formulation stage of development of all dosage forms. Analytical techniques such as differential scanning calorimetry (DSC), thermogravimetry (TG), and Fourier transform infrared spectroscopy (FTIR) are commonly used tools for investigating the compatibility or incompatibility of APIs with excipients. Sometimes the interpretation of data obtained from these techniques is difficult because of severe overlapping of the API spectrum with those of the excipients in their mixtures. Principal component analysis (PCA), as a powerful factor-analytical method, is used in these situations to resolve the data matrices acquired from these analytical techniques. Binary mixtures of the API and the excipients of interest are prepared. The peaks of the FTIR, DSC, or TG measurements of the pure API, the pure excipient, and their mixtures at different mole ratios constitute the rows of the data matrix. By applying PCA to the data matrix, the number of principal components (PCs) is determined so that they account for the total variance of the data matrix. The PCs or factors obtained from the score matrix are plotted in two-dimensional space; if the pure API, its mixture with the excipient at a high API content, and the 1:1 mixture form one cluster, while the other cluster comprises the pure excipient and its blend with the API at a high excipient content, this confirms compatibility between the API and the excipient of interest. Otherwise, incompatibility prevails in the mixture of API and excipient.
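A minimal sketch of this workflow on synthetic spectra is shown below with scikit-learn. The two Gaussian "peaks", the mixture ratios, and the clustering interpretation printed at the end are illustrative assumptions rather than real FTIR data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows of the data matrix: FTIR (or DSC/TG) traces of the pure API, the pure excipient, and
# their binary mixtures at different mole ratios. The spectra below are synthetic placeholders.
wavenumbers = np.linspace(400, 4000, 500)
api_peak = np.exp(-((wavenumbers - 1700) / 30) ** 2)
exc_peak = np.exp(-((wavenumbers - 1050) / 40) ** 2)
ratios   = [1.0, 0.75, 0.5, 0.25, 0.0]               # fraction of API in the mixture
labels   = ["API", "API:EXC 3:1", "API:EXC 1:1", "API:EXC 1:3", "EXC"]
spectra  = np.array([r * api_peak + (1 - r) * exc_peak for r in ratios])
spectra += 0.01 * np.random.default_rng(0).standard_normal(spectra.shape)

# PCA on the mean-centred data matrix; the leading PCs carry the systematic variance.
pca = PCA(n_components=2)
scores = pca.fit_transform(spectra - spectra.mean(axis=0))

print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
for name, (pc1, pc2) in zip(labels, scores):
    print(f"{name:12s}  PC1 = {pc1:7.3f}   PC2 = {pc2:7.3f}")
# In the score plot, compatibility is indicated when the API-rich samples and the 1:1 mixture
# cluster together, separate from the excipient-rich cluster.
```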

Keywords: API, compatibility, DSC, TG, interactions

Procedia PDF Downloads 133
7287 A Conceptual Framework of Impact of Lean on the Performance of Construction Industry

Authors: Jaber Shurrab, Matloub Hussain

Abstract:

The rapid pace of change in the construction industry, technological advancements, and rising costs present tremendous challenges for project managers. Project managers are under severe pressure to minimize waste and improve the efficiency of the entire operation, and the philosophy of ‘lean thinking’, whereby ‘more can be achieved with less’, is becoming very popular. Lean management has strong roots in the manufacturing industry, and over the last decade the lean philosophy has started gaining attention in the service industry as well. However, little is known about waste minimization and lean implementation in the construction industry, and this paper deals with this important issue. The primary objective of this paper is to propose a conceptual framework for the exploration of appropriate lean techniques applicable to medium and large construction companies and to measure their impact on the competitiveness and economic performance of construction companies in the United Arab Emirates (UAE). To this end, a comprehensive literature review and interviews with eight project managers of medium and large construction companies in the UAE have been conducted. It has been found that competitiveness and the reduction of waste and costs are critical to the construction industry. This is ongoing research in lean management, giving project managers a practical framework for improving the efficiency of their projects through various lean techniques. Originality/value: The significance of this research lies in increasing the effectiveness of the construction industry and influencing the development of a lean construction framework that improves lean construction practices through lean techniques, thereby contributing to the effort of applying lean techniques in the construction industry. Compared to lean manufacturing, few publications have addressed lean in the construction industry, particularly in the United Arab Emirates (UAE). This research recommends a systematic approach for implementing the anticipated framework within a cyclical look-ahead period and emphasizes the practical implications of the proposed approach.

Keywords: construction, lean, lean manufacturing, waste

Procedia PDF Downloads 286
7286 Solar Heating System to Promote the Disinfection of Water

Authors: Elmo Thiago Lins Cöuras Ford, Valentina Alessandra Carvalho do Vale

Abstract:

This work presents a heating system using low-cost alternative solar collectors to promote the disinfection of water in low-income communities that consume water contaminated by bacteria. The system consists of two solar collectors with a total area of 4 m², built using PET bottles and beer and soft drink cans. Each collector is made up of 8 PVC tubes, connected in series and operating in continuous flow. The flow rate most appropriate for generating the temperature required for disinfection will be determined. Results on the efficiency and thermal losses of the system will be presented, together with analyses of the water after it has undergone the heating process.

Keywords: disinfection of water, solar heating system, poor communities, PVC

Procedia PDF Downloads 479
7285 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with only a small modification of a commercial bench-top micro-CT (computed tomography) scanner, by introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, this needs a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with a high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require a large amount of training data to obtain high-quality reconstructions.
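A windowed speckle-tracking step of the kind referred to above can be sketched with sub-pixel phase cross-correlation, as below. The window size, step, and upsampling factor are illustrative assumptions, and the function only returns the raw per-window speckle displacements rather than a reconstructed phase map.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def track_speckles(reference, sample, window=64, step=32):
    """Windowed speckle-tracking sketch: estimate the local displacement of the speckle
    pattern between a reference image (sandpaper only) and a sample image (sandpaper plus
    object), which relates to the refraction angle / phase gradient introduced by the object."""
    r_starts = range(0, reference.shape[0] - window + 1, step)
    c_starts = range(0, reference.shape[1] - window + 1, step)
    shifts = np.zeros((len(r_starts), len(c_starts), 2))
    for i, r in enumerate(r_starts):
        for j, c in enumerate(c_starts):
            # Sub-pixel shift of the speckle pattern inside this window.
            shift, _, _ = phase_cross_correlation(
                reference[r:r + window, c:c + window],
                sample[r:r + window, c:c + window],
                upsample_factor=10)
            shifts[i, j] = shift
    return shifts   # (dy, dx) per window; the displacement field is then integrated downstream
```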

Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast

Procedia PDF Downloads 258
7284 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical, and other properties. The most interesting ones are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, known as the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor, and finally a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where a change of charge transport mechanism from thermally activated tunnelling between individual metal objects to ohmic conductivity is observed, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the help of methods of computational physics. The study consists of two parts. The first is the generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as on atomic modelling, followed by characterization of the prepared composite structures by image analysis of their sections or projections; here the various morphological methods themselves must be analysed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. The second is the study of charge transport in the composites by the kinetic Monte Carlo method, since there is a close connection between the structural and electrical properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of the conducting paths in them, in dependence on the technology of the composite films.
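The hard-sphere generation step mentioned above can be sketched as random sequential addition of non-overlapping particles, from which the filling factor follows directly. The 2-D geometry, particle radius, and box size below are illustrative assumptions, and the image analysis and kinetic Monte Carlo stages are not included.

```python
import numpy as np

def generate_composite(n_particles, radius, box=100.0, max_tries=20000, seed=0):
    """Random sequential addition of non-overlapping (hard-sphere) metal particles
    into a dielectric matrix, in 2-D for simplicity."""
    rng = np.random.default_rng(seed)
    centres = []
    tries = 0
    while len(centres) < n_particles and tries < max_tries:
        candidate = rng.uniform(radius, box - radius, size=2)
        # Hard-sphere condition: reject any candidate overlapping an existing particle.
        if all(np.linalg.norm(candidate - c) >= 2 * radius for c in centres):
            centres.append(candidate)
        tries += 1
    centres = np.array(centres)
    filling_factor = len(centres) * np.pi * radius ** 2 / box ** 2
    return centres, filling_factor

centres, ff = generate_composite(n_particles=300, radius=2.0)
print(f"placed {len(centres)} particles, filling factor = {ff:.3f}")
```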

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 393
7283 Comparing ITV Definitions From 4D CT-PET and Breath-Hold Technique with Abdominal Compression

Authors: R. D. Esposito, P. Dorado Rodriguez, D. Planes Meseguer

Abstract:

In this work, we compare the contours of the Internal Target Volume (ITV), for Stereotactic Body Radiation Therapy (SBRT) of a patient affected by a single liver metastasis, obtained from two different patient data acquisition techniques. The first technique consists of a free-breathing Computed Tomography (CT) scan acquisition, followed by exhalation breath-hold and inhalation breath-hold CT scans, all of them applying abdominal compression, while the second technique consists of a free-breathing 4D CT-PET (Positron Emission Tomography) scan. Results obtained with these two methods are consistent, which demonstrates that, at least for this specific case, both techniques are adequate for ITV contouring in SBRT treatments.

Keywords: 4D CT-PET, abdominal compression, ITV, SBRT

Procedia PDF Downloads 443
7282 Pre-Industrial Local Architecture According to Natural Properties

Authors: Selin Küçük

Abstract:

Pre-industrial architecture is the integration of natural and subsequent properties through intelligence and experience. Since various settlements were relatively industrialized or non-industrialized at different times, the term ‘pre-industrial’ does not refer to a definite period. Natural properties, which are the existing conditions and materials of the local natural environment, are climate, geomorphology, and local materials. Subsequent properties, which are all anthropological comparatives, are the culture of societies, the requirements of people, and the construction techniques that people use. Yet, after industrialization, technology took technique’s place, cultural effects were manipulated, requirements changed, and local/natural properties almost disappeared from architecture. Technology is universal and global and spreads easily; conversely, technique is time- and experience-dependent and has to have a considerable cultural background. This research is about construction techniques according to the natural properties of a region and the classification of these techniques. Understanding local architecture is only possible by searching its background, which is hard to reach. There are always changes, positive and negative, in architectural techniques through time. The archaeological layers of a region sometimes give more accurate information about the transformation of architecture. However, the natural properties of any region are the most helpful elements for perceiving construction techniques. Many international sources from different cultures are interested in local architecture and mention natural properties separately. Unfortunately, no literature deals with this subject systematically in the correct way. This research aims to develop a clear perspective on the existence of local architecture by categorizing archetypes according to natural properties. The ultimate goal of this research is to generate a clear classification of local architecture independent of subsequent (anthropological) properties across the world, in the manner of a handbook. Since local architecture is the most sustainable architecture with regard to its economic, ecological, and sociological properties, there should be extensive information about construction techniques to learn from. Constructing the same buildings all over the world is one of the main criticisms of the modern architectural system, and while this criticism goes on, the number of identical buildings without identity increases incrementally. In the post-industrial era, technology has widely taken technique’s place, cultural effects are manipulated, requirements have changed, and natural local properties have almost disappeared from architecture. This study does not urge architects to use local techniques, but it indicates the progress of pre-industrial architectural evolution, which is healthier, cheaper, and natural. Immigration from rural areas to developing/developed cities should be prohibited so that culture and construction techniques can be preserved. Since big cities have psychological, sensory, and sociological impacts on people, rural settlers can be convinced not to migrate by providing new buildings designed according to natural properties and by maintaining their settlements. Improving rural conditions would remove the economic and sociological gulf between cities and rural areas. The desired result is that, if there is no deformation (the adaptation of other traditional building forms because of immigration) or assimilation in a climatic region, very similar solutions should be found in the same climatic regions of the world, even if there is no relationship (trade, communication, etc.) among them.

Keywords: climate zones, geomorphology, local architecture, local materials

Procedia PDF Downloads 429
7281 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best known classical algorithm of the time for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also presents the prospect of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization problem. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear approximation based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
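The pattern of replacing the gradient-based parameter search with an evolutionary loop can be sketched as below. Here `qaoa_expectation` is only a smooth placeholder standing in for the Max-Cut expectation returned by a quantum simulator or device for the QAOA angles, and the simple truncation-selection-plus-mutation strategy is an illustration, not the authors' hybrid algorithm.

```python
import numpy as np

def qaoa_expectation(params):
    """Placeholder for the cost expectation returned by a simulator/device for the QAOA
    angles (gamma_1..gamma_p, beta_1..beta_p); the toy landscape below only stands in
    for that black-box evaluation."""
    return -np.sum(np.sin(params) ** 2) + 0.1 * np.sum(np.cos(3 * params))

def evolve_qaoa_angles(p=3, pop_size=20, generations=50, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    dim = 2 * p                                       # p gammas and p betas
    pop = rng.uniform(0, np.pi, size=(pop_size, dim))
    for _ in range(generations):
        # Maximising the cut value = minimising the (negative) expectation.
        fitness = np.array([-qaoa_expectation(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[::-1][:pop_size // 2]]   # truncation selection
        # Gaussian mutation of the survivors produces the next generation; each candidate
        # can be evaluated in parallel, which is one motivation cited above.
        children = parents + sigma * rng.standard_normal(parents.shape)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([-qaoa_expectation(ind) for ind in pop])]
    return best, -qaoa_expectation(best)

angles, value = evolve_qaoa_angles()
print("best cut-value estimate:", round(value, 3))
```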

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 60
7280 Identifying Lead Poisoning Risk Factors among Non-Pregnant Adults in New York City through Motivational Interviewing Techniques

Authors: Nevila Bardhi, Joanna Magda, Kolapo Alex-Oni, Slavenka Sedlar, Paromita Hore

Abstract:

The New York City Department of Health and Mental Hygiene (NYC DOHMH) receives blood lead test results for NYC residents and conducts lead poisoning case investigations for individuals with elevated blood lead levels who were exposed to lead occupationally and non-occupationally. To (1) improve participant engagement, (2) aid the identification of potential lead sources, and (3) better tailor recommendations to reduce lead exposure, Motivational Interviewing (MI) techniques were incorporated into the risk assessment interviews of non-pregnant adults by DOHMH’s Adult Lead Poisoning Prevention (ALP) Program. MI is an evidence-based counselling method used in clinical settings that has been effective in promoting behavior change by resolving ambivalence and enhancing motivation in treating both physiological and psychological health conditions. The incorporation of MI techniques into the ALP risk assessment interview was effective in improving the identification of lead sources for non-pregnant adult cases, thus allowing recommendations to reduce lead exposure to be better tailored. The embedding of MI cues in the ALP risk assessment interview also significantly increased engagement in the interview process, resulting in approximately 50 more interviews conducted per year and a decrease in interview refusals during case investigations. Additionally, the pre-MI interview completion rate was 57%, while the post-MI interview completion rate was 68%. We recommend that MI techniques be used by other lead poisoning prevention programs during lead poisoning investigations in similarly diverse populations.

Keywords: lead poisoning prevention, motivational interviewing, behavior change, lead poisoning risk factors, self-efficacy

Procedia PDF Downloads 89
7279 Enhancement of Coupler-Based Delay Line Filters Modulation Techniques Using Optical Wireless Channel and Amplifiers at 100 Gbit/s

Authors: Divya Sisodiya, Deepika Sipal

Abstract:

Optical wireless communication (OWC) is a relatively new technology in optical communication systems that allows for high-speed wireless optical communication. This research focuses on developing a cost-effective OWC system using a hybrid configuration of optical amplifiers. In addition to using EDFA amplifiers, a comparison study was conducted to determine which modulation technique is more effective for communication. This research examines the performance of an OWC system based on ASK and PSK modulation techniques by varying OWC parameters under various atmospheric conditions such as rain, mist, haze, and snow. Finally, the simulation results are discussed and analyzed.

Keywords: OWC, bit error rate, amplitude shift keying, phase shift keying, attenuation, amplifiers

Procedia PDF Downloads 132
7278 Artificial Intelligence Techniques for Enhancing Supply Chain Resilience: A Systematic Literature Review, Holistic Framework, and Future Research

Authors: Adane Kassa Shikur

Abstract:

Today’s supply chains (SC) have become vulnerable to unexpected and ever-intensifying disruptions from myriad sources. Consequently, the concept of supply chain resilience (SCRes) has become crucial to complement the conventional risk management paradigm, which has failed to cope with unexpected SC disruptions, resulting in severe consequences affecting SC performances and making business continuity questionable. Advancements in cutting-edge technologies like artificial intelligence (AI) and their potential to enhance SCRes by improving critical antecedents in the different phases have attracted the attention of scholars and practitioners. The research from academia and the practical interest of the industry have yielded significant publications at the nexus of AI and SCRes during the last two decades. However, the applications and examinations have been primarily conducted independently, and the extant literature is dispersed into research streams despite the complex nature of SCRes. To close this research gap, this study conducts a systematic literature review of 106 peer-reviewed articles by curating, synthesizing, and consolidating up-to-date literature and presents the state-of-the-art development from 2010 to 2022. Bayesian networks are the most topical ones among the 13 AI techniques evaluated. Concerning the critical antecedents, visibility is the first ranking to be realized by the techniques. The study revealed that AI techniques support only the first 3 phases of SCRes (readiness, response, and recovery), and readiness is the most popular one, while no evidence has been found for the growth phase. The study proposed an AI-SCRes framework to inform research and practice to approach SCRes holistically. It also provided implications for practice, policy, and theory as well as gaps for impactful future research.

Keywords: ANNs, risk, Bayesian networks, vulnerability, resilience

Procedia PDF Downloads 97
7277 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts is undertaken that processes, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined. A machine learning, reinforcement-based agent structurally impacts the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, producing a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations that will support improved operation during times of system degradation due to failures and infrastructure attacks.
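A Monte Carlo sketch of such a degradation metric is shown below with networkx. Random node removal stands in for the reinforcement-learning attack agent, connectivity of the surviving graph stands in for functional availability, and the two candidate topologies and the critical-fraction threshold are illustrative assumptions.

```python
import random
import networkx as nx

def availability_under_attack(architecture, critical_fraction=0.5, trials=200, seed=1):
    """Repeatedly remove nodes (degradation / attack) and record how often the surviving
    architecture still connects enough nodes to meet the availability requirement."""
    rng = random.Random(seed)
    nodes = list(architecture.nodes)
    scores = []
    for _ in range(trials):
        n_removed = rng.randint(0, len(nodes) - 1)        # attack intensity varies per trial
        degraded = architecture.copy()
        degraded.remove_nodes_from(rng.sample(nodes, n_removed))
        if degraded.number_of_nodes() == 0:
            scores.append(0.0)
            continue
        largest = max(nx.connected_components(degraded), key=len)
        scores.append(len(largest) / len(nodes))
    # Fraction of degradation scenarios in which the architecture still provides at least
    # `critical_fraction` of its functionality.
    return sum(s >= critical_fraction for s in scores) / trials

# Compare two candidate architectures (e.g., a centralized hub vs. a meshed edge/fog layout).
star = nx.star_graph(20)
mesh = nx.connected_watts_strogatz_graph(21, k=4, p=0.3, seed=1)
print("star resilience metric:", availability_under_attack(star))
print("mesh resilience metric:", availability_under_attack(mesh))
```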

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 109
7276 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm

Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy

Abstract:

IoT networks today solve various consumer problems, from home automation systems to aiding autonomous vehicles through the cooperation of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network have short lifespans, detaching and joining frequently. When an end-user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data stored in the fog devices, as a conventional static way of managing the data does not work in fog networks. The proposed solution discusses a protocol that defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog nodes using reinforcement learning, so that access to the data is determined dynamically based on the requests.

Keywords: IoT, fog networks, data stewardship, dynamic access policy

Procedia PDF Downloads 59
7275 End-to-End Spanish-English Sequence Learning Translation Model

Authors: Vidhu Mitha Goutham, Ruma Mukherjee

Abstract:

The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly require a dynamic sequence learning curve, stable, cost-free open-source models are scarce. We survey and compare current translation techniques and propose a modified sequence-to-sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path to higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder background, we use Fairseq tools to produce an end-to-end, bilingually trained Spanish-English machine translation model including source language detection. We acquire competitive results using a duo-lingo-corpus trained model to provide ready-made plug-in use for compound sentences and document translations. Our model serves as a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, it also identifies itself as a well-optimized deep neural network model and solution.

Keywords: attention, encoder-decoder, Fairseq, Seq2Seq, Spanish, translation

Procedia PDF Downloads 175
7274 A Classical Method of Optimizing Manufacturing Systems Using a Number of Industrial Engineering Techniques

Authors: John M. Ikome, Martha E. Ikome, Therese Van Wyk

Abstract:

Productivity optimization can significantly increase a company’s output through corrective action on ineffective activities, process simplification, reduction of variation, improved responsiveness, and reduction of set-up time, all of which address what is classified as waste within the manufacturing environment. Deriving a means to eliminate a number of these issues is of key importance for manufacturing organizations. This paper focuses on a number of industrial engineering techniques, including the cause-and-effect diagram, to identify and optimize the methods or systems being used. Our results show that there are a number of variations within the production processes that can significantly disrupt the expected output.

Keywords: optimization, fishbone diagram, productivity

Procedia PDF Downloads 312