Search results for: Digital Microscopy
32 Questions Categorization in E-Learning Environment Using Data Mining Technique
Authors: Vilas P. Mahatme, K. K. Bhoyar
Abstract:
Nowadays, education cannot be imagined without digital technologies, which broaden the horizons of teaching and learning processes. Several universities offer online courses, and for evaluation purposes, e-examination systems are being widely adopted in academic environments. Multiple-choice tests are extremely popular. In the move away from traditional examinations to e-examination, Moodle is being used as the Learning Management System (LMS). Moodle logs every click that students make while attempting and navigating an e-examination. Data mining has been applied in various domains, including retail sales and bioinformatics. In recent years, there has been increasing interest in the use of data mining in e-learning environments, where it has been applied to discover, extract, and evaluate parameters related to students' learning performance. The combination of data mining and e-learning is still in its infancy. Log data generated by students during an online examination can be mined to discover knowledge. In web-based applications, the number of right and wrong answers in the test result is not sufficient to assess and evaluate student performance, so assessment techniques must be intelligent enough to adapt: if a student cannot answer the question asked by the instructor, an easier question can be asked; otherwise, a more difficult question on a similar topic can be posed. To do so, it is necessary to identify the difficulty level of the questions, and the proposed work concentrates on this issue. Data mining techniques, specifically clustering, are used in this work. The method decides the difficulty level of each question, categorizing it as tough, easy, or moderate, so that questions can later be served to students based on their performance. The proposed experiment categorizes the question set and also groups the students based on their examination performance, which helps the instructor guide students more specifically. In short, the mined knowledge helps to support, guide, facilitate, and enhance learning as a whole.
Keywords: Data mining, e-examination, e-learning, Moodle.
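To make the clustering step concrete, here is a minimal sketch of one way such a categorization could be implemented; the per-question features, cluster count, and values are invented for illustration and are not from the study.

```python
# Hypothetical sketch: grouping exam questions into difficulty levels with
# k-means, using per-question statistics that could be derived from Moodle
# logs. All feature values below are illustrative, not from the study.
import numpy as np
from sklearn.cluster import KMeans

# One row per question: [fraction of correct answers, mean response time in seconds]
features = np.array([
    [0.92, 35], [0.88, 40], [0.55, 70],
    [0.50, 80], [0.20, 120], [0.15, 140],
])

# Scale features so neither dominates the Euclidean distance.
scaled = (features - features.mean(axis=0)) / features.std(axis=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

# Rank clusters by mean correctness: highest -> "easy", lowest -> "tough".
order = np.argsort([-features[kmeans.labels_ == c, 0].mean() for c in range(3)])
names = {order[0]: "easy", order[1]: "moderate", order[2]: "tough"}
for i, label in enumerate(kmeans.labels_):
    print(f"Question {i + 1}: {names[label]}")
```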
31 The Implications of Technological Advancements on the Constitutional Principles of Contract Law
Authors: Laura Çami (Vorpsi), Xhon Skënderi
Abstract:
In today's rapidly evolving technological landscape, the traditional principles of contract law are facing significant challenges. The emergence of new technologies, such as electronic signatures, smart contracts, and online dispute resolution mechanisms, is transforming the way contracts are formed, interpreted, and enforced. This paper examines the implications of these technological advancements on the constitutional principles of contract law. One of the fundamental principles of contract law is freedom of contract, which ensures that parties have the autonomy to negotiate and enter into contracts as they see fit. However, the use of technology in the contracting process has the potential to disrupt this principle. For example, online platforms and marketplaces often offer standard-form contracts, which may not reflect the specific needs or interests of individual parties. This raises questions about the equality of bargaining power between parties and the extent to which parties are truly free to negotiate the terms of their contracts. Another important principle of contract law is the requirement of consideration, which requires that each party receives something of value in exchange for their promise. The use of digital assets, such as cryptocurrencies, has created new challenges in determining what constitutes valuable consideration in a contract. Due to the ambiguity in this area, disagreements about the legality and enforceability of such contracts may arise. Furthermore, the use of technology in dispute resolution mechanisms, such as online arbitration and mediation, may raise concerns about due process and access to justice. The use of algorithms and artificial intelligence to determine the outcome of disputes may also raise questions about the impartiality and fairness of the process. Finally, it should be noted that technological advancements have many different and complex effects on the fundamental constitutional foundations of contract law. As technology continues to evolve, it will be important for policymakers and legal practitioners to consider the potential impacts on contract law and to ensure that the principles of fairness, equality, and access to justice are preserved in the contracting process.
Keywords: Technological advancements, constitutional principles, contract law, smart contracts, online dispute resolution, freedom of contract.
30 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract:
In recent years, object detection has gained much attention and become a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. Unfortunately, to the best of our knowledge, object detection under illumination with shadow consideration has not been well solved yet. Furthermore, this problem is also one of the major hurdles keeping object detection methods from practical applications. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Regarding image capturing conditions, the algorithms need to consider a variety of possible environmental factors, as the colour information, lighting and shadows vary from image to image. Existing methods mostly fail to produce the appropriate result due to variation in colour information, lighting effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, to reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
Keywords: Image processing, illumination equalization, shadow filtering, object detection, colour models, image segmentation.
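A minimal sketch of a pipeline in the spirit the abstract describes (YCrCb conversion, illumination equalization on the luma channel, automatic thresholding instead of fixed values, erosion/dilation, contours); the synthetic image, kernel size and area cut-off are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Synthetic stand-in frame: a bright object on a darker, unevenly lit scene.
img = np.full((240, 320, 3), 60, np.uint8)
cv2.rectangle(img, (80, 60), (240, 180), (180, 170, 160), -1)   # the "object"
img[:, :160] = (img[:, :160] * 0.5).astype(np.uint8)            # simulated shadow half

ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)
y_eq = cv2.equalizeHist(y)                       # reduce uneven illumination

# Otsu derives the threshold from the histogram, so no fixed value is needed.
_, mask = cv2.threshold(y_eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((5, 5), np.uint8)
mask = cv2.erode(mask, kernel, iterations=1)     # remove shadow speckle
mask = cv2.dilate(mask, kernel, iterations=2)    # restore object regions

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
objects = [c for c in contours if cv2.contourArea(c) > 500]
print(f"{len(objects)} candidate object region(s)")
```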
29 Impact of Ownership Structure on Provision of Staff and Infrastructure for Implementing Computer Aided Design Curriculum in Universities in South-East Nigeria
Authors: Kelechi E. Ezeji
Abstract:
Instruction towards acquiring skills in the use of Computer Aided Design (CAD) technologies has become a vital part of the architectural education curriculum in the digital era. Its implementation, however, requires the deployment of extra resources to build new infrastructure, acquire and maintain new equipment, retrain existing staff and recruit new staff who are knowledgeable in this area. This study sought to examine the impact that the ownership structure of Nigerian universities had on the provision of staff and infrastructure for implementing a computer aided design curriculum, with a view to developing a framework for the evaluation of appropriate implementation by the institutions. A survey research design was employed. The focus was on departments of architecture in universities in south-east Nigeria accredited by the National Universities Commission. Data were obtained in the areas of infrastructure and personnel for CAD implementation. A multi-stage stratified random sampling method was adopted. The first stage of stratification involved the accredited departments; random sampling by balloting was then carried out. At the second stage, a sample size formula was applied to obtain the number of respondents. For data analysis, analysis of variance (ANOVA) was used to test differences of means. With p < 0.05, the study found that there was a significant difference between private-funded, state-funded and federal-funded departments of architecture in the provision of personnel and infrastructure. The implication of these findings is that, for successful implementation leading to the attainment of CAD proficiency in every institution regardless of ownership structure, minimum evaluation guidelines need to be set. A regular comparison of implementation across institutions was recommended as a means of rating performance. This would inform better engagement with those who consistently show weakness, challenging them towards improvement.
Keywords: Computer-aided design, curriculum, funding, infrastructure.
28 Utilizing Fly Ash Cenosphere and Aerogel for Lightweight Thermal Insulating Cement-Based Composites
Authors: Asad Hanif, Pavithra Parthasarathy, Zongjin Li
Abstract:
Thermal insulating composites help to reduce the total power consumption in a building by creating a barrier between the external and internal environment. Such composites can be used in roofing tiles or wall panels for exterior surfaces. This study aims to develop lightweight cement-based composites for thermal insulating applications. Waste materials like silica fume (an industrial by-product) and fly ash cenosphere (FAC) (hollow micro-spherical shells obtained as a waste residue from coal-fired power plants) were used as partial replacement of cement and lightweight filler, respectively. Moreover, aerogel, a nano-porous material made of silica, was also used in different dosages for improved thermal insulating behavior, while polyvinyl alcohol (PVA) fibers were added for enhanced toughness. The raw materials, including binders and fillers, were characterized by X-Ray Diffraction (XRD), X-Ray Fluorescence spectroscopy (XRF), and Brunauer-Emmett-Teller (BET) analysis techniques, in which various physical and chemical properties of the raw materials were evaluated, like specific surface area, chemical composition (oxide form), and pore size distribution (if any). Ultra-lightweight cementitious composites were developed by varying the amounts of FAC and aerogel, with 28-day unit weights ranging from 1551.28 kg/m³ to 1027.85 kg/m³. Excellent mechanical and thermal insulating properties of the resulting composites were obtained, ranging from 53.62 MPa to 8.66 MPa in compressive strength, 9.77 MPa to 3.98 MPa in flexural strength, and 0.3025 W/(m·K) to 0.2009 W/(m·K) in thermal conductivity coefficient (measured with a QTM-500). The composites were also tested for the peak temperature difference between outer and inner surfaces when subjected to heating (in a specially designed experimental set-up) by a 275 W infrared lamp. A temperature difference of up to 16.78 °C was achieved, which indicates the outstanding ability of the developed composites to act as a thermal barrier for building envelopes. Microstructural studies were carried out by Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) for characterizing the inner structure of the composite specimens. Also, the hydration products were quantified using surface area mapping and the line scan technique in EDS. The microstructural analyses indicated excellent bonding of FAC and aerogel in the cementitious system. Also, the selective reactivity of FAC was ascertained from the SEM imagery, where partially consumed FAC shells were observed. All in all, the lightweight fillers, FAC and aerogel, helped to produce the lightweight composites due to their physical characteristics, while exceptional mechanical properties, owing to the partial reactivity of FAC, were achieved.
Keywords: Sustainable development, fly ash cenosphere, aerogel, lightweight, cement, composite.
27 A Dynamic Mechanical Thermal T-Peel Test Approach to Characterize Interfacial Behavior of Polymeric Textile Composites
Authors: J. R. Büttler, T. Pham
Abstract:
A basic understanding of interfacial mechanisms is of importance for the development of polymer composites. For this purpose, we need techniques to analyze the quality of interphases, their chemical and physical interactions, and their strength and fracture resistance. In order to investigate the interfacial phenomena in detail, advanced characterization techniques are favorable. Dynamic mechanical thermal analysis (DMTA) using a rheological system is a sensitive tool. T-peel tests were performed with this system to investigate the temperature-dependent peel behavior of woven textile composites. A model system was made of polyamide (PA) woven fabric laminated with films of polypropylene (PP) or PP modified by grafting with maleic anhydride (PP-g-MAH). Firstly, control measurements were performed with the PP matrices alone. Polymer melt investigations, as well as the extensional stress, extensional viscosity and extensional relaxation modulus at −10 °C, 100 °C and 170 °C, demonstrate similar viscoelastic behavior for films made of PP-g-MAH and its non-modified PP control. Frequency sweeps have shown that PP-g-MAH has a zero-phase viscosity of around 1600 Pa·s and the PP control has a similar zero-phase viscosity of 1345 Pa·s. Also, the gelation points are similar, at 2.42×10⁴ Pa (118 rad/s) and 2.81×10⁴ Pa (161 rad/s) for the PP control and PP-g-MAH, respectively. Secondly, the textile composite was analyzed. The extensional stress of PA66 fabric laminated with either PP control or PP-g-MAH at −10 °C, 25 °C and 170 °C for strain rates of 0.001–1 s⁻¹ was investigated. The laminates containing the modified PP need more stress for T-peeling. However, the strengthening effect due to the modification decreases with increasing temperature, and at 170 °C, just above the melting temperature of the matrix, the difference disappears. Independent of the matrix used in the textile composite, there is a decrease of extensional stress with increasing temperature. It appears that the more viscous the matrix, the weaker the laminar adhesion. Possibly, the measurement is influenced by the fact that the laminate becomes stiffer at lower temperatures. Adhesive lap-shear testing at room temperature supports the findings obtained with the T-peel test. Additional analysis of the textile composite at the microscopic level ensures that the fibers are well embedded in the matrix. Atomic force microscopy (AFM) imaging of a cross section of the composite shows no gaps between the fibers and matrix. Measurements of the water contact angle show that the MAH-grafted PP is more polar than the virgin PP, which suggests a more favorable chemical interaction of PP-g-MAH with PA, compared to the non-modified PP. In fact, this study indicates that T-peel testing by DMTA is a technique to achieve more insights into polymeric textile composites.
Keywords: Dynamic mechanical thermal analysis, interphase, polyamide, polypropylene, textile composite, T-peel test.
26 Fusion of Finger Inner Knuckle Print and Hand Geometry Features to Enhance the Performance of Biometric Verification System
Authors: M. L. Anitha, K. A. Radhakrishna Rao
Abstract:
With the advent of modern computing technology, there is an increased demand for developing recognition systems that have the capability of verifying the identity of individuals. Recognition systems are required by several civilian and commercial applications for providing access to secured resources. Traditional recognition systems, which are based on physical identities, are not sufficiently reliable to satisfy the security requirements, due to advances in forgery and identity impersonation methods. Recognizing individuals based on their unique physiological characteristics, known as biometric traits, is a reliable technique, since these traits are not transferable and cannot be stolen or lost. Since the performance of a biometric-based recognition system depends on the particular trait that is utilized, the present work proposes a fusion approach which combines the inner knuckle print (IKP) trait of the middle, ring and index fingers with the geometrical features of the hand. The hand image captured with a digital camera is preprocessed to find the finger IKP as a region of interest (ROI) and the hand geometry features. Geometrical features are represented as the distances between different key points, and IKP features are extracted by applying the local binary pattern descriptor on the IKP ROI. Decision-level AND fusion was adopted, which improved the performance of the combined scheme. The proposed approach is tested on a database collected at our institute. The proposed approach is of significance since both hand geometry and IKP features can be extracted from the palm region of the hand. The fusion of these features yields a false acceptance rate of 0.75% and a false rejection rate of 0.86% for the verification tests conducted, which is lower than the results obtained using the individual traits. The results obtained confirm the usefulness of the proposed approach and the suitability of the selected features for developing a biometric-based recognition system using features from the palmar region of the hand.
Keywords: Biometrics, hand geometry features, inner knuckle print, recognition.
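As an illustration of the decision-level AND rule, the sketch below combines a local-binary-pattern matcher for an IKP region of interest with a distance matcher for hand geometry; all thresholds and inputs are stand-ins, not the study's values.

```python
# Illustrative sketch of two matchers and decision-level AND fusion;
# thresholds, the ROI and the key-point distances are placeholders.
import numpy as np
from skimage.feature import local_binary_pattern

def ikp_histogram(roi, points=8, radius=1):
    """LBP histogram of an inner-knuckle-print region of interest."""
    lbp = local_binary_pattern(roi, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def matcher_accepts(probe, template, threshold):
    """Accept when the feature distance is below a matcher-specific threshold."""
    return np.linalg.norm(probe - template) < threshold

rng = np.random.default_rng(0)
roi_probe = rng.integers(0, 256, (64, 64)).astype(np.uint8)   # stand-in IKP ROI
roi_enrol = roi_probe.copy()

ikp_ok = matcher_accepts(ikp_histogram(roi_probe), ikp_histogram(roi_enrol), 0.1)
geom_probe = np.array([52.0, 71.5, 88.2])    # stand-in key-point distances (mm)
geom_enrol = np.array([51.6, 71.9, 87.8])
geom_ok = matcher_accepts(geom_probe, geom_enrol, 2.0)

# AND rule: both traits must match, which lowers the false acceptance rate.
print("verified" if (ikp_ok and geom_ok) else "rejected")
```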
25 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome
Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco
Abstract:
Today, a consistent segment of the world's population lives in urban areas, and this proportion will vastly increase in the coming decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate over the following years. The analysis of various types of data sets and their derived applications has incredible potential across different crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource to assist city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of different new layers of information would definitely enhance planners' capabilities to comprehend more in-depth urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining a youth economic discomfort index. The statistical composite index provides insights regarding the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly display that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground of the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
Keywords: Data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index.
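One common way to build such a composite index is to min-max normalise each indicator across zones and average them; the sketch below uses invented zone names and indicator values purely for illustration (a real study would justify its indicators and weighting scheme).

```python
# Minimal composite-index sketch: normalise indicators to a 0-1 scale,
# then average them per zone. All values are invented for illustration.
import pandas as pd

zones = pd.DataFrame({
    "youth_unemployment": [0.30, 0.18, 0.24],   # share of 18-29 unemployed
    "rent_to_income":     [0.60, 0.40, 0.48],   # housing cost burden
    "neet_rate":          [0.25, 0.12, 0.18],   # not in employment/education
}, index=["Centro", "EUR", "Tor Bella Monaca"])

# Min-max normalisation puts every indicator on a common 0-1 scale.
norm = (zones - zones.min()) / (zones.max() - zones.min())

# Equal weights here, purely as a default choice.
zones["discomfort_index"] = norm.mean(axis=1)
print(zones["discomfort_index"].sort_values(ascending=False))
```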
24 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail
Abstract:
In recent decades, medical imaging has been dominated by the use of costly film media for the review and archival of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web was produced. Web technologies have been successfully used in telemedicine applications, and the combination of web technologies with DICOM was used to design a web-based, open-source DICOM viewer. The web server allows query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic site pages for medical image visualization and processing were created using JavaScript and HTML5. The XAMPP 'Apache server' is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers a few advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with an existing system that already makes use of web technologies. A wavelet-based image compression technique is applied, in which a 2-D discrete wavelet transform is used to decompose the image; the wavelet coefficients are then entropy encoded after thresholding and transmitted, decreasing transmission time, storage cost and capacity. The performance of the compression was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter is used.
Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN.
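A small sketch of the compression step the abstract outlines (2-D DWT with the 'coif3' filter, hard thresholding, reconstruction, quality metrics); the synthetic image and threshold are placeholders, and the entropy-coding stage is omitted.

```python
# Wavelet compression sketch with PyWavelets; values are illustrative.
import numpy as np
import pywt

x, y = np.meshgrid(np.arange(256), np.arange(256))
image = 2048 + 1500 * np.sin(x / 16.0) * np.cos(y / 24.0)   # smooth stand-in "slice"

coeffs = pywt.wavedec2(image, "coif3", level=3)
threshold = 50.0
# Keep the approximation band; zero out small detail coefficients.
thresholded = [coeffs[0]] + [
    tuple(pywt.threshold(d, threshold, mode="hard") for d in level)
    for level in coeffs[1:]
]
restored = pywt.waverec2(thresholded, "coif3")[:256, :256]

mse = np.mean((image - restored) ** 2)
psnr = 10 * np.log10(4095.0 ** 2 / mse)                     # 12-bit peak value
zeroed = sum(int((d == 0).sum()) for lvl in thresholded[1:] for d in lvl)
total = sum(d.size for lvl in thresholded[1:] for d in lvl)
print(f"MSE={mse:.2f}, PSNR={psnr:.1f} dB, "
      f"{100 * zeroed / total:.1f}% of detail coefficients zeroed")
```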
23 Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe
Authors: Umberto Di Staso, Marco Soave, Alessio Giori, Federico Prandi, Raffaele De Amicis
Abstract:
The increasing availability of information about earth surface elevation (Digital Elevation Models, DEMs) generated from different sources (remote sensing, aerial images, Lidar) poses the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Hence the need to merge the large-scale height maps that are typically available for free at the worldwide level with very specific, highly resolved datasets. On the other hand, the third dimension improves the user experience and the data representation quality, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environmental monitoring, etc. Open-source 3D virtual globes, which are a trending topic in geovisual analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, 3D virtual globes such as CesiumJS do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called 'Terrain Builder'. This tool is able to merge heterogeneous-resolution datasets and to provide multi-resolution worldwide terrain services fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
Keywords: Terrain builder, WebGL, virtual globe, CesiumJS, tiled map service, TMS, height-map, regular grid, geovisual analytics, DTM.
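Conceptually, the merge rule of such a terrain builder can be sketched as: for every cell of an output tile, take the height from the finest DEM that covers it and fall back to the coarse worldwide layer elsewhere. The grids, extents and resolutions below are invented for illustration.

```python
# Toy heterogeneous-resolution DEM merge into one heightmap tile.
import numpy as np

def sample(dem, origin, res, xs, ys):
    """Nearest-neighbour lookup of DEM heights at world coordinates."""
    cols = np.clip(((xs - origin[0]) / res).astype(int), 0, dem.shape[1] - 1)
    rows = np.clip(((ys - origin[1]) / res).astype(int), 0, dem.shape[0] - 1)
    return dem[rows, cols]

coarse = np.zeros((100, 100)) + 200.0        # 90 m worldwide layer, flat 200 m
fine = np.full((100, 100), 250.0)            # 5 m local survey patch
fine_origin, fine_res = (4500.0, 4500.0), 5.0

# Build one 65x65 output tile (a common heightmap tile size).
xs, ys = np.meshgrid(np.linspace(4000, 5500, 65), np.linspace(4000, 5500, 65))
tile = sample(coarse, (0.0, 0.0), 90.0, xs, ys)

# Where the high-resolution patch covers the tile, its values win.
covered = (xs >= 4500) & (xs < 5000) & (ys >= 4500) & (ys < 5000)
tile[covered] = sample(fine, fine_origin, fine_res, xs[covered], ys[covered])
print(f"tile height range: {tile.min():.0f}-{tile.max():.0f} m")
```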
22 Methods for Material and Process Monitoring by Characterization of (Second and Third Order) Elastic Properties with Lamb Waves
Abstract:
In accordance with the Industry 4.0 concept, manufacturing process steps, as well as the materials themselves, are going to be more and more digitalized within the next years. The 'digital twin', representing the simulated and measured dataset of the (semi-finished) product, can be used to control and optimize the individual processing steps and help to reduce costs and expenditure of time in product development, manufacturing, and recycling. In the present work, two material characterization methods based on Lamb waves were evaluated and compared. For demonstration purposes, both methods were applied to a standard industrial product: copper ribbons, often used in photovoltaic modules as well as in high-current microelectronic devices. By numerically fitting the Rayleigh-Lamb dispersion model to measured phase velocities, second-order elastic constants (Young's modulus, Poisson's ratio) were determined. Furthermore, the effective third-order elastic constants were evaluated by applying elastic, 'non-destructive' mechanical stress to the samples. In this way, small microstructural variations due to mechanical preconditioning could be detected for the first time. Both methods were compared with respect to precision and inline application capabilities. The microstructure of the samples was systematically varied by mechanical loading and annealing. Changes in the elastic ultrasound transport properties were correlated with results from microstructural analysis and mechanical testing. In summary, monitoring the elastic material properties of plate-like structures using Lamb waves is valuable for inline and non-destructive material characterization and manufacturing process control. Second-order elastic constants analysis is robust over wide environmental and sample conditions, whereas the effective third-order elastic constants greatly increase the sensitivity with respect to small microstructural changes. Both Lamb wave based characterization methods fit perfectly into the Industry 4.0 concept.
Keywords: Lamb waves, industry 4.0, process control, elasticity, acoustoelasticity.
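For reference, a hedged numeric sketch of the first step mentioned above: evaluating the symmetric (S0) Rayleigh-Lamb dispersion relation and solving it for phase velocity at one frequency. The bulk wave speeds are textbook values for copper; the thickness and frequency are assumptions, not the study's settings.

```python
# Solve the symmetric Rayleigh-Lamb equation for the S0 phase velocity.
import numpy as np
from scipy.optimize import brentq

c_l, c_t = 4760.0, 2325.0          # copper longitudinal / shear speeds (m/s)
d, f = 0.2e-3, 1.0e6               # assumed plate thickness (m) and frequency (Hz)
h, w = d / 2, 2 * np.pi * f

def symmetric_rl(c_p):
    """Pole-free form of the symmetric Rayleigh-Lamb equation (zero at a mode)."""
    k = w / c_p
    p = np.sqrt(complex((w / c_l) ** 2 - k ** 2))
    q = np.sqrt(complex((w / c_t) ** 2 - k ** 2))
    val = ((q**2 - k**2) ** 2 * np.cos(p * h) * np.sin(q * h)
           + 4 * k**2 * p * q * np.sin(p * h) * np.cos(q * h))
    return val.real               # imaginary part vanishes in the regimes scanned

# Bracket the S0 root by scanning phase velocities, then refine with brentq.
grid = np.linspace(1500.0, 6000.0, 2000)
vals = [symmetric_rl(c) for c in grid]
for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if fa * fb < 0:
        print(f"S0 phase velocity ~ {brentq(symmetric_rl, a, b):.0f} m/s")
        break
```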
21 Investigation of New Method to Achieve Well Dispersed Multiwall Carbon Nanotubes Reinforced Al Matrix Composites
Authors: A.H.Javadi, Sh.Mirdamadi, M.A.Faghisani, S.Shakhesi
Abstract:
Nanostructured materials have attracted many researchers due to their outstanding mechanical and physical properties. For example, carbon nanotubes (CNTs) or carbon nanofibres (CNFs) are considered to be attractive reinforcement materials for lightweight and high-strength metal matrix composites. These composites are being projected for use in structural applications for their high specific strength, as well as in functional applications for their exciting thermal and electrical characteristics. The critical issues of CNT-reinforced MMCs include processing techniques, nanotube dispersion, the interface, strengthening mechanisms and mechanical properties. One of the major obstacles to the effective use of carbon nanotubes as reinforcements in metal matrix composites is their agglomeration and poor distribution/dispersion within the metallic matrix. In order to tap into the advantages of the properties of CNTs (or CNFs) in composites, high dispersion of the CNTs (or CNFs) and strong interfacial bonding are the key issues, which are still challenging. Processing techniques used for the synthesis of the composites have been studied with the objective of achieving a homogeneous distribution of carbon nanotubes in the matrix. Modified mechanical alloying (ball milling) techniques have emerged as promising routes for the fabrication of carbon nanotube (CNT) reinforced metal matrix composites. In order to obtain a homogeneous product, good control of the milling process, in particular control of the ball movement, is essential. Control of the ball motion during milling leads to a reduction in grinding energy and a more homogeneous product. Also, the critical inner diameter of the milling container at a particular rotational speed can be calculated. In the present work, we use conventional and modified mechanical alloying to generate a homogeneous distribution of 2 wt.% CNT within Al powders. Aluminium powder of 99% purity (Acros, 200 mesh) was used along with two different types of multiwall carbon nanotubes (MWCNTs) having different aspect ratios to produce Al-CNT composites. The composite powders were processed into bulk material by compaction and sintering, using cylindrical compaction and a tube furnace. Field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), Raman spectroscopy and a Vickers macro hardness tester were used to evaluate CNT dispersion, powder morphology, CNT damage, phase composition, mechanical properties and crystal size. Despite the success of ball milling in dispersing CNTs in Al powder, it is often accompanied by considerable strain hardening of the Al powder, which may have implications for the final properties of the composite. The results show that particle size and morphology vary with milling time. Also, by using a mixing process and sonication before mechanical alloying, together with a modified ball mill, the dispersion of the CNTs in the Al matrix improves.
Keywords: Multiwall carbon nanotube, aluminum matrix composite, dispersion, mechanical alloying, sintering.
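The critical speed referred to above follows from balancing the centrifugal force on a ball at the container wall against gravity; a minimal sketch of that standard relation (the 0.25 m diameter is an illustrative value, not from the study):

```python
# At the critical speed, m*w^2*(D/2) = m*g, so w = sqrt(2g/D).
import math

def critical_speed_rpm(inner_diameter_m):
    """Critical mill speed in rpm: n_c = (60 / (2*pi)) * sqrt(2g / D)."""
    return 60.0 / (2 * math.pi) * math.sqrt(2 * 9.81 / inner_diameter_m)

d = 0.25                                   # assumed inner diameter (m)
print(f"D = {d} m -> critical speed = {critical_speed_rpm(d):.0f} rpm")
# Mills are typically run at a fraction of this speed so the balls
# cascade and grind instead of centrifuging against the wall.
```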
20 Automated Transformation of 3D Point Cloud to Building Information Model: Leveraging Algorithmic Modeling for Efficient Reconstruction
Authors: Radul Shishkov, Petar Penchev
Abstract:
The digital era has revolutionized architectural practices, with Building Information Modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research presents a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data — a collection of data points in space, typically produced by 3D scanners — into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historical preservation.
Keywords: Algorithmic modeling, Building Information Modeling, point cloud, reconstruction.
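A toy sketch of one building block behind such pipelines, RANSAC plane fitting, which pulls a dominant planar element such as a wall out of a point cloud; the synthetic cloud and thresholds are illustrative only, not the authors' algorithm.

```python
# RANSAC plane extraction from a synthetic cloud (a noisy wall at x ~ 2 m
# plus random clutter); iteration count and tolerances are placeholders.
import numpy as np

rng = np.random.default_rng(42)
wall = np.column_stack([np.full(400, 2.0) + rng.normal(0, 0.005, 400),
                        rng.uniform(0, 5, 400), rng.uniform(0, 3, 400)])
clutter = rng.uniform(0, 5, (100, 3))
cloud = np.vstack([wall, clutter])

best_inliers, best_plane = None, None
for _ in range(200):
    p1, p2, p3 = cloud[rng.choice(len(cloud), 3, replace=False)]
    n = np.cross(p2 - p1, p3 - p1)
    if np.linalg.norm(n) < 1e-9:           # degenerate (collinear) sample
        continue
    n = n / np.linalg.norm(n)
    dist = np.abs((cloud - p1) @ n)        # point-to-plane distances
    inliers = dist < 0.01
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers, best_plane = inliers, (n, p1)

n, p0 = best_plane
print(f"plane normal ~ {np.round(n, 2)}, {best_inliers.sum()} inlier points")
# A BIM step would then classify the plane (wall/floor) from its normal
# and extent, and instantiate a corresponding parametric element.
```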
19 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications
Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso
Abstract:
The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on a time scale which spans from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performing processing unit allows the observed scene to be sensed with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
Keywords: Interferometry, MIMO RADAR, SAR, tomography.
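A minimal numeric sketch of the 'repeat-pass' interferometric step: converting the phase change of a pixel between two acquisitions into line-of-sight displacement via d = −λ·Δφ/(4π); the Ku-band wavelength and phase values are assumptions, not the system's parameters.

```python
# Phase-to-displacement conversion for one pixel; values are illustrative.
import numpy as np

wavelength = 0.0176                      # ~17 GHz Ku-band radar, in metres
phase_t0 = 0.40                          # pixel phase, first acquisition (rad)
phase_t1 = 0.55                          # same pixel, later acquisition (rad)

dphi = np.angle(np.exp(1j * (phase_t1 - phase_t0)))   # wrap to (-pi, pi]
d_los = -wavelength * dphi / (4 * np.pi)

print(f"LOS displacement = {d_los * 1e6:.1f} micrometres")
# With millisecond refresh periods, a time series of such estimates yields
# the vibration behaviour of each point in the reconstructed scene.
```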
18 Security Model of a Unified Communications and Integrated Collaborations System in the Health Sector Environment of Developing Countries: A Case of Uganda
Authors: Excellence Favor, Bakari M. M. Mwinyiwiwa
Abstract:
Access to information holds the key to the empowerment of everybody, regardless of where they live. This research has been carried out in respect of people living in developing countries, considering their plight and the complex geographical, demographic and socio-economic conditions surrounding the areas they live in, which hinder access to information and to professionals providing services, such as medical workers, and which have led to high death rates and development stagnation. Research on a Unified Communications and Integrated Collaborations (UCIC) system in the health sector of developing countries aims at creating a possible solution for bridging the digital divide among communities. The system is meant to deliver services in a seamless manner, assisting health workers situated anywhere to be reached easily and to access information, which will enhance service delivery. The proposed UCIC provides the most immersive telepresence experience for one-to-one or many-to-many meetings. Extending to locations anywhere in the world, the transformative platform delivers ultra-low operating costs through the use of general-purpose networks and of special lenses and tracking systems. The essence of this study is to create a security model for the deployment of the UCIC system in the health sector of developing countries. The model approach used for building the UCIC system security carefully considers the specific requirements of the health sector environment organization, such as the data centre; national, regional and district hospitals; and health centres IV, III, II and I, and then builds the single best possible secure network to meet their needs. The security model demonstrates how the components of the UCIC system will be protected physically and logically in the health sector environment. The UCIC system, once adopted and implemented correctly, will enhance the speed and quality of services offered by health workers. The capabilities of UCIC will help health workers shorten decision cycles, accelerate service delivery and save lives by speeding access to information and by making it possible for all health workers and patients to collaborate ubiquitously.
Keywords: Developing countries, health sector environment, security, unified communications and integrated collaborations.
17 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices
Authors: Pratik Dhabal Deo, Manoj P.
Abstract:
With the increasing number of social media users, the amount of video content available has also significantly increased. Currently, the number of smartphone users is at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of video quality assessment over the past years, and more research on various other aspects of video and images is being done. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face a lot of distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective Video Quality Assessment (VQA) metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken with the three most commonly used devices: an Android smartphone, an iOS smartphone and a Digital Single-Lens Reflex (DSLR) camera. To the videos taken on each of these devices, the six most common types of distortions that users face were applied, in addition to the already existing H.264 compression, based on four reference videos. These six applied distortions have three levels of degradation each. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on the distortions were recorded. It was found that blur is the artifact on which most of the metrics did not perform well. Thus, in order to understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the High Efficiency Video Coding (HEVC) codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate. The best-performing metric among them is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when HEVC compression is applied, the Structural Similarity (SSIM) metric and Video Multimethod Assessment Fusion (VMAF) perform significantly better.
Keywords: Distortion, metrics, recording, frame rate, video quality assessment.
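As a small illustration, the sketch below runs two of the full-reference metrics named above (PSNR and SSIM, via scikit-image) on a reference frame and a distorted copy; the synthetic frame and the crude blur stand-in are placeholders, and real use would loop over every frame of each video.

```python
# Full-reference quality metrics on one frame pair; inputs are synthetic.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(7)
reference = rng.integers(0, 256, (720, 1280), dtype=np.uint8)   # stand-in frame
blurred = reference.copy()
# Crude blur stand-in: average each pixel with its right-hand neighbour.
blurred[:, :-1] = ((reference[:, :-1].astype(int) + reference[:, 1:]) // 2).astype(np.uint8)

psnr = peak_signal_noise_ratio(reference, blurred, data_range=255)
ssim = structural_similarity(reference, blurred, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```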
16 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait
Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh
Abstract:
In order to meet the increasing demand for housing for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken on the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in their level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims at offering a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. It presents the transportation demand modeling results used to inform the planning of a multi-modal transportation system for Al Mutlaa City. This paper also presents the household survey data collection efforts undertaken using GPS devices (a first in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base-year model calibration and validation process.
Keywords: GPS-based household surveys, transportation infrastructure, origin-destination trip matrices, traffic forecasts, transportation demand modeling, travel behavior patterns.
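For context, the trip-distribution step inside such demand models is often a doubly constrained gravity model balanced with the Furness (iterative proportional fitting) procedure; the three-zone productions, attractions, travel times and deterrence parameter below are invented for illustration.

```python
# Doubly constrained gravity model with Furness balancing (toy example).
import numpy as np

productions = np.array([1000.0, 500.0, 750.0])   # trips produced per zone
attractions = np.array([800.0, 900.0, 550.0])    # trips attracted per zone
time = np.array([[5.0, 20.0, 30.0],
                 [20.0, 5.0, 15.0],
                 [30.0, 15.0, 5.0]])             # zone-to-zone times (min)

trips = np.exp(-0.1 * time)                      # exponential deterrence function
for _ in range(50):                              # Furness: alternate row/column scaling
    trips *= (productions / trips.sum(axis=1))[:, None]
    trips *= (attractions / trips.sum(axis=0))[None, :]

print(np.round(trips))                           # origin-destination trip matrix
```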
15 Developing a Web-Based Tender Evaluation System Based on Fuzzy Multi-Attributes Group Decision Making for Nigerian Public Sector Tendering
Authors: Bello Abdullahi, Yahaya M. Ibrahim, Ahmed D. Ibrahim, Kabir Bala
Abstract:
Public sector tendering has traditionally been conducted using manual paper-based processes, which are known to be inefficient, less transparent and more prone to manipulation and errors. The advent of the Internet and the World Wide Web has led to the development of numerous e-Tendering systems that address some of the problems associated with the manual paper-based tendering system. However, most of these systems rarely support the evaluation of tenders, and where they do, it is mostly based on a single decision maker, which is not suitable in public sector tendering, where, for the sake of objectivity, transparency, and fairness, it is required that the evaluation is conducted by a tender evaluation committee. Currently, in Nigeria, the public tendering process in general, and the evaluation of tenders in particular, are largely conducted using manual paper-based processes. Automating these manual processes into digital processes can help enhance the proficiency of public sector tendering in Nigeria. This paper is part of a larger study to develop an electronic tendering system that supports the whole tendering lifecycle based on Nigerian procurement law. Specifically, this paper presents the design and implementation of the part of the system that supports group evaluation of tenders based on a technique called fuzzy multi-attributes group decision making. The system was developed using object-oriented methodologies and the Unified Modelling Language, and was hypothetically applied in the evaluation of technical and financial proposals submitted by bidders. The system was validated by professionals with extensive experience in public sector procurement. The results of the validation showed that the system, called NPS-eTender, has an average rating of 74% with respect to correct and accurate modelling of the existing manual tendering domain, and an average rating of 67.6% with respect to its potential to enhance the proficiency of public sector tendering in Nigeria. Thus, based on the results of the validation, the automation of the evaluation process to support the tender evaluation committee is achievable and can lead to a more proficient public sector tendering system.
Keywords: e-Tendering, e-Procurement, public tendering, tender evaluation, tender evaluation committee, web-based group decision support system.
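A toy sketch of the fuzzy multi-attribute group-decision idea: each committee member rates each bidder per criterion with a triangular fuzzy number, ratings are averaged across members, weighted across criteria, and defuzzified for ranking. All names, scales and numbers are invented and are not the NPS-eTender implementation.

```python
# Fuzzy MAGDM toy example with triangular fuzzy numbers (low, mid, high).
import numpy as np

scale = {"poor": (0, 1, 3), "fair": (3, 5, 7), "good": (7, 9, 10)}

# ratings[bidder][criterion] -> one linguistic judgement per committee member.
ratings = {
    "Bidder A": {"technical": ["good", "good", "fair"],
                 "financial": ["fair", "fair", "good"]},
    "Bidder B": {"technical": ["fair", "poor", "fair"],
                 "financial": ["good", "good", "good"]},
}
weights = {"technical": 0.6, "financial": 0.4}

def score(bidder):
    total = np.zeros(3)
    for criterion, members in ratings[bidder].items():
        avg = np.mean([scale[m] for m in members], axis=0)   # average fuzzy rating
        total += weights[criterion] * avg                    # weighted fuzzy sum
    # Centroid defuzzification of a triangular number is (a + b + c) / 3.
    return total.mean()

for bidder in sorted(ratings, key=score, reverse=True):
    print(bidder, round(score(bidder), 2))
```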
14 Educating the Educators: Interdisciplinary Approaches to Enhance Science Teaching
Authors: Denise Levy, Anna Lucia C. H. Villavicencio
Abstract:
In a rapidly changing world, science teachers face considerable challenges. In addition to the basic curriculum, several transversal themes must be included, which demand creative and innovative strategies to be organized and integrated into traditional disciplines. In Brazil, nuclear science is still a controversial theme, and teachers themselves seem to be unaware of the issue, most often perpetuating prejudice, errors and misconceptions. This article presents the authors' experience in the development of an interdisciplinary pedagogical proposal to include nuclear science in the basic curriculum, in a transversal and integrating way. The methodology applied was based on the analysis of several normative documents that define the requirements of essential learning, competences and skills of basic education for all schools in Brazil. The didactic materials and resources were developed according to best practices for improving learning processes, privileging constructivist educational techniques, with emphasis on active learning, collaborative learning and learning through research. The material consists of an illustrated book for students, a book for teachers and a manual with activities that can articulate nuclear science with different disciplines: Portuguese, mathematics, science, art, English, history and geography. The content maintains high scientific rigor and articulates nuclear technology with topics of interest to society in the most diverse spheres, such as food supply, public health, food safety and foreign trade. Moreover, this pedagogical proposal takes advantage of the potential value of digital technologies, implementing QR codes that excite and challenge students of all ages, improving interaction and engagement. The expected results include the education of the educators for nuclear science communication in a transversal and integrating way, demystifying nuclear technology in a contextualized and significant approach. It is expected that the interdisciplinary pedagogical proposal will contribute to improving attitudes towards knowledge construction, privileging reconstructive questioning, fostering a culture of systematic curiosity and encouraging critical thinking skills.
Keywords: Science education, interdisciplinary learning, nuclear science, scientific literacy.
13 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (L) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (L_H) and the low point (L_L) of the dynamic range can then be described as L_H = G × DN_H + B and L_L = B, respectively, where DN_H is equal to 2^n − 1 (n is the quantization number of the payload). Meanwhile, the sensor's response linearity (δ) is described as the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr⁻¹·m⁻²·µm⁻¹ and −3.5 W·sr⁻¹·m⁻²·µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹·m⁻²·µm⁻¹ and the high point is 30.5 W·sr⁻¹·m⁻²·µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR, and the normalized SNR is about 59.6 when the mean value of the radiance is equal to 11.0 W·sr⁻¹·m⁻²·µm⁻¹; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹·m⁻²·µm⁻¹. Moreover, in order to validate the results, a comparison of the measured radiance with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40% and 50%, respectively, is performed. It is noted that the relative error of the calibration is within 6.6%.
Keywords: Calibration, dynamic range, radiometric resolution, SNR.
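A numeric sketch of the calibration fit described above: regressing simulated at-sensor radiance against image DN to obtain L = G × DN + B and deriving the dynamic range. The three target radiances, DN values and the 12-bit quantization are illustrative stand-ins chosen to be roughly consistent with the reported coefficients, not the study's data.

```python
# Linear radiometric calibration fit and dynamic range (illustrative data).
import numpy as np

dn = np.array([850.0, 1900.0, 3350.0])       # mean DN over each gray-scale target
radiance = np.array([3.6, 12.3, 24.3])       # simulated L (W sr^-1 m^-2 um^-1)

G, B = np.polyfit(dn, radiance, 1)           # least-squares line L = G*DN + B
r = np.corrcoef(dn, radiance)[0, 1]          # response linearity

n = 12                                       # assumed quantization bits
low, high = B, G * (2 ** n - 1) + B          # L_L = B, L_H = G*DN_H + B
print(f"G={G:.4f}, B={B:.2f}, linearity={100 * r:.1f}%, "
      f"dynamic range=[{low:.1f}, {high:.1f}]")
```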
12 Motor Coordination and Body Mass Index in Primary School Children
Authors: Ingrid Ruzbarska, Martin Zvonar, Piotr Oleśniewicz, Julita Markiewicz-Patkowska, Krzysztof Widawski, Daniel Puciato
Abstract:
Obese children will probably become obese adults, consequently exposed to an increased risk of comorbidity and premature mortality. Body weight may be indirectly determined by the continuous development of coordination and motor skills. The level of motor skills and abilities is an important factor that promotes physical activity from early childhood. The aim of the study is to thoroughly understand the internal relations between motor coordination abilities and the somatic development of prepubertal children, and to determine the effect of excess body weight on motor coordination by comparing the motor ability levels of children with different body mass index (BMI) values. The data were collected from 436 children aged 7-10 years, without health limitations, fully participating in school physical education classes. Body height was measured with portable stadiometers (Harpenden, Holtain Ltd.), and body mass with a digital scale (HN-286, Omron). Motor coordination was evaluated with the Kiphard-Schilling body coordination test, Körperkoordinationstest für Kinder. The Shapiro-Wilk test was used to verify the data distribution. The correlation analysis revealed a statistically significant negative association between dynamic balance and BMI, as well as between the motor quotient and BMI (p<0.01), for both boys and girls. The results showed no effect of gender on the difference in the observed trends. The analysis of variance proved statistically significant differences between normal-weight children and their overweight or obese counterparts. Coordination abilities probably play an important role in preventing or moderating the negative trajectory leading to childhood overweight and obesity. At this age, the development of coordination abilities should become a key strategy, targeted at the long-term prevention of obesity and the promotion of an active lifestyle in adulthood. Motor performance is essential for implementing a healthy lifestyle in childhood already. Physical inactivity apparently results in motor deficits and a sedentary lifestyle in children, which may be accompanied by excess energy intake and overweight.
Keywords: Childhood, KTK test, physical education, psychomotor competence.
11 Achieving Implementable Nature-Based Solutions While Reshaping Architectural Education: A Case Study of URBiNAT and BUILD Solutions
Authors: C. Farinea, A. Conserva, F. Demeur
Abstract:
Nature has often been something humans have fought against. However, with the changing climate and urban challenges such as air pollution and food shortages, to name but a few, it has never been more crucial to work with nature to find solutions that can help us to adapt to the current planetary situation and mitigate the challenges that we will continue to face in the future. Nature-based solutions (NBS) have been gaining ground as one strategy that can help to create more sustainable solutions for our planet and simultaneously, provide several ecosystem services. As designers, there are a lot of insights that can be extracted and gained from nature. However, nature is a complex and sometimes difficult to predict system and its implementation in cities requires a multidisciplinary knowledge. To keep up with the solutions and prepare the future generations of architects and designers with the skills to be able to implement NBS, educational systems also have to adapt with the times. Architecture is no longer solely about drawing buildings with beautiful forms. It is no longer discipline bound. With the input from different disciplines, the implementation of NBS can be significantly more successful. Transdisciplinary strategies can encourage architects and designers to think beyond their discipline, and ensure the success and realization of the NBS. The paper will demonstrate how transdisciplinary teaching methodologies, including also taking part in participatory processes with experts intended as gathering local knowledge, can be implemented with architectural master students to achieve implementable NBS. Through two projects co-funded by the European Union, strategies such as participatory co-design and transdisciplinary start-ups were implemented into seminars that focused on the development of NBS with a transdisciplinary approach. Within the “Design with Living Systems” seminar, students took part in participatory co-design strategies with experts to design solutions that will be implemented in Porto as part of a healthy corridor, and that respond to the needs of the users and site. On the other hand, within the “Design for Living Systems” seminar, the transdisciplinary start-up approach created start-ups with students of architecture, business and biology focusing on identifying a problem and designing a NBS as a product. Both seminars proved to be successful in achieving implementable NBS through strategies of transdisciplinary education and gave the students the skill sets to be able to work with nature in their future careers.
Keywords: Architectural higher education, digital fabrication, nature-based solutions, transdisciplinary approaches.
10 Analyzing Political Cartoons in Arabic-Language Media after Trump's Jerusalem Move: A Multimodal Discourse Perspective
Authors: Inas Hussein
Abstract:
Communication in the modern world is increasingly becoming multimodal due to globalization and the digital space we live in which have remarkably affected how people communicate. Accordingly, Multimodal Discourse Analysis (MDA) is an emerging paradigm in discourse studies with the underlying assumption that other semiotic resources such as images, colours, scientific symbolism, gestures, actions, music and sound, etc. combine with language in order to communicate meaning. One of the effective multimodal media that combines both verbal and non-verbal elements to create meaning is political cartoons. Furthermore, since political and social issues are mirrored in political cartoons, these are regarded as potential objects of discourse analysis since they not only reflect the thoughts of the public but they also have the power to influence them. The aim of this paper is to analyze some selected cartoons on the recognition of Jerusalem as Israel's capital by the American President, Donald Trump, adopting a multimodal approach. More specifically, the present research examines how the various semiotic tools and resources utilized by the cartoonists function in projecting the intended meaning. Ten political cartoons, among a surge of editorial cartoons highlighted by the Anti-Defamation League (ADL) - an international Jewish non-governmental organization based in the United States - as publications in different Arabic-language newspapers in Egypt, Saudi Arabia, UAE, Oman, Iran and UK, were purposively selected for semiotic analysis. These editorial cartoons, all published during 6th–18th December 2017, invariably suggest one theme: Jewish and Israeli domination of the United States. The data were analyzed using the framework of Visual Social Semiotics. In accordance with this methodological framework, the selected visual compositions were analyzed in terms of three aspects of meaning: representational, interactive and compositional. In analyzing the selected cartoons, an interpretative approach is being adopted. This approach prioritizes depth to breadth and enables insightful analyses of the chosen cartoons. The findings of the study reveal that semiotic resources are key elements of political cartoons due to the inherent political communication they convey. It is proved that adequate interpretation of the three aspects of meaning is a prerequisite for understanding the intended meaning of political cartoons. It is recommended that further research should be conducted to provide more insightful analyses of political cartoons from a multimodal perspective.
Keywords: Multimodal discourse analysis, multimodal text, political cartoons, visual modality.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 15489 Bioleaching of Metals Contained in Spent Catalysts by Acidithiobacillus thiooxidans DSM 26636
Authors: Andrea M. Rivas-Castillo, Marlenne Gómez-Ramirez, Isela Rodríguez-Pozos, Norma G. Rojas-Avelizapa
Abstract:
Spent catalysts are considered hazardous residues of major concern, mainly due to the simultaneous presence of several metals in elevated concentrations. Although hydrometallurgical, pyrometallurgical and chelating-agent methods are available to remove and recover some metals contained in spent catalysts, these procedures generate potentially hazardous wastes and emit harmful gases. Thus, biotechnological treatments are currently gaining importance as a way to avoid the negative impacts of chemical technologies. To this end, diverse microorganisms have been used to assess the removal of metals from spent catalysts, comprising bacteria, archaea and fungi, whose resistance and metal uptake capabilities differ depending on the microorganism tested. Acidophilic sulfur-oxidizing bacteria, namely Acidithiobacillus thiooxidans and Acidithiobacillus ferrooxidans, have been used to investigate the biotreatment and extraction of valuable metals from spent catalysts, as they are able to produce leaching agents such as sulfuric acid and sulfur oxidation intermediates. In the present work, the ability of A. thiooxidans DSM 26636 to bioleach metals contained in five different spent catalysts was assessed by growing the culture in modified Starkey mineral medium (with elemental sulfur at 1%, w/v) and 1% (w/v) pulp density of each residue for up to 21 days at 30 °C and 150 rpm. Sulfur-oxidizing activity was evaluated periodically by determining the sulfate concentration in the supernatants according to the NMX-k-436-1977 method. The production of sulfuric acid in the supernatants was assessed as well, by titration with 0.5 M NaOH using bromothymol blue as the acid-base indicator, and by measuring pH with a digital potentiometer. Inductively Coupled Plasma - Optical Emission Spectrometry was used to analyze metal removal from the five spent catalysts by A. thiooxidans DSM 26636. The results show that, as could be expected, sulfuric acid production is directly related to the decrease in pH and to the highest metal removal efficiencies. It was observed that Al and Fe were consistently removed from refinery spent catalysts regardless of their origin and previous usage, although removals varied from 9.5 ± 2.2 to 439 ± 3.9 mg/kg for Al and from 7.13 ± 0.31 to 368.4 ± 47.8 mg/kg for Fe, depending on the spent catalyst tested. Bioleaching of metals such as Mg, Ni and Si was also obtained from automotive spent catalysts, with removals of up to 66 ± 2.2, 6.2 ± 0.07 and 100 ± 2.4 mg/kg, respectively. Hence, the data presented here demonstrate the potential of A. thiooxidans DSM 26636 for the simultaneous bioleaching of metals contained in spent catalysts of diverse provenance.
Keywords: Acidithiobacillus thiooxidans, spent catalysts, bioleaching, metals, sulfuric acid, sulfur-oxidizing activity.
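As a concrete illustration of the titration calculation described above, the following minimal Python sketch converts the NaOH volume consumed at the bromothymol blue endpoint into a sulfuric acid concentration, using the 2:1 NaOH:H2SO4 neutralization stoichiometry. The numerical values are illustrative placeholders, not data from the study.

```python
# Sketch of the acid-base titration calculation; values are hypothetical.
# Neutralization: H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O (2 mol NaOH per mol H2SO4).

def h2so4_concentration(v_naoh_ml: float, c_naoh: float, v_sample_ml: float) -> float:
    """Return the H2SO4 molarity of the titrated supernatant sample."""
    mol_naoh = c_naoh * v_naoh_ml / 1000.0   # moles of NaOH at the endpoint
    mol_h2so4 = mol_naoh / 2.0               # stoichiometric factor of 2
    return mol_h2so4 / (v_sample_ml / 1000.0)

# Example: 4.0 mL of 0.5 M NaOH neutralizing a 10 mL sample gives 0.1 M H2SO4.
print(h2so4_concentration(4.0, 0.5, 10.0))
```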
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 10318 How Virtualization, Decentralization and Network Building Change the Manufacturing Landscape: An Industry 4.0 Perspective
Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg
Abstract:
The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities to aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies in the German manufacturing industry adjust their production to focus on customized products and fast time to market. Leveraging the advantages of novel production strategies such as Agile Manufacturing and Mass Customization, manufacturing companies transform into integrated networks in which companies unite their core competencies. Here, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Company boundaries dissolve as autonomous systems exchange data gathered by embedded systems throughout the entire value chain. Through the inclusion of Cyber-Physical Systems, advanced communication between machines becomes tantamount to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the development of Industry 4.0 in the literature and reviews the associated research streams. We analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks. We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption and rejection of Industry 4.0 practices from a managerial point of view. Our findings contribute to the emerging research stream on Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.
Keywords: Industry 4.0., Mass Customization, Production networks, Virtual Process-Chain.
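The abstract names cluster analysis but not a specific algorithm or feature representation. As a hedged illustration only, a standard choice such as k-means over TF-IDF features of the harvested article texts might look as follows; the texts, cluster count and labels are placeholders.

```python
# Hypothetical sketch of clustering literature sub-topics into the three
# research fields; not the authors' actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "mass customization enables individualized production of products",
    "end-to-end engineering along a virtual process chain",
    "production networks unite core competencies of partner companies",
    # ... article texts harvested from the eight journals
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for text, label in zip(abstracts, labels):
    print(label, text[:40])
```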
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 317307 Power and Delay Optimized Graph Representation for Combinational Logic Circuits
Authors: Padmanabhan Balasubramanian, Karthik Anantha
Abstract:
Structural representation and technology mapping of a Boolean function is an important problem in the design of non-regenerative digital logic circuits (also called combinational logic circuits). Library-aware function manipulation offers a solution to this problem. Compact multi-level representations of binary networks based on simple circuit structures, such as AND-Inverter Graphs (AIG) [1] [5], NAND Graphs, OR-Inverter Graphs (OIG), AND-OR Graphs (AOG), AND-OR-Inverter Graphs (AOIG), AND-XOR-Inverter Graphs, and Reduced Boolean Circuits [8], exist in the literature. In this work, we discuss a novel and efficient graph realization for combinational logic circuits, represented using a NAND-NOR-Inverter Graph (NNIG), which is composed of only two-input NAND (NAND2), two-input NOR (NOR2) and inverter (INV) cells. The networks are constructed on the basis of irredundant disjunctive and conjunctive normal forms, after factoring, comprising terms with minimum support. Construction of a NNIG for a non-regenerative function in normal form is straightforward, whereas for the complementary phase it is developed by considering a virtual instance of the function. The choice of the best NNIG for a given function is based upon the literal count, cell count and DAG node count of the implementation at the technology-independent stage; in case of a tie, the final decision is made after extracting the physical design parameters. We have considered the AIG representation for the reduced disjunctive normal form and the best of OIG/AOG/AOIG for the minimized conjunctive normal forms. This is necessitated by the nature of certain functions, such as Achilles'-heel functions. NNIGs are found to exhibit a 3.97% lower node count compared to AIGs and OIG/AOG/AOIGs, and consume 23.74% and 10.79% fewer library cells than AIGs and OIG/AOG/AOIGs, respectively, for the various samples considered. We compare the power efficiency and delay improvement achieved by optimal NNIGs over minimal AIGs and OIG/AOG/AOIGs for various case studies. In comparison with functionally equivalent, irredundant and compact AIGs, NNIGs report mean savings in power and delay of 43.71% and 25.85%, respectively, after technology mapping with a 0.35 micron TSMC CMOS process. In comparison with OIG/AOG/AOIGs, NNIGs demonstrate average savings in power and delay of 47.51% and 24.83%. With respect to the device count needed for implementation in static CMOS logic style, NNIGs utilize 37.85% and 33.95% fewer transistors than their AIG and OIG/AOG/AOIG counterparts.
Keywords: AND-Inverter Graph, OR-Inverter Graph, Directed Acyclic Graph, Low power design, Delay optimization.
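To make the graph structure and the DAG node-count metric concrete, the sketch below builds a small NAND/NOR/inverter network in Python and counts its distinct internal cells. It is an independent reconstruction of the idea for illustration, not the authors' tool; the cell set (NAND2, NOR2, INV) follows the NNIG definition above.

```python
# Toy NNIG-style DAG; shared sub-graphs are counted once, as in a DAG node count.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    op: str            # "NAND2", "NOR2", "INV" or "VAR"
    args: tuple = ()

def VAR(name): return Node("VAR", (name,))
def INV(a): return Node("INV", (a,))
def NAND2(a, b): return Node("NAND2", (a, b))
def NOR2(a, b): return Node("NOR2", (a, b))   # available for conjunctive forms

def dag_nodes(root, seen=None):
    """Count distinct internal cells reachable from root."""
    seen = set() if seen is None else seen
    if root.op == "VAR" or root in seen:
        return 0
    seen.add(root)
    return 1 + sum(dag_nodes(arg, seen) for arg in root.args)

# XOR, i.e. a'b + ab', realized with NAND2 and INV cells only:
a, b = VAR("a"), VAR("b")
xor = NAND2(NAND2(a, INV(b)), NAND2(INV(a), b))
print(dag_nodes(xor))  # 5 cells: three NAND2 and two INV
```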
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 20496 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had in fact been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts, and then sediment maps of the continental shelves of Europe and North America. The current ocean sediment map presented here was started from UNESCO's general map of the deep ocean floor. This map was adapted using a single sediment classification to present all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are examined; where they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that had until then been represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region; this step makes it possible to produce a regional synthesis map, with generalizations applied where the data are over-precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and permits a new digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of quality variability. The map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and areal data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in seabed characterization made during the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. Much work nevertheless remains for some regions, which are still based on data acquired more than half a century ago.
Keywords: Marine sedimentology, seabed map, sediment classification, World Ocean.
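The classification-transcription step described above can be pictured as a legend lookup from each source map into the unified granularity legend. The sketch below is hypothetical: the class names are examples, not Shom's actual transcription table.

```python
# Hypothetical legend transcription; class names are illustrative only.
SOURCE_TO_GRANULARITY = {
    "gravelly sand": "coarse sand and gravel",
    "sandy mud": "muddy sand",
    "mud": "mud",
    "sand": "sand",
}

def transcribe(legend_entries):
    """Map source legend entries to the unified legend, flagging unknowns."""
    return [SOURCE_TO_GRANULARITY.get(e.lower(), "unclassified") for e in legend_entries]

print(transcribe(["Sand", "Sandy mud", "Volcanic ash"]))
# ['sand', 'muddy sand', 'unclassified'] -> unknowns trigger manual review
```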
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 10375 A Comparative Study of Cardio Respiratory Efficiency between Aquatic and Track and Field Performers
Authors: Sumanta Daw, Gopal Chandra Saha
Abstract:
The present study was conducted to explore basic pulmonary functions, which generally vary according to the bio-physical characteristics of sports performers, including age, height, body weight and environment. Regular and specific training also changes an athlete's characteristics and produces a positive effect on physiological functioning, mostly upon cardio-pulmonary efficiency, thereby improving the body's mechanics. The objective of the present study was to compare cardio-respiratory functions between aquatic and track and field performers. As cardio-respiratory functions are influenced by pulse rate and blood pressure (systolic and diastolic), both of these factors were also taken into consideration. The components selected under cardio-respiratory functions for the present study were: i) the FEV1/FVC ratio (forced expiratory volume in one second divided by forced vital capacity, i.e., the percentage of lung capacity that can be exhaled in one second); ii) FEV1 (the amount of air that can be forced out of the lungs in one second); and iii) FVC (forced vital capacity, the greatest total amount of air that can be forcefully breathed out after breathing in as deeply as possible). All three selected components of cardio-respiratory efficiency were measured by spirometry. Pulse rate was determined manually at the radial artery, located on the thumb side of the wrist. Blood pressure was assessed by sphygmomanometer. All data were taken in the resting condition. 36 subjects were selected for the present study, of whom 18 were water polo players and the rest were sprinters. The subjects were aged between 18 and 23 years. The obtained data, in the form of digital scores, were treated statistically to obtain results and draw conclusions. The mean and standard deviation (SD) were used as descriptive statistics, and the significance of the difference between the two groups was assessed with the help of the statistical t-test. It was found that all three components, i.e., the FEV1/FVC ratio (p-value 0.0148 < 0.05), FEV1 (p-value 0.0010 < 0.01) and FVC (p-value 0.0067 < 0.01), differed significantly, with water polo players proving to be better in terms of cardio-respiratory functions than sprinters. The study thus clearly suggests that the exercise training, as well as the aquatic practice environment associated with water polo, plays an important role in determining better cardio-respiratory efficiency than track and field training. The outcome of the present study suggests that land-based activities may have less impact on lung function than water-based activities.
Keywords: Cardio-respiratory efficiency, spirometry, water polo players, sprinters.
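The group comparison described above amounts to an independent-samples t-test on each spirometric component. A minimal sketch follows, with placeholder values rather than the study's measurements.

```python
# Hedged sketch of the two-group comparison; arrays are hypothetical values.
import numpy as np
from scipy import stats

fev1_fvc_water_polo = np.array([0.86, 0.84, 0.88, 0.85])  # placeholder ratios
fev1_fvc_sprinters = np.array([0.80, 0.79, 0.82, 0.78])   # placeholder ratios

t, p = stats.ttest_ind(fev1_fvc_water_polo, fev1_fvc_sprinters)
print(f"mean/SD (water polo): {fev1_fvc_water_polo.mean():.3f} / "
      f"{fev1_fvc_water_polo.std(ddof=1):.3f}")
print(f"t = {t:.3f}, p = {p:.4f}")
```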
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 6094 Assessment of Influence of Short-Lasting Whole-Body Vibration on Joint Position Sense and Body Balance–A Randomised Masked Study
Authors: Anna Słupik, Anna Mosiołek, Sebastian Wójtowicz, Dariusz Białoszewski
Abstract:
Introduction: Whole-body vibration (WBV) uses high-frequency mechanical stimuli generated by a vibration plate and transmitted through bone, muscle and connective tissues to the whole body. Research has shown that long-term vibration-plate training improves neuromuscular facilitation, especially in the afferent neural pathways responsible for conducting vibration and proprioceptive stimuli, as well as muscle function, balance and proprioception. Some researchers suggest that the vibration stimulus briefly inhibits the conduction of afferent signals from proprioceptors and can interfere with the maintenance of body balance. The aim of this study was to evaluate the influence of a single set of exercises associated with whole-body vibration on joint position sense and body balance. Material and methods: The study enrolled 55 people aged 19-24 years, randomly divided into a test group (30 persons) and a control group (25 persons). Both groups performed the same set of exercises on a vibration plate. A frequency of 20 Hz and an amplitude of 3 mm were used in the test group; the control group performed the exercises on the vibration plate while it was off. All participants were instructed to perform six dynamic exercises lasting 30 seconds each, with a 60-second rest between them. The exercises involved the large muscle groups of the trunk, pelvis and lower limbs. Measurements were carried out before and immediately after exercise. Joint position sense (JPS) was measured in the knee joint for the starting position at 45° in an open kinematic chain, and JPS error was measured using a digital inclinometer. Balance was assessed in a standing position with both feet on the ground, with the eyes open and closed (each test lasting 30 s), using Matscan with FootMat 7.0 SAM software. The surface of the ellipse of confidence and the front-back and right-left sway were measured to assess balance. Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, either before or after the exercise (p > 0.05). JPS did not change in either the test (10.7° vs. 8.4°) or the control group (9.0° vs. 8.4°). No significant differences were shown in any of the test parameters during balance tests with the eyes open or closed in either group (p > 0.05). Conclusions: 1. No deterioration in proprioception or balance was observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimuli conduction may have only a short-lasting effect, present only while the vibration stimulus is applied. 2. Short-term use of vibration in treatment does not impair proprioception and seems to be safe for patients with proprioceptive impairment. 3. These results need to be supplemented with an assessment of proprioception during the application of vibration stimuli; additionally, the impact of the vibration parameters used in the exercises should be evaluated.
Keywords: Balance, joint position sense, proprioception, whole body vibration.
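The abstract uses the surface of the confidence ellipse as a balance measure without giving the formula; FootMat's exact algorithm is not stated, so the following is only a plausible reconstruction using a standard posturographic computation of the 95% confidence ellipse area from sway samples.

```python
# Standard 95% confidence ellipse area from centre-of-pressure sway samples;
# offered as an assumed reconstruction, not FootMat's documented method.
import numpy as np

def ellipse_area_95(x, y):
    """Area of the 95% confidence ellipse of the (x, y) sway trajectory."""
    cov = np.cov(np.vstack([x, y]))      # 2x2 covariance of the CoP samples
    eigvals = np.linalg.eigvalsh(cov)    # variances along the principal axes
    chi2_95_2dof = 5.991                 # chi-square 0.95 quantile, 2 dof
    return np.pi * chi2_95_2dof * np.sqrt(eigvals[0] * eigvals[1])

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, 900)            # e.g. a 30 s trial sampled at 30 Hz
y = rng.normal(0.0, 3.0, 900)
print(ellipse_area_95(x, y))
```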
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 16083 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator
Authors: Jaeyoung Lee
Abstract:
Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of numerous vehicle sensors, it is difficult to consistently provide high perception performance in driving environments that vary with time of day and season. Image segmentation methods using deep learning, which have evolved rapidly in recent years, provide stably high recognition performance across diverse road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are typically optimized for GPU environments, which degrades their performance on embedded processors equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSP), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, so as to provide reliable driving-environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA, and the loss of information caused by fixing the channel count is compensated by increasing the number of parallel branches. Second, an efficient convolution is selected depending on the size of the activation: since the MMA has a fixed structure, normal convolution may be more efficient than depthwise separable convolution, depending on memory access overhead, so the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable context is extracted using an extended atrous spatial pyramid pooling (ASPP) module. The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. The module consists of two ASPPs to obtain high-quality context using the restored shape, without global average pooling paths, since that layer would use the MMA as a simple adder. To verify the proposed method, an experiment was conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process a 640 x 480 image in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks on the Cityscapes validation set.
Keywords: Edge network, embedded network, matrix multiplication accelerator (MMA), semantic segmentation network.
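One design rule in the abstract, selecting between normal and depthwise separable convolution according to the output stride at a fixed channel count, can be sketched as follows. This is an illustrative PyTorch reconstruction under assumed thresholds, not the authors' MMANet implementation.

```python
# Hypothetical reconstruction of the stride-dependent convolution choice.
import torch
import torch.nn as nn

def make_conv(ch: int, output_stride: int) -> nn.Module:
    # Assumption for illustration: on large activations (small output stride)
    # memory access dominates, so a single normal convolution is used; deeper
    # in the network a depthwise separable pair is assumed cheaper on the MMA.
    if output_stride <= 8:
        return nn.Conv2d(ch, ch, 3, padding=1, bias=False)
    return nn.Sequential(
        nn.Conv2d(ch, ch, 3, padding=1, groups=ch, bias=False),  # depthwise
        nn.Conv2d(ch, ch, 1, bias=False),                        # pointwise
    )

x = torch.randn(1, 64, 480 // 8, 640 // 8)  # fixed 64-channel activation map
print(make_conv(64, 8)(x).shape)    # normal 3x3 convolution path
print(make_conv(64, 16)(x).shape)   # depthwise separable path (same shape)
```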
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 465