Search results for: geometry in architecture
183 Concepts of Technologies Based on Smart Materials to Improve Aircraft Aerodynamic Performance
Authors: Krzysztof Skiba, Zbigniew Czyz, Ksenia Siadkowska, Piotr Borowiec
Abstract:
The article presents selected concepts of technologies that use intelligent materials in aircraft in order to improve their performance. Most of the research focuses on solutions that improve the performance of fixed-wing aircraft, due to their previously dominant market share. Recently, rotorcraft development has intensified, covering not only helicopters but also gyroplanes and unmanned aerial vehicles that use rotors and are capable of vertical take-off and landing. There are many different technologies to change the shape of the aircraft or its elements. Piezoelectric deformable actuator systems can be applied to the active control of vibration damping in the aircraft tail structure. Wires made of shape memory alloys (SMA) could be used instead of hydraulic cylinders in the rear part of the aircraft flap. An aircraft made of intelligent materials (piezoelectrics and SMA) is the subject of one of NASA's projects, which provides the possibility of changing the wing shape coefficient by 200%, the wing surface by 50%, and wing deflections by 20 degrees. Active surfaces made of shape memory alloys could be used to control swirls in the flowing stream. An intelligent control system for helicopter blades is a method for the active adaptation of blades to flight conditions and the reduction of vibrations caused by the rotor. Shape memory alloys are capable of recovering their pre-programmed shapes. They are divided into three groups: nickel-titanium-based, copper-based, and ferromagnetic. Due to its strongest shape memory effect and best vibration damping ability, the Ni-Ti alloy is the most commercially important. The subject of this work was to prepare a conceptual design of a rotor blade with SMA actuators. The scope of work included the 3D design of the supporting rotor blade, the 3D design of beams enabling the geometry to be changed by varying the angle of rotation, and FEM (Finite Element Method) analysis. The FEM analysis was performed using NX 12 software in the Pre/Post module, which includes extended finite element modeling tools and visualizations of the obtained results. Calculations are presented for two versions of the blade girders. For the FEM analysis, three types of materials were used for comparison purposes (ABS, aluminium alloy 7057, steel C45). The analysis of internal stresses and extreme displacements of the crossbar edges was carried out. For girder no. 1, the internal stresses in all materials were close to the yield point. For girder no. 2, the stresses decreased by about 45%. As a result of the displacement analysis, it was found that the best solution was girder no. 1 made of ABS: a displacement of about 0.5 mm was obtained, which turned the upper and lower crossbars by an angle of 3.59 degrees. This is the largest deviation of all the tests. The smallest deviation was obtained for beam no. 2 made of steel. The displacement value of the second girder solution was approximately 30% lower than that of the first. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
Keywords: aircraft, helicopters, shape memory alloy, SMA, smart material, unmanned aerial vehicle, UAV
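A minimal worked example of the displacement-to-rotation geometry reported above, assuming the crossbar rotates rigidly about the girder axis; the lever arm is not given in the abstract and is back-calculated here from the reported figures:

```python
import math

def rotation_angle_deg(displacement_mm: float, lever_arm_mm: float) -> float:
    """Rigid-body rotation implied by a transverse tip displacement."""
    return math.degrees(math.atan2(displacement_mm, lever_arm_mm))

# lever arm implied by the reported 0.5 mm displacement and 3.59 deg rotation
r = 0.5 / math.tan(math.radians(3.59))
print(f"implied lever arm: {r:.2f} mm")               # ~8.0 mm (assumption)
print(f"check: {rotation_angle_deg(0.5, r):.2f} deg")  # ~3.59 deg
```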
Procedia PDF Downloads 138
182 Wind Turbine Scaling for the Investigation of Vortex Shedding and Wake Interactions
Authors: Sarah Fitzpatrick, Hossein Zare-Behtash, Konstantinos Kontis
Abstract:
Traditionally, the focus of horizontal axis wind turbine (HAWT) blade aerodynamic optimisation studies has been the outer working region of the blade. However, recent works seek to better understand, and thus improve upon, the performance of the inboard blade region to enhance power production, maximise load reduction and better control the wake behaviour. This paper presents the design considerations and characterisation of a wind turbine wind tunnel model devised to further the understanding and fundamental definition of horizontal axis wind turbine root vortex shedding and interactions. Additionally, the application of passive and active flow control mechanisms – vortex generators and plasma actuators – to allow for the manipulation and mitigation of unsteady aerodynamic behaviour at the blade inboard section is investigated. A static, modular blade wind turbine model has been developed for use in the University of Glasgow's de Havilland closed return, low-speed wind tunnel. The model components - which comprise a half-span blade, hub, nacelle and tower - are scaled using the equivalent full span radius, R, for appropriate Mach and Strouhal numbers, and to achieve a Reynolds number in the range of 1.7×10⁵ to 5.1×10⁵ for operational speeds up to 55 m/s. The half blade is constructed to be modular and fully dielectric, allowing for the integration of flow control mechanisms with a focus on plasma actuators. Investigations of root vortex shedding and the subsequent wake characteristics using qualitative methods – smoke visualisation, tufts and china clay flow – and quantitative methods – including particle image velocimetry (PIV), hot wire anemometry (HWA), and laser Doppler anemometry (LDA) – were conducted over a range of blade pitch angles (0 to 15 degrees) and Reynolds numbers. This allowed for the identification of shed vortical structures from the maximum chord position, the transitional region where the blade aerofoil blends into a cylindrical joint, and the blade-nacelle connection. Analysis of the trailing vorticity interactions between the wake core and freestream shows that vortex meander and diffusion are notably affected by the Reynolds number. It is hypothesized that the shed vorticity from the blade root region directly influences and exacerbates the nacelle wake expansion in the downstream direction. As the design of the inboard blade region is, by necessity, driven by function rather than aerodynamic optimisation, a study is undertaken of the application of flow control mechanisms to manipulate the observed vortex phenomena. The designed model allows for the effective investigation of shed vorticity and wake interactions, with a focus on the accurate geometry of a root region representative of small to medium power commercial HAWTs. The studies undertaken allow for an enhanced understanding of the interplay of shed vortices and their subsequent effect in the near and far wake. This highlights areas of interest within the inboard blade area for the potential use of passive and active flow control devices to produce a more desirable wake quality in this region.
Keywords: vortex shedding, wake interactions, wind tunnel model, wind turbine
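A minimal sketch of the similarity numbers that drive this kind of model scaling; the chord length and air properties used below are illustrative assumptions, not values from the study:

```python
RHO = 1.225   # air density at ~15 C, kg/m^3
MU = 1.81e-5  # dynamic viscosity of air, Pa*s

def reynolds(velocity_ms: float, chord_m: float) -> float:
    """Chord-based Reynolds number Re = rho*V*c/mu."""
    return RHO * velocity_ms * chord_m / MU

def strouhal(shedding_freq_hz: float, length_m: float, velocity_ms: float) -> float:
    """Strouhal number St = f*L/V for vortex shedding."""
    return shedding_freq_hz * length_m / velocity_ms

# e.g. an assumed 0.15 m chord at the tunnel's 55 m/s upper speed:
print(f"Re = {reynolds(55.0, 0.15):.2e}")  # ~5.6e5, comparable to the quoted range
```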
Procedia PDF Downloads 234
181 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their today's wide use in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature virtually allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could potentially distort the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when the identification of radionuclides, as well as their activity concentrations, is being practiced, where high precision comes as a necessity. In measurements of this nature, in order to be able to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the used equipment. However, experimental determination of the response, i.e., efficiency curves, for a given detector-sample configuration and its geometry is not always easy and requires a certain set of reference calibration sources in order to account for and cover broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation method of two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results of both detectors displayed good agreement with the experimental data, falling under an average statistical uncertainty of ~4.6% for the XtRa and ~1.8% for the BEGe detector within the energy ranges of 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
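A minimal sketch of the model-verification step described above, comparing simulated and measured FEP efficiencies; all efficiency values below are illustrative assumptions, not data from the study:

```python
energies_kev  = [59.4, 121.8, 661.7, 1173.2, 1332.5]
eff_measured  = [0.052, 0.048, 0.0180, 0.0110, 0.0100]  # assumed values
eff_simulated = [0.054, 0.047, 0.0185, 0.0112, 0.0099]  # assumed values

# relative deviation of simulation from measurement, per calibration line
deviations = [abs(s - m) / m for s, m in zip(eff_simulated, eff_measured)]
for e, d in zip(energies_kev, deviations):
    print(f"{e:7.1f} keV: {100 * d:.1f}% deviation")
print(f"average relative deviation: {100 * sum(deviations) / len(deviations):.1f}%")
```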
Procedia PDF Downloads 117
180 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
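A minimal sketch of the path-context representation behind this approach: each function becomes a bag of (start-token, AST-path, end-token) triples hashed into a fixed-size vector for a simple classifier. The triples below are hand-made stand-ins; a real pipeline would extract them from the parsed AST:

```python
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

# each sample: a list of "token|path|token" strings (hypothetical examples)
functions = [
    {"contexts": [("cmd", "Param^Call", "exec"), ("input", "Var^Call", "exec")],
     "vulnerable": 1},
    {"contexts": [("x", "Param^Return", "x"), ("y", "Var^BinOp", "x")],
     "vulnerable": 0},
]

hasher = FeatureHasher(n_features=256, input_type="string")
X = hasher.transform([["|".join(c) for c in f["contexts"]] for f in functions])
y = [f["vulnerable"] for f in functions]

clf = LogisticRegression().fit(X, y)   # stand-in for the Code2Vec-based model
print(clf.predict(X))
```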
Procedia PDF Downloads 105
179 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation
Authors: Mohammad Abu-Shaira, Weishi Shi
Abstract:
Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension of online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM (Long Short-Term Memory Stream Cruise Control Method), a drift adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, and their performance is assessed across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drifts within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after drifts have caused significant degradation in model performance. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data. Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models to address concept drift.
Keywords: automated drift detection and adaptation, concept drift, hyperparameter optimization, online and adaptive learning, regression
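A minimal sketch of the detect-quantify-recalibrate loop that such a framework automates; the toy model, window size, and threshold factor are illustrative assumptions, not the LSTM-SCCM internals:

```python
from collections import deque
import random

class RunningMeanModel:
    """Toy stand-in for an online regressor with an assumed learn/recalibrate API."""
    def __init__(self):
        self.mean, self.n = 0.0, 0
    def predict(self, x):
        return self.mean
    def learn(self, x, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n   # incremental update
    def recalibrate(self):
        self.mean, self.n = 0.0, 0              # forget the stale concept

def run(stream, model, window=50, k=3.0):
    errors, baseline = deque(maxlen=window), None
    for x, y in stream:
        errors.append(abs(model.predict(x) - y))
        model.learn(x, y)
        if len(errors) == window:
            mean_err = sum(errors) / window
            if baseline is None:
                baseline = mean_err
            elif mean_err > k * baseline:        # drift magnitude check
                print(f"drift detected at sample {x}; recalibrating")
                model.recalibrate()              # long-term recalibration
                errors.clear()
                baseline = None

random.seed(0)
stream = [(i, random.gauss(0.0, 0.1)) for i in range(300)]        # concept A
stream += [(i, random.gauss(5.0, 0.1)) for i in range(300, 600)]  # abrupt drift
run(stream, RunningMeanModel())
```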
Procedia PDF Downloads 10
178 Body, Experience, Sense, and Place: Past and Present Sensory Mappings of Istiklal Street in Istanbul
Authors: Asiye Nisa Kartal
Abstract:
An attempt to recognize the undiscovered bonds of Istiklal Street in Istanbul between its sensory experiences (intangible qualities) and its physical setting (tangible qualities) can be taken as the first inspiration point for this study. The dramatic physical changes of Istiklal Street and their current impacts on its sensory attributes have directed this study to consider the role of the changing physical layout on sensory dimensions, which have a subtle but important role in the examination of urban places. Public places have always been subject to transformation, and in recent years the changing socio-cultural structure, economic and political movements, law and city regulations, and innovative transportation and communication activities have resulted in a controversial modification of Istanbul. As the culture, entertainment, tourism, and shopping focus of Istanbul, Istiklal Street has witnessed several stages of change in recent years. In this process, because of the projects being implemented, many buildings such as cinemas, theatres, and bookstores, which have been significant elements for the qualitative value of this area, have been restored, moved, converted, closed or demolished. The multi-layered socio-cultural and architectural structure of Istiklal Street has thus been changing in a dramatic and controversial way. Importantly, while the physical setting of Istiklal Street has changed, the transformation has not only been spatial, socio-cultural, and economic; unavoidably, the sensory dimensions of Istiklal Street, which are of great importance for the intangible qualities of this area, have begun to lose their distinctive features. This has created the challenge of this research. As its main hypothesis, this study claims that the physical transformations have led to changes in the sensory characteristics of Istiklal Street; therefore, the sensescape of Istiklal Street deserves to be recorded, decoded and promoted as expeditiously as possible to observe the sensory reflections of physical transformations in this area. With the help of the method of 'sensewalking', an efficient research tool for generating knowledge on the sensory dimensions of an urban settlement, this study suggests a way of 'mapping' to understand how changes of the physical setting play a role in the sensory qualities of Istiklal Street, which have been changed or lost over time. Basically, this research focuses on the sensory mapping of Istiklal Street from the 1990s until today to picture, interpret and criticize the 'sensory mapping of Istiklal Street in the present' and the 'sensory mapping of Istiklal Street in the past'. Through the sensory mapping of Istiklal Street, this study intends to increase awareness of the distinctive sensory qualities of places. It is worthwhile for further studies that consider the sensory dimensions of places, especially in the field of architecture.
Keywords: Istiklal Street, sense, sensewalking, sensory mapping
Procedia PDF Downloads 177
177 Building Exoskeletons for Seismic Retrofitting
Authors: Giuliana Scuderi, Patrick Teuffel
Abstract:
The proven vulnerability of the existing social housing building heritage to natural or induced earthquakes requires the development of new design concepts and conceptual methods to preserve materials and objects while providing new performances. An integrated intervention between civil engineering, building physics and architecture can convert social housing districts from a critical part of the city into a strategic resource for revitalization. Referring to biomimicry principles, the present research proposes an analogy with the exoskeleton of the insect: an external, light and resistant armour whose role is to protect the internal organs from potentially dangerous external inputs. In the same way, a "building exoskeleton", acting from the outside of the building as an enclosing cage, can restore, protect and support the existing building, assuming a complex set of roles, from the structural to the thermal, from the aesthetic to the functional. This study evaluates the structural efficiency of shape memory alloy devices (SMADs) connecting the "building exoskeleton" with the existing structure to be rehabilitated, in order to prevent the out-of-plane collapse of walls and to passively dissipate the seismic energy, with a calibrated operability in relation to the intensity of the horizontal loads. Two case studies are considered, a masonry structure and a masonry structure with a concrete frame, and for each case a theoretical social housing building is exposed to earthquake forces to evaluate its structural response with or without SMADs. The two typologies are modelled in the finite element program SAP2000, respectively through a "frame model" and a "diagonal strut model". In the same software, two types of SMADs, called the 00-10 SMAD and the 05-10 SMAD, are defined, and non-linear static and dynamic analyses, namely pushover analysis and time history analysis, are performed to evaluate the seismic response of the building. The effectiveness of the devices in limiting the control joint displacements proved higher in one direction, leading to the consideration of a possible calibrated use of the devices in the different walls of the building. The results also show a higher efficiency of the 00-10 SMADs in controlling the interstory drift, but at the same time the necessity to improve the hysteretic behaviour to maximise the passive dissipation of the seismic energy.
Keywords: adaptive structure, biomimetic design, building exoskeleton, social housing, structural envelope, structural retrofitting
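A minimal single-degree-of-freedom sketch of the kind of time-history comparison described above, with the device idealised as added stiffness plus a friction-like dissipative force as a crude stand-in for SMAD behaviour; all numerical values are illustrative assumptions:

```python
import math

def peak_displacement(m=1000.0, k=4.0e5, zeta=0.05, k_dev=0.0, f_dev=0.0,
                      dt=0.001, t_end=10.0):
    """Peak response of m*x'' + c*x' + k*x + f_device = -m*a_g(t)."""
    c = 2 * zeta * math.sqrt(k * m)       # viscous damping from damping ratio
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        ag = 2.0 * math.sin(2 * math.pi * 2.0 * t)  # harmonic ground motion
        f_device = k_dev * x + (f_dev if v > 0 else -f_dev)
        a = (-m * ag - c * v - k * x - f_device) / m
        v += a * dt                        # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

print(f"bare frame : {1000 * peak_displacement():.2f} mm")
print(f"with device: {1000 * peak_displacement(k_dev=1.0e5, f_dev=500.0):.2f} mm")
```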
Procedia PDF Downloads 419
176 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure
Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano
Abstract:
Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructure requires a deep knowledge of the vulnerabilities, threats, and potential attacks that may occur, as well as of defence, prevention and mitigation strategies. The possibility to remotely monitor and control almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk especially to those devices connected to the Internet. Modern medical devices used in hospitals are no exception and are more and more often connected to enhance their functionalities and ease their management. Moreover, hospitals are environments with high flows of people, which are difficult to monitor and can rather easily gain access to the same places used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed, and treated together as cyber-physical threats. This means that an integrated approach is required. SAFECARE, an integrated cyber-physical security solution, responds to these issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres, to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine. Indeed, the SAFECARE architecture foresees: i) a macroblock related to the cyber security field, where innovative tools are deployed to monitor network traffic, systems and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems and innovative AI algorithms to detect behaviour anomalies; iii) an integration system that collects all incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability.
Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security
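A minimal sketch of rule-based impact propagation over an asset dependency graph, in the spirit of the cascading-effects module described above; the assets, dependencies, and attenuation factor are illustrative assumptions, not the project's ontology:

```python
def propagate(assets, depends_on, incident_asset, severity,
              attenuation=0.6, floor=0.1):
    """Spread an incident's impact along dependency edges, attenuating each hop."""
    impact = {a: 0.0 for a in assets}
    impact[incident_asset] = severity
    frontier = [incident_asset]
    while frontier:
        a = frontier.pop()
        for dependent, source in depends_on:
            if source == a:
                cascaded = impact[a] * attenuation
                if cascaded > max(impact[dependent], floor):
                    impact[dependent] = cascaded
                    frontier.append(dependent)
    return impact

assets = ["network", "medical_device", "patient_monitoring", "building_access"]
deps = [("medical_device", "network"), ("patient_monitoring", "medical_device")]
print(propagate(assets, deps, "network", severity=1.0))
# -> medical_device impacted at 0.6, patient_monitoring at 0.36
```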
Procedia PDF Downloads 163
175 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to their low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a 20% validation split. The models' accuracy, precision, recall, f1-score, IOU, and loss are calculated, with an external dataset used for validation. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
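A minimal sketch of the transfer-learning classifier described above, assuming a frozen DenseNet201 backbone and a small dense head; the input size and head layout are assumptions, not the paper's exact architecture:

```python
import tensorflow as tf

# pre-trained backbone, frozen for transfer learning
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```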
Procedia PDF Downloads 72
174 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers
Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha
Abstract:
Elastomers are polymeric materials with varied backbone architectures, ranging from linear to dendrimeric structures, and a wide variety of monomeric repeat units. These elastomers are strongly viscous and weakly elastic when not cross-linked. When cross-linked, depending on the extent of cross-linking, their properties can range from highly flexible to highly stiff. Lightly cross-linked systems are well studied and reported. Understanding the nature of highly cross-linked rubber based upon chemical structure and architecture is critical for a variety of applications. One of the critical parameters is cross-link density. In the current work, we have studied the highly cross-linked state of linear, lightly branched and star-shaped branched elastomers and determined the cross-link density using different models. Changes in hardness, shift in Tg, changes in modulus and swelling behavior were measured experimentally as a function of the extent of curing. These properties were analyzed using various models to determine the cross-link density. We used hardness measurements to examine cure time, and the relationship of hardness to the extent of curing was determined. It is well known that micromechanical transitions like Tg and storage modulus are related to the extent of crosslinking. The Tg of the elastomer in different cross-linked states was determined by DMA, and based on the plateau modulus the cross-link density was estimated using Nielsen's model. Usually, for lightly cross-linked systems, the cross-link density is estimated from the equilibrium swelling ratio in a solvent using the Flory-Rehner model. For highly cross-linked systems, the Flory-Rehner model is not valid because of the smaller chain lengths. Therefore, models based on the assumption of the polymer as a non-Gaussian chain, such as 1) the Helmis-Heinrich-Straube (HHS) model, 2) the model of Gloria M. Gusler and Yoram Cohen, and 3) the model of Barbara D. Barr-Howell and Nikolaos A. Peppas, were used for estimating the cross-link density. In this work, correction factors to the existing models are determined, and based upon them the structure-property relationship of highly cross-linked elastomers was studied.
Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer
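For reference, a minimal implementation of the Flory-Rehner estimate that, as noted above, breaks down at high crosslink densities; the solvent and interaction parameter below are illustrative assumptions:

```python
import math

def flory_rehner_crosslink_density(v2: float, chi: float, V1_cm3_mol: float) -> float:
    """Effective crosslink density (mol per cm^3 of polymer).

    v2  : polymer volume fraction at swelling equilibrium
    chi : Flory-Huggins polymer-solvent interaction parameter
    V1  : molar volume of the solvent (cm^3/mol)
    """
    numerator = -(math.log(1.0 - v2) + v2 + chi * v2 ** 2)
    denominator = V1_cm3_mol * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return numerator / denominator

# e.g. toluene (V1 ~ 106.3 cm^3/mol) with an assumed chi of 0.39:
print(f"nu_e = {flory_rehner_crosslink_density(0.25, 0.39, 106.3):.3e} mol/cm^3")
```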
Procedia PDF Downloads 165
173 Exploration of Building Information Modelling Software to Develop Modular Coordination Design Tool for Architects
Authors: Muhammad Khairi bin Sulaiman
Abstract:
The utilization of Building Information Modelling (BIM) in the construction industry has provided an opportunity for designers in the Architecture, Engineering and Construction (AEC) industry to move from the conventional method of manual drafting to a way of working that creates alternative designs quickly and produces more accurate, reliable and consistent outputs. By using BIM software, designers can create digital content that manipulates data through the parametric model of BIM. With BIM software, more alternative designs can be created quickly, and design problems can be explored further to produce a better design faster than with conventional design methods. Generally, BIM is used as a documentation mechanism, and its capabilities as a design tool have not been fully explored and utilised. Relative to the current issue, Modular Coordination (MC) design is encouraged as a sustainable design practice, since MC design reduces material wastage through standard dimensioning, pre-fabrication, and repetitive, modular construction and components. However, MC design involves a complex process of rules and dimensions. Therefore, a tool is needed to make this process easier. Since the parameters in BIM can easily be manipulated to follow MC rules and dimensioning, the integration of BIM software with MC design is proposed for architects during the design stage. With this tool, there will be an improvement in the acceptance and practice of MC design. Consequently, this study will analyse and explore the function and customization of BIM objects and the capability of BIM software to expedite the application of MC design during the design stage for architects. With this application, architects will be able to create building models and locate objects within reference modular grids that adhere to MC rules and dimensions. The parametric modeling capabilities of BIM will also act as a visual tool that further enhances the automation of the 3-dimensional space planning modeling process. (Method) The study will first analyze and explore the parametric modeling capabilities of rule-based BIM objects, which eventually customize a reference grid within the rules and dimensioning of MC. Eventually, the approach will further enhance the architect's overall design process and enable architects to automate complex modeling, which was nearly impossible before. A prototype using a residential quarter will be modeled. A set of reference grids guided by specific MC rules and dimensions will be used to develop a variety of space planning configurations. The tool will thus expedite the design process and encourage the use of MC design in the construction industry.
Keywords: building information modeling, modular coordination, space planning, customization, BIM application, MC space planning
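A minimal sketch of the kind of rule a modular coordination tool could enforce: snapping dimensions to multiples of the basic module. The basic module M = 100 mm and the 3M planning multi-module are common MC conventions assumed here, not values taken from the paper:

```python
M = 100  # basic module, mm (assumed MC convention)

def snap_to_module(dimension_mm: float, multi_module: int = 3) -> int:
    """Round a design dimension to the nearest multiple of n*M."""
    step = multi_module * M
    return round(dimension_mm / step) * step

for d in (2840, 3125, 4410):
    print(f"{d} mm -> {snap_to_module(d)} mm")   # 2700, 3000, 4500
```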
Procedia PDF Downloads 83
172 Setting up a Prototype for the Artificial Interactive Reality Unified System to Transform Psychosocial Intervention in Occupational Therapy
Authors: Tsang K. L. V., Lewis L. A., Griffith S., Tucker P.
Abstract:
Background: Many children with high-incidence disabilities, such as autism spectrum disorder (ASD), struggle to participate in the community in a socially acceptable manner. Clinical settings are limited in their ability to provide natural, real-life scenarios in which these children can practice the life skills needed to meet their real-life challenges. Virtual reality (VR) offers potential solutions to the existing limitations clinicians face in creating simulated natural environments for their clients to generalize the facilitated skills. Research design: The research aimed to develop a prototype of an interactive VR system to provide realistic and immersive environments for clients to practice skills. A descriptive qualitative methodology is employed to design and develop the Artificial Interactive Reality Unified System (AIRUS) prototype, which provided insights into how to use advanced VR technology to create simulated real-life social scenarios and enable users to interact with the objects and people inside the virtual environment using natural eye gazes and hand and body movements. The eye-tracking (e.g., selective or joint attention), hand- or body-tracking (e.g., repetitive stimming or fidgeting), and facial-tracking (e.g., emotion recognition) functions allowed behavioral data to be captured and managed in the AIRUS architecture. Impact of project: Instead of using external controllers or sensors, hand-tracking software enabled the users to interact naturally with the simulated environment using everyday behavior such as handshaking and waving to control and interact with the virtual objects and people. The AIRUS prototype offers opportunities for breakthroughs in future VR-based psychosocial assessment and intervention in occupational therapy. Implications for future projects: AI technology can allow more efficient data capturing and interpretation of object identification and human facial emotion recognition at any given moment. The data points captured can be used to pinpoint the users' focus and where their interests lie. AI can further help advance the data interpretation system.
Keywords: occupational therapy, psychosocial assessment and intervention, simulated interactive environment, virtual reality
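A minimal sketch of how the behavioural streams named above (gaze, hand/body, face) could be captured as timestamped events for later analysis; the schema is an illustrative assumption, not the actual AIRUS data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BehaviouralEvent:
    timestamp_s: float
    channel: str      # "gaze" | "hand" | "face"
    target: str       # virtual object or person looked at / touched
    payload: dict = field(default_factory=dict)

@dataclass
class Session:
    user_id: str
    events: List[BehaviouralEvent] = field(default_factory=list)

    def joint_attention_ratio(self, target: str) -> float:
        """Fraction of gaze events directed at a given target."""
        gaze = [e for e in self.events if e.channel == "gaze"]
        return sum(e.target == target for e in gaze) / len(gaze) if gaze else 0.0

s = Session("child_01")
s.events.append(BehaviouralEvent(0.5, "gaze", "virtual_peer"))
s.events.append(BehaviouralEvent(1.2, "gaze", "window"))
print(s.joint_attention_ratio("virtual_peer"))  # 0.5
```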
Procedia PDF Downloads 34
171 Historiography of European Urbanism in the 20th Century in Slavic Languages
Authors: Aliaksandr Shuba, Max Welch Guerra, Martin Pekar
Abstract:
The research is dedicated to the historiography of European urbanism in the 20th century, with a critical analysis of transnationally oriented sources in Slavic languages. The goal of this research was to give an overview of Slavic sources on this subject. The research analyses historians from Eastern, Central and South-eastern Europe who wrote influential historiographies of architecture and urbanism of the 20th century in Slavic languages. The analysed historiographies in Slavic languages include diverse sources from around Europe, whose authors examined European urbanism in the 20th century through a global prism or through their own perspectives. The main publications are from the second half of the 20th century and the early 21st century, with Soviet and post-Soviet discourses. The necessity to analyse Slavic sources results from the establishment of the historiography of urbanism as a discipline in the 20th century and from the work of academics in the USSR, Czechoslovakia, and Yugoslavia, who in the early 1970s created strong historiographic bases for the development of their urban historiographic schools for wide-ranging studies and analyses of architectural and urban ideas and projects and their history. These are analysed in this research within Slavic publications, which often offer perspectives and discourses different from Anglo-Saxon ones, and these bibliographic sources can bring a diversity of new ideas into the contemporary academic discourse of European urban historiography. The publications in Slavic languages are analysed according to the following aspects: where, when, in which types, by whom, and for whom the sources were written. The critical analysis of essential sources on the historiography of European urbanism in the 20th century is accomplished through their comparison and interpretation. The authors' autonomy is analysed as a central point, along with the influence of the Communist Party and state control on the interpretation of the history of urbanism in Central, Eastern and South-eastern Europe, with the main dominant topics and ideas from the second half of the 20th century. Cross-national Slavic historiographic sources and their perspectives are compared to the main transnational Anglo-Saxon historiographic topics. Some of the dominant subjects, topics, and subtopics are hypothetically similar, while others have more locally or nationally oriented directions because of the authors' autonomy and the influence of the Communist Party and state control in Slavic socialist countries, as illustrated in this research.
Keywords: European urbanism, historiography, different perspectives, 20th century
Procedia PDF Downloads 172
170 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements
Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker
Abstract:
Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts. Thus, complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex-shaped workpieces. Due to variations between the nominal model and the actual geometry, this can lead to changes of operations in computer-aided process planning (CAPP) to make CAPP manageable for an adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by providing objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, this kind of decision depends on the experience-based knowledge of humans (e.g. process planners) and results in subjective decisions, leading to variability of workpiece quality and potential failures in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining, and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known design method for standardized processes. Especially in applications like the aerospace industry, standardization and certification of processes are an important aspect. Function blocks, providing a standardized, event-driven abstraction of algorithms and data exchange, will be used for the modeling and execution of inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, will be carried out by function blocks. One advantage of this approach is its flexibility in designing workflows and adapting algorithms specific to the application domain. In general, it will be checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g. design data. Furthermore, appropriate logics and decision criteria have to be considered for the different product lifecycle phases. For example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important. They need to support the exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process. In both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
Keywords: adaptive, CAx, function blocks, turbomachinery
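A minimal sketch of the function-block idea: inspection steps chained into a workflow that passes a shared context along, with the registration step, tolerance, and pass/fail rule as illustrative assumptions:

```python
from typing import Callable, Dict, List

FunctionBlock = Callable[[Dict], Dict]

def register_measurement(ctx: Dict) -> Dict:
    # crude alignment: remove the mean offset between measured and nominal
    # points (stands in for a full best-fit registration)
    offset = (sum(ctx["measured"]) / len(ctx["measured"])
              - sum(ctx["nominal"]) / len(ctx["nominal"]))
    ctx["aligned"] = [m - offset for m in ctx["measured"]]
    return ctx

def check_deviation(ctx: Dict) -> Dict:
    devs = [abs(a - n) for a, n in zip(ctx["aligned"], ctx["nominal"])]
    ctx["max_deviation"] = max(devs)
    ctx["adaptable"] = ctx["max_deviation"] <= ctx["tolerance"]
    return ctx

def run_workflow(blocks: List[FunctionBlock], ctx: Dict) -> Dict:
    for block in blocks:      # event-driven in spirit; sequential here
        ctx = block(ctx)
    return ctx

result = run_workflow(
    [register_measurement, check_deviation],
    {"nominal": [0.0, 1.0, 2.0], "measured": [0.15, 1.12, 2.18], "tolerance": 0.1})
print(round(result["max_deviation"], 3), result["adaptable"])
```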
Procedia PDF Downloads 296
169 Emissions and Total Cost of Ownership Assessment of Hybrid Propulsion Concepts for Bus Transport with Compressed Natural Gases or Diesel Engine
Authors: Volker Landersheim, Daria Manushyna, Thinh Pham, Dai-Duong Tran, Thomas Geury, Omar Hegazy, Steven Wilkins
Abstract:
Air pollution is one of the emerging problems in our society. Targets for the reduction of CO₂ emissions address low-carbon and resource-efficient transport. (Plug-in) hybrid electric propulsion concepts offer the possibility to reduce the total cost of ownership (TCO) and emissions for public transport vehicles (e.g., bus applications). In this context, typically, diesel engines are used to form the hybrid propulsion system of the vehicle. Though the technological development of diesel engines has brought major advances, some challenges, such as the high amount of particle emissions, remain relevant. Gaseous fuels, i.e., compressed natural gas (CNG) or liquefied petroleum gas (LPG), represent an attractive alternative to diesel because of their composition. In the framework of the EU-funded research project 'Optimised Real-world Cost-Competitive Modular Hybrid Architecture' (ORCA), two different hybrid-electric propulsion concepts have been investigated: one using a diesel engine as the internal combustion engine and one using CNG as fuel. The aim of the current study is to analyze the specific benefits of these hybrid propulsion systems for predefined driving scenarios with regard to emissions and total cost of ownership in bus applications. Engine models based on experimental data for diesel and CNG were developed. For the purpose of designing optimal energy management strategies for each propulsion system, map-driven or quasi-static models for the specific engine types are used in the simulation framework. An analogous modelling approach has been chosen to represent emissions. This paper compares the two concepts regarding their CO₂ and NOx emissions. This comparison is performed for relevant bus missions (urban, suburban, with and without zero-emission zone) and with different energy management strategies. In addition to the emissions, the downsizing potential of the combustion engine has been analysed to minimize the powertrain TCO (pTCO) for plug-in hybrid electric buses. The results of the performed analyses show that the hybrid vehicle concept using the CNG engine shows advantages both with respect to emissions and to pTCO. The pTCO is 10% lower, CO₂ emissions are 13% lower, and NOx emissions are more than 50% lower than with the diesel combustion engine. These results are consistent across all usage profiles under investigation.
Keywords: bus transport, emissions, hybrid propulsion, pTCO, CNG
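A minimal sketch of a quasi-static fuel-to-CO2 comparison of the two concepts; the energy demand, efficiencies, and emission factors below are illustrative assumptions, not ORCA project data:

```python
# approximate CO2 intensity of the fuel energy, kg CO2 per kWh of fuel
CO2_PER_KWH_FUEL = {"diesel": 0.265, "cng": 0.202}

def cycle_co2(fuel: str, engine_energy_kwh: float, efficiency: float) -> float:
    """CO2 mass for the fuel energy needed to deliver engine_energy_kwh."""
    return engine_energy_kwh / efficiency * CO2_PER_KWH_FUEL[fuel]

demand = 120.0  # kWh at the crankshaft for an assumed urban bus day-cycle
diesel = cycle_co2("diesel", demand, efficiency=0.40)
cng = cycle_co2("cng", demand, efficiency=0.35)
print(f"diesel: {diesel:.1f} kg CO2, CNG: {cng:.1f} kg CO2 "
      f"({100 * (1 - cng / diesel):.0f}% lower)")   # ~13% with these assumptions
```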
Procedia PDF Downloads 146
168 Tuning the Emission Colour of Phenothiazine by Introduction of Withdrawing Electron Groups
Authors: Andrei Bejan, Luminita Marin, Dalila Belei
Abstract:
Phenothiazine, with its electron-rich nitrogen and sulfur heteroatoms, has a high electron-donating ability, which promotes good conjugation and therefore a low band gap, with consequences for improved charge carrier mobility and a shift of the light emission into the visible domain. Moreover, its non-planar butterfly conformation inhibits molecular aggregation and thus preserves the fluorescence quantum yield quite well in the solid state compared to solution. Therefore, phenothiazine and its derivatives are promising hole transport materials for use in organic electronic and optoelectronic devices such as light emitting diodes, photovoltaic cells, integrated circuit sensors or driving circuits for large-area display devices. The objective of this paper was to obtain a series of new phenothiazine derivatives by the introduction of different electron-withdrawing substituents, such as formyl, carboxyl and cyanoacryl units, in order to create a push-pull system with the potential to improve the electronic and optical properties. A bromine atom was used as an electron-donor moiety to further extend the existing conjugation. The compounds under study were structurally characterized by FTIR and 1H-NMR spectroscopy and single crystal X-ray diffraction. In addition, single crystal X-ray diffraction brought information regarding the supramolecular architecture of the compounds. Photophysical properties were monitored by UV-vis and photoluminescence spectroscopy, while the electrochemical behavior was established by cyclic voltammetry. The absorption maxima of the studied compounds vary over a large range (322-455 nm), reflecting the different degrees of electronic delocalization, depending on the nature of the substituent. In a similar manner, the emission spectra reveal different colours of emitted light, a red shift being evident for the groups with higher electron-withdrawing ability. The emitted light is pure and saturated for the compounds containing strong withdrawing formyl or cyanoacryl units and reaches the highest quantum yield of 71% for the compound containing bromine and cyanoacrylic units. The electrochemical study shows reversible oxidation and reduction processes for all the compounds and a close correlation of the HOMO-LUMO band gap with the nature of the substituent. All these findings suggest the obtained compounds as promising materials for optoelectronic devices.
Keywords: electrochemical properties, phenothiazine derivatives, photoluminescence, quantum yield
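A minimal sketch of the standard estimates behind such an analysis: frontier orbital energies from cyclic voltammetry onsets (vs. the ferrocene/ferrocenium reference, taken at -4.8 eV vs. vacuum) and the optical gap from the absorption wavelength. The onset values below are illustrative assumptions, not measured data from the study:

```python
def homo_ev(e_ox_onset_v: float) -> float:
    """HOMO energy from the oxidation onset vs. Fc/Fc+ (common empirical rule)."""
    return -(e_ox_onset_v + 4.8)

def lumo_ev(e_red_onset_v: float) -> float:
    """LUMO energy from the reduction onset vs. Fc/Fc+."""
    return -(e_red_onset_v + 4.8)

def optical_gap_ev(wavelength_nm: float) -> float:
    """E[eV] ~ 1240 / lambda[nm]."""
    return 1240.0 / wavelength_nm

e_homo, e_lumo = homo_ev(0.85), lumo_ev(-1.65)   # assumed onsets, V
print(f"HOMO {e_homo:.2f} eV, LUMO {e_lumo:.2f} eV, "
      f"CV gap {e_lumo - e_homo:.2f} eV, "
      f"optical gap at 455 nm: {optical_gap_ev(455):.2f} eV")
```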
Procedia PDF Downloads 328
167 Revolutions and Cyclic Patterns in Chinese Town Planning: The Case-Study of Shenzhen
Authors: Domenica Bona
Abstract:
Colin Chant and David Goodman argue that historians of Chinese pre-industrial cities tend to underestimate revolutions and overestimate cyclic patterns: periods of peace and prosperity in the early part of each dynasty, followed by peasants' rebellions and upheavals. Boyd described these cyclic patterns as part of the background of Chinese town planning and architecture. Thus old ideals of city planning - square plan, southward orientation and a palace along the central axis - are revived again and again in the ascendant phases of several dynastic cycles (e.g. Chang'an, Kaifeng, and Beijing). Along this line of thought, my paper questions the relationship between the 'magic square rule' and modern Chinese urban planning. As a matter of fact, the classical theme of 'cosmic Taoist urbanism' is still a reference for planning cities and new urban developments whenever there is the intention to express nationalist ideals and 'cultural straightforwardness.' Besides, some case studies can be related to 'modern dynasties': the first Republic under the Kuo Min Tang, the red People's Republic and the post-Maoist open country of Deng Xiao Ping. Considering the project for the new capital of Nanjing in the Thirties, Beijing's Tianan Men area in the Fifties, and Shenzhen's Futian CBD in the late 20th century, I argue that cyclic patterns are still in place, though with deformations related to westernization, private interests and a lack of spirituality. How far are new Chinese cities westernized - or do they simply seem to be? Symbolism, invisible frameworks, repeating features and behavioural patterns make urban China just 'superficially' western. This can be well noticed in cities previously occupied by foreigners, like Hong Kong, or in newly founded ones, like Shenzhen, where both Asian and non-Asian people can feel the shift from New-York-like landscapes to something else. Current planning in the main metropolitan areas shows a blurred relationship between public policies and private investments: two levels of decisions and actions, one addressing the larger scale and infrastructures, the other concerning the micro scale and the development of single plots. While zoning is instrumental in this process, master plans are often laid out over a very poor cartography, so much so that any relation between the formal characters of new cities and the centuries-old structure of the related territory gets lost.
Keywords: China, contemporary cities, cultural heritage, Shenzhen, urban planning
Procedia PDF Downloads 360
166 Microglia Activation in Animal Model of Schizophrenia
Authors: Esshili Awatef, Manitz Marie-Pierre, Eßlinger Manuela, Gerhardt Alexandra, Plümper Jennifer, Wachholz Simone, Friebe Astrid, Juckel Georg
Abstract:
Maternal immune activation (MIA) resulting from maternal viral infection during pregnancy is a known risk factor for schizophrenia. The neural mechanisms by which maternal infections increase the risk for schizophrenia remain unknown, although the prevailing hypothesis argues that an activation of the maternal immune system induces changes in the maternal-fetal environment that might interact with fetal brain development. This may lead to an activation of fetal microglia, inducing long-lasting functional changes in these cells. Based on post-mortem analyses showing an increased number of activated microglial cells in patients with schizophrenia, it can be hypothesized that these cells contribute to disease pathogenesis and may be actively involved in the gray matter loss observed in such patients. In the present study, we hypothesize that prenatal treatment with the inflammatory agent Poly(I:C) during embryogenesis contributes to microglial activation in the offspring, which may therefore represent a contributing factor to the pathogenesis of schizophrenia and underlines the need for new pharmacological treatment options. Pregnant rats were treated with a single intraperitoneal injection of Poly(I:C) or saline on gestation day 17. Brains of control and Poly(I:C) offspring were removed and cut into 20-μm-thick coronal sections using a cryostat. Brain slices were fixed and immunostained with an Iba1 antibody. Subsequently, Iba1 immunoreactivity was detected using a secondary goat anti-rabbit antibody. The sections were viewed and photographed under a microscope. The immunohistochemical analysis revealed increases in microglia cell number in the prefrontal cortex in the offspring of Poly(I:C)-treated rats as compared to the controls injected with NaCl. However, no significant differences in microglia activation were observed in the cerebellum among the groups. The prenatal immune challenge with Poly(I:C) was able to induce long-lasting changes in the offspring brains. This led to a higher activation of microglia cells in the prefrontal cortex, a brain region critical for many higher brain functions, including working memory and cognitive flexibility, which might be implicated in possible changes in cortical neuropil architecture in schizophrenia. Further studies will be needed to clarify the association between microglial cell activation and schizophrenia-related behavioral alterations.
Keywords: microglia, neuroinflammation, Poly(I:C), schizophrenia
Procedia PDF Downloads 415
165 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images
Authors: Eiman Kattan, Hong Wei
Abstract:
In using a Convolutional Neural Network (CNN) for classification, there is a set of hyperparameters available for configuration purposes. This study aims to evaluate the impact of a range of parameters of a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. A set of experiments was conducted to assess the effectiveness of the selected parameters using two implementation approaches, named pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of the convolutional filters and the image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly dependent on the dataset. For the batch size evaluation, it was shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting the value 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. On the other extreme, increasing the batch size to 200 gives an accuracy rate of 86.5% at the 11th epoch, and 63% when using one epoch only. On the other hand, the choice of kernel size is loosely related to the dataset; from a practical point of view, filter size 20 produces an accuracy of 70.4286%. The last experiment, on image size, shows that the accuracy improves with image size; however, the performance gain is computationally expensive. The presented conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
Keywords: CNNs, hyperparameters, remote sensing, land cover, land use
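A minimal sketch of the evaluation grid implied by the parameter lists above; the enumeration only schedules configurations and does not stand in for the actual DIGITS/AlexNet training runs:

```python
import itertools

batch_sizes = [32, 64, 128, 200]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
image_sizes = [64, 96, 128, 180, 224]
epoch_values = [1, 11]
datasets = ["AID", "RSD", "UCMerced", "RSCCN"]

configs = [
    {"dataset": d, "batch": b, "kernel": k, "image": s, "epochs": e}
    for d, b, k, s, e in itertools.product(
        datasets, batch_sizes, kernel_sizes, image_sizes, epoch_values)
]
print(len(configs), "training runs to schedule")  # 4*4*9*5*2 = 1440
```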
Procedia PDF Downloads 165
164 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data
Authors: M. Kharrat, G. Moreau, Z. Aboura
Abstract:
The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with a 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loading. The AE technique is a well-established tool for discriminating between the damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by a data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation set-up was used in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, digital image correlation, acoustic emission (AE) and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for discriminating between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is useful for constructing a supervised classifier that can be used for automatic recognition of the AE signals. Several materials with different ingredients were tested under various loadings in order to feed and enrich the learning database. The methodology presented in this work was useful for refining the damage threshold of the new generation of materials. The damage mechanisms around this threshold were highlighted. The obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of loading, the identified classes are reproducible and little disturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition
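A minimal sketch of the two-stage pipeline: unsupervised clustering of AE features without a priori labels, then a supervised classifier trained on the labelled clusters. The synthetic features and cluster count are illustrative assumptions (real AE features include amplitude, energy, counts, duration, peak frequency, etc.):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# synthetic (amplitude dB, peak frequency kHz) features for three sources
matrix_cracking = rng.normal([45, 150], [3, 10], size=(100, 2))
debonding = rng.normal([60, 250], [3, 10], size=(100, 2))
noise = rng.normal([35, 80], [3, 10], size=(100, 2))
X = StandardScaler().fit_transform(np.vstack([matrix_cracking, debonding, noise]))

# stage 1: unsupervised clustering without a priori knowledge
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# stage 2: after multi-instrumentation assigns a mechanism to each cluster,
# the labelled data trains a supervised classifier for automatic recognition
clf = RandomForestClassifier(random_state=0).fit(X, clusters)
print("training accuracy:", clf.score(X, clusters))
```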
Procedia PDF Downloads 154
163 Numerical Analysis of Mandible Fracture Stabilization System
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented work is to assess the impact of the mini-plate application approach on the stress and displacement within the stabilization devices and the surrounding bones. The mini-plate osteosynthesis technique is widely used by craniofacial surgeons as an improved replacement for the wire connection approach. Many different types of metal plates and screws are used for the physical connection of fractured bones. The investigation below is based on clinical observation of a patient hospitalized with a mini-plate stabilization system. The analysis was conducted on a solid mandible geometry, modeled on the basis of a computed tomography scan of the hospitalized patient. In order to achieve the most realistic behavior of the connected system, cortical and cancellous bone layers were assumed. The temporomandibular joint was simplified to an elastic element to allow physiological movement of the loaded bone. The muscles of the mastication system were reduced to three pairs, modeled as shell structures. The finite element grid was created with the ANSYS software, using hexahedral and tetrahedral variants of the SOLID185 element. A set of nonlinear contact conditions was applied on the common surfaces of the connecting devices and the bone. The properties of each contact pair depend on the screw/mini-plate connection type and the possible gaps between the fractured bone around the osteosynthesis region. Some of the investigated cases include prestress introduced to the mini-plate during application, which corresponds to the initial bending of the connecting device to fit the retromolar fossa region. The assumed bone fracture occurs within the mandible angle zone. Due to the significant deformation of the connecting plate in some of the assembly cases, an elastic-plastic model of the titanium alloy was assumed. The bone tissues were modeled with an orthotropic material. The loading was a gauge force of 100 N applied at three different locations. The conducted analysis shows a significant impact of the mini-plate application methodology on the stress distribution within the mini-plate. The prestress introduces additional loading, which locally exceeds the yield limit of the titanium alloy. Stress in the surrounding bone increases rapidly around the screw application region, exceeding the assumed bone yield limit, which indicates local bone destruction. The doubled mini-plate approach shows increased stress within the connector due to the overly rigid connection, where the main load path leads through the mini-plates instead of through the plates and the connected bones. Clinical observations confirm more frequent plate failure in stiffer connections; some of these failures could be an effect of decreased low-cycle fatigue capability caused by overloading. The executed analysis proves that the mini-plate system provides sufficient support for mandible fracture treatment; however, many applicable solutions shift the entire system toward the allowable material limits. The results show that connector application with initial loading needs to be carefully established due to the small material capability tolerances. Comparison with the clinical observations allows optimizing the entire connection to prevent future incidents.
Keywords: mandible fracture, mini-plate connection, numerical analysis, osteosynthesis
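As a simple illustration of the yield-limit checks reported above, the sketch below compares assumed peak von Mises stresses against assumed yield limits; all numbers are placeholders, not the study's FEM results.

```python
# Post-processing sketch: utilization of each material against its yield limit.
yield_limits_mpa = {"titanium mini-plate": 880.0, "cortical bone": 120.0}  # assumed
peak_stress_mpa = {"titanium mini-plate": 900.0, "cortical bone": 130.0}   # assumed

for material, yield_mpa in yield_limits_mpa.items():
    utilization = peak_stress_mpa[material] / yield_mpa
    status = "exceeds yield" if utilization >= 1.0 else "within limits"
    print(f"{material}: utilization {utilization:.2f} ({status})")
```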
Procedia PDF Downloads 271
162 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion
Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty
Abstract:
Reinforcing dental composites with micro-sized fibres can significantly improve the physio-mechanical properties of dental composites. Short fibres can be oriented randomly within dental composites, thus providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre reinforcement, which enhances properties anisotropically. Thus, short-fibre-reinforced dental composites are gaining popularity among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photopolymerisation. Shrinkage in the structure may lead to marginal gap formation, causing secondary caries, thus ultimately inducing failure of the restoration. Traditional methods to evaluate polymerisation shrinkage, using strain gauges, density-based measurements, dilatometry, or the bonded-disk method, focus on the average value of volumetric shrinkage. Moreover, the results obtained from traditional methods are sensitive to the specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can easily be embedded into small samples of dental composites. Furthermore, an FBG array embedded in the system can map the real-time shrinkage strain in different regions of the composite. Real-time monitoring of shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have been shown to measure reliably the polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites. However, very limited work exists to establish the validity of FBG-based sensors for evaluating the volumetric shrinkage of composites reinforced with randomly oriented fibres. The present study aims to fill this research gap and is focused on establishing the usage of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% of silane-treated BaAlSiO₂ particulate fillers, or by adding 5% of micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% of silane-treated BaAlSiO₂ particulate fillers into the resin. For the measurement of polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained are compared with the traditional density-based method for evaluating volumetric shrinkage. The degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres correlate directly with the measured degree of conversion values, implying that a greater conversion of C=C double bonds to C-C single bonds also leads to a higher shrinkage strain within the composite. Moreover, it could be established that the photonics approach helps assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means of measuring real-time polymerisation shrinkage strain for randomly fibre-reinforced dental composites as well.
Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors
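A minimal sketch of how an FBG wavelength shift maps to shrinkage strain, assuming the standard silica-fibre relation Δλ/λ = (1 − pₑ)·ε with the thermal term neglected; the nominal Bragg wavelength and photo-elastic coefficient below are typical values, not the study's calibration.

```python
# Convert a Bragg wavelength shift into axial (shrinkage) strain.
P_E = 0.22            # effective photo-elastic coefficient of silica (typical)
LAMBDA_0_NM = 1550.0  # nominal Bragg wavelength of the grating (assumed)

def wavelength_shift_to_strain(d_lambda_nm: float) -> float:
    """Return axial strain (negative = shrinkage) from a Bragg shift in nm."""
    return d_lambda_nm / (LAMBDA_0_NM * (1.0 - P_E))

# Example: a -6 pm shift recorded during photopolymerisation.
strain = wavelength_shift_to_strain(-0.006)
print(f"shrinkage strain: {strain:.2e}  ({strain * 1e6:.1f} microstrain)")
```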
Procedia PDF Downloads 152
161 A Microwave and Millimeter-Wave Transmit/Receive Switch Subsystem for Communication Systems
Authors: Donghyun Lee, Cam Nguyen
Abstract:
Multi-band systems offer a great deal of benefit in modern communication and radar systems. In particular, multi-band antenna-array radar systems, with their extended frequency diversity, provide numerous advantages in detecting, identifying, locating and tracking a wide range of targets, including enhanced detection coverage, accurate target location, reduced survey time and cost, increased resolution, improved reliability and richer target information. Accurate calibration is a critical issue in antenna array systems. The amplitude and phase errors in multi-band and multi-polarization antenna array transceivers result in inaccurate target detection, deteriorated resolution and reduced reliability. Furthermore, a digital beamformer without RF-domain phase shifting is less immune to unfiltered interference signals, which can lead to receiver saturation in array systems. Therefore, implementing an integrated front-end architecture that can support a calibration function with low insertion loss and a filtering function at the farthest end of an array transceiver is of great interest. We report a dual K/Ka-band T/R/Calibration switch module with a quasi-elliptic dual-bandpass filtering function implementing a Q-enhanced metamaterial transmission line. A unique dual-band frequency response is incorporated in the reception and calibration paths of the proposed switch module, utilizing a composite right/left-handed metamaterial transmission line coupled with a Colpitts-style negative-resistance generation circuit. The fabricated, fully integrated T/R/Calibration switch module in 0.18-μm BiCMOS technology exhibits an insertion loss of 4.9-12.3 dB and an isolation of more than 45 dB in the reception, transmission and calibration modes of operation. In the reception and calibration modes, the dual-band frequency response centered at 24.5 and 35 GHz exhibits an out-of-band rejection of more than 30 dB compared to the pass bands below 10.5 GHz and above 59.5 GHz. The rejection between the pass bands reaches more than 50 dB. In all modes of operation, the IP1dB is between 4 and 11 dBm. Acknowledgement: This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Keywords: microwaves, millimeter waves, T/R switch, wireless communications
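To make the reported figures concrete, the sketch below post-processes a synthetic |S21| response shaped like the module's dual-band behaviour (pass bands near 24.5 and 35 GHz) and extracts the insertion loss and inter-band rejection; the data are invented for illustration, not measurements from the module.

```python
# Extract insertion loss and inter-band rejection from a sampled |S21| in dB.
import numpy as np

freq_ghz = np.linspace(10, 50, 801)
# Two Gaussian-shaped pass bands on a -60 dB floor (synthetic stand-in).
s21_db = -60 + 55 * (np.exp(-((freq_ghz - 24.5) / 1.5) ** 2)
                     + np.exp(-((freq_ghz - 35.0) / 1.5) ** 2))

peak_db = s21_db.max()
mid = np.argmin(np.abs(freq_ghz - (24.5 + 35.0) / 2))  # midpoint between bands
print(f"insertion loss at peak: {-peak_db:.1f} dB")
print(f"rejection between pass bands: {peak_db - s21_db[mid]:.1f} dB")
```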
Procedia PDF Downloads 158
160 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research mainly concerns the optimization of UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases in the development process to confirm that there are no regression bugs. The team automated 346 test cases, and the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, and the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient flow when introducing scalability into a traditional test automation environment; in order to introduce scalability efficiently, a scripting language was adopted. The scalability implementation relies mainly on AWS serverless technology, the Elastic Container Service. Scalability here means the ability to automatically set up computers for test automation and to increase or decrease the number of computers running those tests. The scalable mechanism thus lets test cases run in parallel, so the test execution time decreases dramatically. Introducing scalable test automation also offers more than reduced execution time: because test cases can be executed at the same time, challenging bugs such as race conditions may be detected. If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing since they do not reach the front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the optimization of test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of challenging-issue detection.
Keywords: AWS, Elastic Container Service, scalability, serverless, UI automation test
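A minimal sketch of the fan-out idea under stated assumptions: the test suite is split into shards and one Fargate task per shard is launched through the ECS API with boto3. The cluster name, task definition, subnet and the SHARD/TOTAL_SHARDS environment variables are illustrative, not the team's actual configuration.

```python
# Fan out a sharded Selenium/Python test suite as parallel ECS Fargate tasks.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def run_shard(shard_id: int, total_shards: int) -> str:
    """Start one containerised test runner responsible for a single shard."""
    response = ecs.run_task(
        cluster="ui-test-cluster",            # assumed cluster name
        taskDefinition="ui-test-runner",      # assumed task definition
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [{
                "name": "runner",
                "environment": [
                    {"name": "SHARD", "value": str(shard_id)},
                    {"name": "TOTAL_SHARDS", "value": str(total_shards)},
                ],
            }]
        },
    )
    return response["tasks"][0]["taskArn"]

# Example: spread 346 test cases over 20 parallel runners.
task_arns = [run_shard(i, 20) for i in range(20)]
print(f"launched {len(task_arns)} parallel test runners")
```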
Procedia PDF Downloads 105
159 The Influence of Salt Body of J. Ech Cheid on the Maturity History of the Cenomanian-Turonian Source Rock
Authors: Mohamed Malek Khenissi, Mohamed Montassar Ben Slama, Anis Belhaj Mohamed, Moncef Saidi
Abstract:
Northern Tunisia is well known for its different and complex structural and geological zones, the result of a geodynamic history that extends from the early Mesozoic era to the present day. One of these zones is the salt province, where the halokinesis process is manifested by a number of NE/SW salt structures such as Jebel Ech Cheid, which represents masses of material characterized by high plasticity and low density. These salt extrusions developed in response to an extensional regime that lasted from the Late Triassic to the Late Cretaceous. The evolution of salt bodies within sedimentary basins has not only modified the architecture of the basins but also has certain geochemical effects, which mainly concern the source rocks surrounding them. It has been demonstrated that the presence of salt structures within a sedimentary basin can influence its temperature distribution and thermal history. Moreover, they create heat-flux anomalies that may affect the maturity of organic matter and the timing of hydrocarbon generation. Field samples of the Bahloul source rock (Cenomanian-Turonian) were collected at different sites all around the Ech Cheid salt structure and evaluated using Rock-Eval pyrolysis and GC/MS techniques in order to assess the degree of maturity evolution and the heat-flux anomalies in the different analyzed zones. The Total Organic Carbon (TOC) values range between 1 and 9%, and the Tmax ranges between 424 and 445°C; also, the distribution of the source rock biomarkers, both saturated and aromatic, changes in a regular fashion with increasing maturity, as shown in the chromatography results, such as the Ts/(Ts+Tm) ratios, the 22S/(22S+22R) values for C31 homohopanes, and the ββ/(ββ+αα)20R and 20S/(20S+20R) ratios for C29 steranes, which give consistent maturity indications and assessments of the field samples. These analyses were carried out to interpret the maturity evolution and the heat flux around the Ech Cheid salt structure through geological history. They also aim to demonstrate that the salt structure can have a direct effect on the geothermal gradient of the basin and on the maturity of the Bahloul Formation source rock. The organic matter has reached different stages of thermal maturity but delineates a general increasing maturity trend. Our study confirms that the J. Ech Cheid salt body has, on the one hand, a strong influence on the local distribution of anoxic depocentres, at least within Cenomanian-Turonian time, and, on the other hand, that the thermal anomaly near the salt mass has affected the maturity of the Bahloul Formation.
Keywords: Bahloul formation, depocentre, GC/MS, rock-eval
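For concreteness, the sketch below computes the maturity ratios cited above from GC/MS peak areas; the peak-area values are invented placeholders, and only the ratio definitions follow standard organic-geochemistry practice.

```python
# Biomarker maturity ratios from assumed integrated GC/MS peak areas.
peaks = {
    "Ts": 120.0, "Tm": 80.0,
    "C31_22S": 55.0, "C31_22R": 45.0,
    "C29_bb_20R": 30.0, "C29_aa_20R": 25.0,
    "C29_20S": 28.0, "C29_20R": 32.0,
}

ts_ratio = peaks["Ts"] / (peaks["Ts"] + peaks["Tm"])
homohopane = peaks["C31_22S"] / (peaks["C31_22S"] + peaks["C31_22R"])
bb_ratio = peaks["C29_bb_20R"] / (peaks["C29_bb_20R"] + peaks["C29_aa_20R"])
sterane = peaks["C29_20S"] / (peaks["C29_20S"] + peaks["C29_20R"])

print(f"Ts/(Ts+Tm)            = {ts_ratio:.2f}")
print(f"C31 22S/(22S+22R)     = {homohopane:.2f}")
print(f"C29 bb/(bb+aa) 20R    = {bb_ratio:.2f}")
print(f"C29 20S/(20S+20R)     = {sterane:.2f}")
```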
Procedia PDF Downloads 238
158 Dynamic Facades: A Literature Review on Double-Skin Façade with Lightweight Materials
Authors: Victor Mantilla, Romeu Vicente, António Figueiredo, Victor Ferreira, Sandra Sorte
Abstract:
Integrating dynamic facades into contemporary building design is shaping a new era of energy efficiency and user comfort. These innovative facades, often built with lightweight construction systems and materials, offer the opportunity for responsive and adaptive behavior with respect to the dynamics of the outdoor climate. Therefore, in regions characterized by high fluctuations in daily temperatures, the ability to adapt to environmental changes is of paramount importance and a challenge. This paper presents a thorough review of the state of the art on double-skin facades (DSF), focusing on lightweight solutions for the external envelope. Dynamic facades featuring elements such as movable shading devices, phase change materials, and advanced control systems have revolutionized the built environment, offering a promising path to reducing energy consumption while enhancing occupant well-being. Lightweight construction systems are increasingly becoming the choice for these facade solutions, offering benefits such as reduced structural loads and reduced construction waste, improving overall sustainability. However, the performance of dynamic facades based on low-thermal-inertia solutions in climatic contexts with high thermal amplitude still needs research, since their ability to adapt translates into variability/manipulation of the thermal transmittance coefficient (U-value). Emerging technologies can enable such dynamic thermal behavior through innovative materials, changes in geometry, and control strategies that optimize facade performance. These innovations allow a facade system to respond to shifting outdoor temperature, relative humidity, wind, and solar radiation conditions, ensuring that energy efficiency and occupant comfort are both achieved. This review addresses the potential configurations of double-skin facades, particularly concerning their responsiveness to seasonal variations in temperature, with a specific focus on the challenges posed by winter and summer conditions. Notably, the design of a dynamic facade is significantly shaped by several pivotal factors, including the choice of materials, geometric considerations, and the implementation of effective monitoring systems. Within the realm of double-skin facades, various configurations are explored, encompassing exhaust-air, supply-air, and thermal buffering mechanisms. Accordingly, the review places a specific emphasis on the thermal dynamics at play, closely examining the impact of factors such as the color of the facade, the slat angle and dimensions, and the positioning and type of shading devices employed in these innovative architectural structures. This paper synthesizes the current research trends in this field, presenting case studies and technological innovations to give a comprehensive understanding of the cutting-edge solutions propelling the evolution of building envelopes in the face of climate change, namely focusing on double-skin lightweight solutions to create sustainable, adaptable, and responsive building envelopes. As indicated in the review, flexible and lightweight systems have broad applicability across all building sectors, and there is a growing recognition that retrofitting existing buildings may emerge as the predominant approach.
Keywords: adaptive, control systems, dynamic facades, energy efficiency, responsive, thermal comfort, thermal transmittance
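A minimal sketch of the U-value manipulation discussed above, assuming an illustrative lightweight build-up and EN ISO 6946-style surface resistances: switching the cavity between a closed (buffer) state and a ventilated state changes the overall thermal transmittance. All layer thicknesses, conductivities and cavity resistances are assumed values, not data from the reviewed studies.

```python
# Steady-state U-value of a lightweight DSF build-up with a switchable cavity.
R_SI, R_SE = 0.13, 0.04  # internal/external surface resistances, m2K/W

layers = [            # (thickness m, conductivity W/mK) - assumed build-up
    (0.012, 0.25),    # outer cladding board
    (0.100, 0.035),   # insulation
    (0.0125, 0.21),   # inner plasterboard
]

def u_value(cavity_resistance: float) -> float:
    r_total = R_SI + R_SE + cavity_resistance + sum(d / k for d, k in layers)
    return 1.0 / r_total

print(f"closed cavity (R = 0.18):     U = {u_value(0.18):.2f} W/m2K")
print(f"ventilated cavity (R ~ 0.00): U = {u_value(0.0):.2f} W/m2K")
```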
Procedia PDF Downloads 78
157 Analog Railway Signal Object Controller Development
Authors: Ercan Kızılay, Mustafa Demi̇rel, Selçuk Coşkun
Abstract:
Railway signaling systems consist of vital products that regulate railway traffic and provide safe route arrangements and maneuvers for trains. SIL 4 signal lamps are produced by many manufacturers today, and there is a need for systems that enable these signal lamps to be controlled by commands from the interlocking. For the safe operation of railway systems, such systems should behave in a fail-safe manner, from a RAMS perspective, and give error indications to the interlocking system when an unexpected situation occurs. In the past, driving and proving the lamp in relay-based systems was typically done via signaling relays. Today, lamp proving is done by comparing the current values read over the return circuit against lower and upper threshold values. The goal is an analog electronic object controller that integrates easily with vital systems and with the signal lamp itself. During the study, the EN 50126 standard approach was followed: concept, definition, risk analysis, requirements, architecture, design, and prototyping were performed. FMEA (Failure Modes and Effects Analysis) and FTA (Fault Tree Analysis) were used for the safety analysis in accordance with EN 50129. Based on these analyses, a 1oo2D reactive fail-safe hardware design of the controller was researched. Electromagnetic compatibility (EMC) effects on the functional safety of the equipment, insulation coordination, and over-voltage protection were addressed during the hardware design according to the EN 50124 and EN 50122 standards. As vital equipment for railway signaling, railway signal object controllers should be developed according to the EN 50126 and EN 50129 standards, which identify the steps and requirements of the development in accordance with the SIL 4 (Safety Integrity Level) target. In conclusion, an analog railway signal object controller was developed in which commands from the interlocking system are processed by driver cards. The driver cards arrange the voltage level according to the desired visibility by means of semiconductors, while prover cards evaluate the current against the upper and lower thresholds. The evaluated values are processed by logic gates arranged in a 1oo2D configuration using analog electronic technologies. This logic evaluates the voltage level of the lamp and mitigates the risks of undue dimming.
Keywords: object controller, railway electronics, analog electronics, safety, railway signal
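A minimal sketch of the lamp-proving and 1oo2D voting principle described above, with assumed current thresholds. The real controller implements this logic in analog hardware, so the Python below is purely conceptual.

```python
# Lamp proving: two channels check the return current against thresholds,
# and a 1oo2D-style comparison declares the lamp state only when both agree.
I_LOWER_A, I_UPPER_A = 0.15, 0.45  # assumed proving thresholds, amperes

def channel_proves(current_a: float) -> bool:
    """One channel's verdict: current within the healthy-lamp window."""
    return I_LOWER_A <= current_a <= I_UPPER_A

def lamp_state_1oo2d(current_ch1_a: float, current_ch2_a: float) -> str:
    verdicts = (channel_proves(current_ch1_a), channel_proves(current_ch2_a))
    if all(verdicts):
        return "LAMP PROVEN"
    if not any(verdicts):
        return "LAMP FAILED"           # both channels agree on failure
    return "CHANNEL DISAGREEMENT"      # divergence -> restrictive fail-safe state

print(lamp_state_1oo2d(0.30, 0.31))   # healthy lamp
print(lamp_state_1oo2d(0.02, 0.03))   # broken filament / open circuit
print(lamp_state_1oo2d(0.30, 0.02))   # channel fault -> fail-safe reaction
```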
Procedia PDF Downloads 98
156 Sustainable Technology and the Production of Housing
Authors: S. Arias
Abstract:
New housing developments, and the technological changes they imply, shape the living styles of their residents, as well as new family structures and forms of work arising from the particular needs of a specific group of people, which involve different techniques for managing, organizing, equipping and using a particular territory. Currently, owning one's own space is increasingly important, and cities are faced with the challenge of providing for such demands, as well as for the energy, water and waste removal needed in the process of construction and occupation of new human settlements. To date, it has not been possible to respond fully to these demands and needs, resulting in cities that grow without control, badly used land, and congested avenues and streets. Buildings and dwellings have an important impact on the environment and on people's health; environmental quality therefore links human comfort to the sustainable development of natural resources. Applied to architecture, this concept involves incorporating new technologies throughout the construction process of a dwelling and changing the customs of developers and users, which requires a greater effort in planning for energy savings and thus reducing greenhouse gas (GHG) emissions, depending on the geographical location of the development. Since the techniques of occupying territory are not the same everywhere, it must be taken into account that these depend on the geographical, social, political, economic and climatic-environmental circumstances of the place, which are modified according to the degree of development reached. In the analysis to be undertaken to check the degree of sustainability of a place, it is necessary to estimate the energy used in artificial air conditioning and lighting. In the same way, it is necessary to diagnose the availability and distribution of the water resources used for hygiene and for cooling artificially air-conditioned spaces, as well as the waste resulting from these technological processes. Based on the results obtained through the different stages of the analysis, it is possible to perform an energy audit and propose sustainability recommendations for architectural spaces in pursuit of energy savings, rational use of water and the optimization of natural resources. The above can be carried out through the development of a sustainable building code that sets out technical recommendations adapted to the regional characteristics of each study site. Such codes would seek to lay the foundations for building regulations applicable to new human settlements, generating quality, protection and safety within them at the same time. These building regulations must be consistent with other national, state and municipal regulations, such as human settlement laws, urban development plans and zoning regulations.
Keywords: building regulations, housing, sustainability, technology
Procedia PDF Downloads 346
155 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination
Authors: Gilberto Goracci, Fabio Curti
Abstract:
This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice for performing real-time orbit determination without the need to add sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers in position and tens of meters per second in velocity. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models can, in fact, grasp nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimates, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous Orbits (SSO) up to 2126 kilometers of altitude, with different initial conditions and levels of noise, to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft, heavily reducing the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit in real time onboard and work autonomously during GPS outages. In this way, the module shows versatility, as it can be applied to any mission operating in SSO, while at the same time the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results of this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field
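A minimal sketch of the hybrid EKF-plus-RNN idea under stated assumptions: an LSTM ingests the EKF position/velocity estimates together with the magnetometer readings at each step and learns an additive state correction. Dimensions, layer sizes and the synthetic training data are illustrative; the paper's actual architecture and IGRF-based simulation are not reproduced.

```python
# RNN correction of EKF orbit estimates (synthetic stand-in data).
import torch
import torch.nn as nn

class EKFCorrector(nn.Module):
    def __init__(self, state_dim: int = 6, mag_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim + mag_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)  # additive state correction

    def forward(self, ekf_states: torch.Tensor, mag: torch.Tensor) -> torch.Tensor:
        seq = torch.cat([ekf_states, mag], dim=-1)    # (batch, time, 9)
        out, _ = self.lstm(seq)
        return ekf_states + self.head(out)            # corrected trajectory

# Offline pre-training on simulated SSO trajectories (random stand-ins here).
model = EKFCorrector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ekf_est = torch.randn(8, 100, 6)                 # EKF position+velocity estimates
mag_meas = torch.randn(8, 100, 3)                # magnetometer measurements
truth = ekf_est + 0.1 * torch.randn(8, 100, 6)   # "true" states (synthetic)

for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(ekf_est, mag_meas), truth)
    loss.backward()
    opt.step()
print(f"training loss: {loss.item():.4f}")
```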
Procedia PDF Downloads 103
154 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have developed rapidly, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers, which deepen the network, with the U-Net architecture, which accurately captures small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, the implicit FNO (IFNO), and the U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, the probability density functions (PDFs) of vorticity and velocity increments, and the instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed than traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
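A minimal sketch of the implicit (recurrent) Fourier layer idea, shown in 1D for brevity: one spectral convolution is applied repeatedly with shared weights, deepening the operator without adding parameters. The 3D formulation and the U-Net branch of the actual IU-FNO are omitted here.

```python
# Implicit (weight-shared, recurrent) Fourier layer in 1D with PyTorch.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_ft = torch.fft.rfft(x)                        # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(       # mix retained modes
            "bix,iox->box", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))    # back to physical space

class ImplicitFNO1d(nn.Module):
    def __init__(self, channels: int = 32, modes: int = 12, iterations: int = 4):
        super().__init__()
        self.spectral = SpectralConv1d(channels, modes)  # shared weights
        self.local = nn.Conv1d(channels, channels, 1)
        self.iterations = iterations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.iterations):                 # recurrent application
            x = x + torch.relu(self.spectral(x) + self.local(x))
        return x

u = torch.randn(2, 32, 128)          # (batch, channels, grid points)
print(ImplicitFNO1d()(u).shape)      # torch.Size([2, 32, 128])
```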
Procedia PDF Downloads 73