Search results for: applicability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 683

203 Development of Transmission and Packaging for Parallel Hybrid Light Commercial Vehicle

Authors: Vivek Thorat, Suhasini Desai

Abstract:

The hybrid electric vehicle is widely accepted as a promising short- to mid-term technical solution because of its noticeably improved efficiency and low emissions at competitive cost. Retrofitting hybrid components into a conventional vehicle is so far the most practical route to better performance, but it usually requires major, costly modifications. This paper focuses on the development of a P3x prototype: a rear-wheel-drive parallel hybrid electric Light Commercial Vehicle (LCV) requiring only minimal, low-cost modifications. This diesel hybrid LCV differs from other hybrids in its powertrain: the additional drive consists of a continuous-contact helical gear pair followed by a chain and sprocket that couples the traction motor, and the powertrain is designed for the intended high-speed application. This work covers the design, development, and packaging of this parallel diesel-electric vehicle, which exploits multimode hybrid advantages. To demonstrate the practical applicability of the transmission in the P3x configuration, a concept prototype vehicle was built with the transmission integrated. The hybrid system makes it easy to retrofit an existing vehicle because the required changes to the chassis are minimal. The added system is designed for five main operating modes: engine-only, electric-only, hybrid power, engine-charging-battery, and regenerative braking. Driving performance, fuel economy, and emissions were measured and analyzed over a given drive cycle. Finally, the first vehicle prototype was tested experimentally on a chassis dynamometer using the MIDC driving cycle. The results showed that the prototype hybrid vehicle is about 27% faster than the equivalent conventional vehicle, and fuel economy is improved by approximately 20-25% compared to the conventional powertrain.
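The five operating modes above can be sketched as a simple rule-based supervisory controller. This is an illustrative sketch only: the mode names follow the abstract, but the power and state-of-charge thresholds and all function names are hypothetical, not taken from the prototype's actual control strategy.

```python
from enum import Enum

class Mode(Enum):
    ENGINE_ONLY = "engine only"
    ELECTRIC_ONLY = "electric only"
    HYBRID_POWER = "hybrid power"
    ENGINE_CHARGING = "engine charging battery"
    REGEN_BRAKING = "regenerative braking"

def select_mode(demand_kw, soc, braking):
    """Pick an operating mode from power demand (kW), battery
    state of charge (0..1), and a braking flag. Thresholds are
    hypothetical placeholders."""
    if braking:
        return Mode.REGEN_BRAKING          # recover braking energy
    if demand_kw <= 15.0 and soc > 0.3:
        return Mode.ELECTRIC_ONLY          # light load, battery healthy
    if demand_kw > 60.0:
        return Mode.HYBRID_POWER           # engine + motor together
    if soc < 0.3:
        return Mode.ENGINE_CHARGING        # engine drives and recharges
    return Mode.ENGINE_ONLY
```

A real supervisor would also consider vehicle speed, battery temperature, and driver-demand history, but the structure is the same: a prioritized set of rules mapping vehicle state to one of the five modes.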

Keywords: P3x configuration, LCV, hybrid electric vehicle, ROMAX, transmission

Procedia PDF Downloads 252
202 Teachers' Beliefs About the Environment: The Case of Azerbaijan

Authors: Aysel Mehdiyeva

Abstract:

As a driving force of society, teachers play an important role in inspiring, motivating, and encouraging the younger generation to protect the environment. In light of this, the study explores teachers' beliefs in order to understand their engagement with teaching about the environment. Although teachers' beliefs about the environment have been examined by a number of researchers, the influence of these beliefs on teachers' professional lives and classroom instruction has not been widely investigated in Azerbaijan. To this end, the study aims to reveal the beliefs of secondary school geography teachers about the environment and to find out how those beliefs are enacted in classroom practice in Azerbaijan. Different frameworks have been suggested for measuring environmental beliefs, stemming from the well-known anthropocentric and biocentric worldviews. The study draws on Dunlap's New Ecological Paradigm (NEP) to formulate the interview questions, since discussion with teachers around these questions aligns with the research aims and captures their beliefs about the environment well. Despite the extensive applicability of the NEP scale, it has not been used to explore in-service teachers' beliefs about the environment; moreover, it has typically served as a quantitative instrument, whereas this study employs it within a qualitative framework. The participants for the semi-structured interviews and observations were recruited via purposeful sampling. Taking teachers as the unit of analysis addresses a gap in the literature, since how teachers' beliefs relate to their classroom instruction in the environmental context, and teachers' environmental beliefs in Azerbaijan more generally, have not been well researched. Six geography teachers from four different schools took part in the research. The schools are located in one of the most polluted parts of the capital, Baku, where the first oil well in the world was drilled in 1848; the area is called the "Black City" because of the black smoke and smell that once covered that part of the city. Semi-structured interviews were conducted with the teachers to reveal their stated beliefs, and the teachers were then observed during geography classes to gauge the overlap between the ideas expressed in the interviews and their teaching practice. The research findings aim to document teachers' ecological beliefs and practice and to elaborate on possible causes of compatibility or incompatibility between teachers' stated and observed beliefs.

Keywords: environmental education, anthropocentric beliefs, biocentric beliefs, new ecological paradigm

Procedia PDF Downloads 100
201 Exploring Regularity Results in the Context of Extremely Degenerate Elliptic Equations

Authors: Zahid Ullah, Atlas Khan

Abstract:

This research endeavors to explore the regularity properties associated with a specific class of equations, namely extremely degenerate elliptic equations. These equations hold significance in understanding complex physical systems like porous media flow, with applications spanning various branches of mathematics. The focus is on unraveling and analyzing regularity results to gain insights into the smoothness of solutions for these highly degenerate equations. Elliptic equations, fundamental in expressing and understanding diverse physical phenomena through partial differential equations (PDEs), are particularly adept at modeling steady-state and equilibrium behaviors. However, within the realm of elliptic equations, the subset of extremely degenerate cases presents a level of complexity that challenges traditional analytical methods, necessitating a deeper exploration of mathematical theory. While elliptic equations are celebrated for their versatility in capturing smooth and continuous behaviors across different disciplines, the introduction of degeneracy adds a layer of intricacy. Extremely degenerate elliptic equations are characterized by coefficients approaching singular behavior, posing non-trivial challenges in establishing classical solutions. Still, the exploration of extremely degenerate cases remains uncharted territory, requiring a profound understanding of mathematical structures and their implications. The motivation behind this research lies in addressing gaps in the current understanding of regularity properties within solutions to extremely degenerate elliptic equations. The study of extreme degeneracy is prompted by its prevalence in real-world applications, where physical phenomena often exhibit characteristics defying conventional mathematical modeling. Whether examining porous media flow or highly anisotropic materials, comprehending the regularity of solutions becomes crucial. 
Through this research, the aim is to contribute not only to the theoretical foundations of mathematics but also to the practical applicability of mathematical models in diverse scientific fields.
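A standard textbook example of the kind of degenerate elliptic operator discussed above (chosen here purely for illustration; the abstract does not specify the exact equation class under study) is the p-Laplacian:

```latex
\operatorname{div}\!\left( |\nabla u|^{\,p-2}\,\nabla u \right) = f
\quad \text{in } \Omega, \qquad p > 2 .
```

The modulus of ellipticity |∇u|^{p-2} vanishes wherever ∇u = 0, so the equation degenerates precisely on the critical set of the solution; it is this loss of uniform ellipticity that obstructs classical regularity theory and motivates the weaker notions of regularity the abstract refers to.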

Keywords: elliptic equations, extremely degenerate, regularity results, partial differential equations, mathematical modeling, porous media flow

Procedia PDF Downloads 67
200 Application of Seismic Refraction Method in Geotechnical Study

Authors: Abdalla Mohamed M. Musbahi

Abstract:

The study area lies in the Al-Falah area along the Tripoli Airport road, in Zone (16), where a multi-storey residential and commercial complex is planned. The zone was divided into seven subzones, and in each subzone orthogonal profiles were collected using the seismic refraction method. The overall aim of this project is to investigate the applicability of seismic refraction, a commonly used traditional geophysical technique for determining depth to bedrock, competence of bedrock, depth to the water table, or depth to other seismic velocity boundaries. The purpose of the work is to make engineers and decision makers recognize the importance of planning and executing a pre-investigation program that includes geophysics, and in particular the seismic refraction method. This aim is pursued by evaluating the seismic refraction method at different scales: determining the depth and velocity of the base layer (bedrock) and calculating the elastic properties of each layer in the region. Orthogonal profiles were carried out in every subzone of Zone (16). In the seismic refraction layout, the geophones are placed along a straight line with 5 m spacing, and three shot points (at the beginning, middle, and end of the layout) are used to generate the P and S waves. The first and last shot points are placed about 5 m from the end geophones, and the middle shot point is placed between the 12th and 13th geophones. From the time-distance curves, the P- and S-wave velocities were calculated and the layer thicknesses were estimated for up to three layers. Any change in the physical properties of the medium (shear modulus, bulk modulus, density) changes the velocity of the waves passing through it, because changes in rock properties alter the medium parameters: density (ρ), bulk modulus (κ), and shear modulus (μ). The velocities of waves traveling in rocks therefore have a close relationship with these parameters, which can be estimated from the primary and secondary velocities (P-wave, S-wave).
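The velocity-moduli relations invoked above are the standard isotropic elastic formulas vp = sqrt((κ + 4μ/3)/ρ) and vs = sqrt(μ/ρ); inverting them recovers the moduli from field-measured velocities. A minimal sketch (the function name and example values are illustrative, not taken from the survey):

```python
def elastic_moduli(vp, vs, rho):
    """Isotropic elastic moduli from P- and S-wave velocities.

    vp, vs in m/s; rho in kg/m^3; returns (bulk, shear) in Pa.
    """
    mu = rho * vs ** 2                                # shear modulus
    kappa = rho * (vp ** 2 - (4.0 / 3.0) * vs ** 2)   # bulk modulus
    return kappa, mu

# Example: vp = 2000 m/s, vs = 1000 m/s, rho = 2000 kg/m^3
kappa, mu = elastic_moduli(2000.0, 1000.0, 2000.0)
```

The same two velocities also yield Young's modulus and Poisson's ratio through the usual isotropic conversions, which is how refraction surveys report the "elastic property in each layer".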

Keywords: application of seismic, geotechnical study, physical properties, seismic refraction

Procedia PDF Downloads 488
199 Development of Pothole Management Method Using Automated Equipment with Multi-Beam Sensor

Authors: Sungho Kim, Jaechoul Shin, Yujin Baek, Nakseok Kim, Kyungnam Kim, Shinhaeng Jo

Abstract:

Climate change and increasing heavy traffic have been accelerating damage such as potholes on asphalt pavement. Potholes cause traffic accidents, vehicle damage, road casualties, and congestion, and because they originate from stripping and accelerate pavement distress, a quick and efficient maintenance method is needed. In this study, we propose rapid, systematic pothole management based on automated repair equipment that includes a pothole volume measurement system. Three cold-mix asphalt mixtures were investigated as candidate repair materials and evaluated for compliance with the quality standard and for applicability to the automated equipment. The volume measurement system combines a laser sensor and an ultrasonic sensor in a multi-sensor unit installed at the front and side of the automated repair equipment. An algorithm was proposed to calculate the amount of repair material from the measured pothole volume, and a system for releasing the correct amount of material was developed. Field test results showed that the loss of repair material could be reduced from approximately 20% to 6% per pothole. Rapid automated pothole repair equipment will contribute to quality improvement and to efficient, economical maintenance, not only by reducing material and resource use but also by dispensing the appropriate amount of material. Through field application, it is possible to improve the accuracy of pothole volume measurement, to refine the material amount calculation, and to manage pothole data for the road network, enabling more efficient pavement maintenance management. Acknowledgment: The authors thank the Ministry of Land, Infrastructure, and Transport (MOLIT). This work was carried out through a project funded by MOLIT, titled 'development of 20mm grade for road surface detecting roadway condition and rapid detection automation system for removal of pothole'.
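The material-amount step described above can be sketched as a simple sizing rule: dispense the measured volume times the mix density, inflated by a loss margin. The numbers below are illustrative assumptions (a typical cold-mix density, and the 6% loss figure reported in the field test), not the project's actual algorithm.

```python
def repair_material_kg(volume_m3, density_kg_m3=2350.0, loss_fraction=0.06):
    """Repair material mass for one pothole: measured volume times
    mix density, inflated by an assumed loss fraction."""
    return volume_m3 * density_kg_m3 * (1.0 + loss_fraction)

# Example: a 10-litre (0.010 m^3) pothole
mass = repair_material_kg(0.010)
```

In the deployed system the volume input would come from the fused laser/ultrasonic scan rather than a hand measurement, and the loss fraction would be calibrated from field trials.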

Keywords: automated equipment, management, multi-beam sensor, pothole

Procedia PDF Downloads 222
198 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code applies. Several methods can be used to estimate such reliability levels, and many of them require an explicit limit state function (LSF). When the LSF is not available in closed form, simulation techniques are often employed, but these are computationally intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes such as the finite element method (FEM) or computational mechanics may be required; in such cases it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation may be necessary to compute reliability levels. To avoid the large number of simulations needed when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. Its advantage is that only the probabilistic moments of the random variables are required; however, the resulting moments of the LSF must then be fitted to a probability density function (PDF). The present study employs a very simple alternative that allows reliability levels to be assessed when no explicit LSF is available and without extensive simulation. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results from the Monte Carlo simulation (MCS) technique are included. To overcome the problem of mapping the probabilistic moments from the PEM to a PDF, a well-known distribution is employed, and the approach mixes the PEM with a classic reliability method (the first order reliability method, FORM). The results of the present study are in good agreement with those computed by MCS. Mixing the reliability methods is therefore a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM, or computational mechanics are employed.
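Rosenblueth's point estimate method referred to above can be sketched as follows for uncorrelated variables with symmetric distributions: evaluate the LSF at the 2^n combinations of mean plus/minus one standard deviation, weight each point equally, and take moments. This is a textbook sketch, not the paper's exact mixed PEM-FORM scheme; the linear limit state g = R - S in the example is hypothetical.

```python
import itertools
import math

def pem_moments(g, means, sigmas):
    """Rosenblueth's 2^n point estimate of the mean and standard
    deviation of g(X) for uncorrelated, symmetrically distributed X."""
    n = len(means)
    vals = []
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, sigmas)]
        vals.append(g(x))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, math.sqrt(var)

# Linear limit state g = R - S (resistance minus load), for which the
# PEM moments are exact; the reliability index is beta = mu_g / sigma_g.
mu_g, sd_g = pem_moments(lambda x: x[0] - x[1], [10.0, 5.0], [1.0, 1.0])
beta = mu_g / sd_g
```

In the mixed approach described in the abstract, g(x) would be a call into a finite element model rather than a closed-form expression, and the resulting moments would be fed into a FORM-style computation through an assumed distribution.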

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, monte carlo simulation

Procedia PDF Downloads 344
197 Cadmium Telluride Quantum Dots (CdTe QDs)-Thymine Conjugate Based Fluorescence Biosensor for Sensitive Determination of Nucleobases/Nucleosides

Authors: Lucja Rodzik, Joanna Lewandowska-Lancucka, Michal Szuwarzynski, Krzysztof Szczubialka, Maria Nowakowska

Abstract:

The analysis of nucleobases is of great importance for bioscience, since their abnormal concentration in body fluids suggests deficiency and mutation of the immune system and is considered an important parameter in the diagnosis of various diseases. The presented conjugate meets the need for an effective, selective, and highly sensitive sensor for nucleobase/nucleoside detection. A novel, highly fluorescent conjugate of cadmium telluride quantum dots (CdTe QDs) functionalized with thymine and stabilized with thioglycolic acid (TGA) has been developed and thoroughly characterized. Successful formation of the material was confirmed by elemental analysis and by UV-Vis, fluorescence, and FTIR spectroscopies. The crystalline structure of the product was characterized by the X-ray diffraction (XRD) method, and the composition of the CdTe QDs and their thymine conjugate was examined using X-ray photoelectron spectroscopy (XPS). The size of the CdTe-thymine particles was 3-6 nm, as demonstrated by atomic force microscopy (AFM) and high-resolution transmission electron microscopy (HRTEM) imaging. A fluorescence band at 540 nm on excitation at 351 nm was observed for these nanoparticles; its intensity increased with the amount of conjugated thymine, with no shift in position. Based on the fluorescence measurements, the CdTe-thymine conjugate was found to interact efficiently and selectively not only with adenine, the nucleobase complementary to thymine, but also with nucleosides and adenine-containing modified nucleosides, i.e., 5′-deoxy-5′-(methylthio)adenosine (MTA) and 2'-O-methyladenosine, urinary tumor markers that allow monitoring of disease progression. The applicability of the CdTe-thymine sensor to real samples was also investigated under simulated urine conditions. The high sensitivity and selectivity of CdTe-thymine fluorescence towards adenine, adenosine, and modified adenosine suggest that the conjugate can be useful in developing a biosensor for complementary nucleobase/nucleoside detection.

Keywords: CdTe quantum dots, conjugate, sensor, thymine

Procedia PDF Downloads 408
196 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter

Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai

Abstract:

Over the past two decades, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases; the retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics, and more and more OCT images are being generated. For these images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy due to multiplicative speckle noise, and simple edge detection algorithms are unsuitable for detecting retinal layer boundaries. Intensity fluctuation, motion artefacts, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images using a Kalman filter, a tool commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT volume to track the retinal layer boundaries through the slices within the volume, thereby segmenting the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified and curves are fitted to them, so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients form the state space for the Kalman filter, which produces an optimal estimate of the current state of the system by updating its previous state with the available measurements in a feedback loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. One limitation of the current algorithm is that the curve representation of a retinal layer boundary does not work well where the boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different boundary representation, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
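The slice-to-slice tracking described above can be sketched with a linear Kalman filter whose state is the vector of fitted boundary-curve coefficients. This is a minimal sketch assuming a random-walk state model with identity transition and measurement matrices; the noise levels q and r are illustrative placeholders, not values from the paper.

```python
import numpy as np

def kalman_track(coeff_measurements, q=1e-4, r=1e-2):
    """Track boundary-curve coefficients slice to slice with a
    random-walk Kalman filter (F = H = I).

    coeff_measurements: (n_slices, n_coeffs) array of per-slice
    curve-fit coefficients; returns smoothed estimates, same shape.
    """
    z_all = np.asarray(coeff_measurements, dtype=float)
    n = z_all.shape[1]
    x = z_all[0].copy()          # initial state: first slice's fit
    P = np.eye(n)                # state covariance
    Q = q * np.eye(n)            # process noise (slice-to-slice drift)
    R = r * np.eye(n)            # measurement (curve-fit) noise
    estimates = [x.copy()]
    for z in z_all[1:]:
        P = P + Q                        # predict
        K = P @ np.linalg.inv(P + R)     # Kalman gain
        x = x + K @ (z - x)              # update with the fitted coeffs
        P = (np.eye(n) - K) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

Because the gain blends each slice's noisy curve fit with the prediction from the previous slice, boundaries evolve smoothly through the volume instead of jumping wherever a single B-scan's fit is corrupted by speckle.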

Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking

Procedia PDF Downloads 479
195 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive inverse modeling problem in which an exoplanet's atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, using algorithms that compare large numbers of known atmospheric models to the input spectral data, and runtimes are directly proportional to the number of parameters under consideration. These power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance, so traditional Bayesian sampling forces a compromise between model complexity and sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on previous models' speed and accuracy. We demonstrate the efficacy of artificial intelligence in quickly and reliably predicting atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to run on low-power application-specific integrated circuits. Applying edge computing to atmospheric retrievals allows real or near-real-time quantification of atmospheric constituents at the instrument level; edge computing also provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for developing a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 164
194 Enhancing Teaching of Engineering Mathematics

Authors: Tajinder Pal Singh

Abstract:

Teaching mathematics to engineering students is an open-ended problem in education. The main goal of mathematics learning for engineering students is the ability to apply a wide range of mathematical techniques and skills in their engineering classes and later in their professional work. Many undergraduate engineering students and faculty feel that little effort is made to demonstrate the applicability of the mathematical topics that are taught, which makes mathematics unappealing to some engineering faculty and their students. A lack of understanding of concepts in engineering mathematics may hinder the understanding of other concepts or even whole subjects. For most undergraduate engineering students, mathematics is one of the most difficult courses in their field of study; many never understood mathematics or never liked it, because it was too abstract and they could not relate to it. Only the right balance of application-based and concept-based teaching can fulfill the objectives of teaching mathematics to engineering students, and it will surely improve their problem-solving and creative-thinking skills. In this paper, some practical (informal) ways of making mathematics teaching application-based for engineering students are discussed. An attempt is made to understand the present state of mathematics teaching in engineering colleges; the weaknesses and strengths of the current approach are elaborated, some causes of the subject's unpopularity are analyzed, and a few pragmatic suggestions are made. Faculty in mathematics courses should spend more time discussing applications and conceptual underpinnings rather than focusing solely on strategies and techniques for solving problems, and should introduce more 'word' problems, since such problems are commonly encountered in engineering courses. Overspecialization in engineering education should not occur at the expense of (or by diluting) mathematics and the basic sciences. The role of engineering education is to provide fundamental knowledge and to teach students a simple methodology of self-learning and self-development. All these issues would be better addressed if mathematics and engineering faculty joined hands to plan and design the learning experiences of the students who take their classes: when faculty stop competing against each other and start tackling the situation together, they will perform better. Without creating any administrative hassle, these suggestions can be used by any young, inexperienced mathematics faculty member to inspire engineering students to learn engineering mathematics effectively.

Keywords: application based learning, conceptual learning, engineering mathematics, word problem

Procedia PDF Downloads 229
193 Transient Response of Elastic Structures Subjected to a Fluid Medium

Authors: Helnaz Soltani, J. N. Reddy

Abstract:

The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational models for fluid-structure interaction (FSI) problems has broad impact on realistic problems encountered in the aerospace, ship, and oil and gas industries, among others, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate way to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach for studying the transient response of structures interacting with a fluid medium. Since beams and plates are the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems to demonstrate its efficiency. The formulation combines various structural theories with the solid-fluid interface boundary condition, which represents the interaction between the solid and fluid regimes. Three different beam theories and three different plate theories are considered to model the solid medium, and the Navier-Stokes equations govern the fluid domain. For each theory, a coupled set of equations is derived in which the element matrices of both regimes are calculated by Gaussian quadrature. The main feature of the proposed methodology is to model the fluid domain as an added mass: an external distributed force due to the presence of the fluid. We validate the accuracy of the formulation by means of numerical examples. Since the formulation covers several theories in the literature, the applicability of the proposed approach is independent of the structure's geometry. The effects of varying parameters such as structure thickness ratio, fluid density, and immersion depth are studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably by the presence of a fluid medium.
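The added-mass idea described above can be written compactly: if the fluid's inertial effect on the wetted structure is condensed into an added-mass matrix, the coupled semi-discrete system reduces to a structural system with augmented inertia. The notation below is generic (M_s, M_a, C, K are the structural mass, added mass, damping, and stiffness matrices), not necessarily the paper's exact symbols:

```latex
\mathbf{M}_s \ddot{\mathbf{u}} + \mathbf{C}\dot{\mathbf{u}} + \mathbf{K}\mathbf{u}
   = \mathbf{F}_{\text{ext}} - \mathbf{M}_a \ddot{\mathbf{u}}
\quad\Longrightarrow\quad
\left(\mathbf{M}_s + \mathbf{M}_a\right) \ddot{\mathbf{u}}
   + \mathbf{C}\dot{\mathbf{u}} + \mathbf{K}\mathbf{u}
   = \mathbf{F}_{\text{ext}} .
```

The practical appeal is that the augmented system can be integrated with any standard structural time-stepping scheme, with the fluid entering only through M_a.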

Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response

Procedia PDF Downloads 562
192 Broadband Optical Plasmonic Antennas Using Fano Resonance Effects

Authors: Siamak Dawazdah Emami, Amin Khodaei, Harith Bin Ahmad, Hairul A. Adbul-Rashid

Abstract:

The Fano resonance effect in plasmonic nanoparticle materials gives such materials a number of unique optical properties and potential applicability to sensing, nonlinear devices, and slow-light devices. A Fano resonance is a consequence of coherent interference between superradiant and subradiant hybridized plasmon modes. Incident light drives the superradiant modes, which in turn excite the subradiant modes; the subradiant modes possess near-zero net dipole moments and comparably negligible coupling to light. This work details the derivation of an electrodynamic coupling model for the interaction of dipolar transitions and radiation in plasmonic nanoclusters such as quadrumers, pentamers, and heptamers. The directivity is calculated in order to quantify the redirection of emission. The geometry of a configured array of nanostructures strongly influences the transmission and reflection properties, so the directivity of each antenna depends on the nanosphere size and the gap distances between the nanospheres in each structure. A well-separated configuration of nanospheres behaves similarly to monomers, with a broad superradiant spectral peak centered near a wavelength of 560 nm. Reducing the distance between ring nanospheres in pentamers and heptamers to 20-60 nm increases the coupling factor and charge redistribution, invoking a subradiant mode centered near 690 nm. Increasing the distance between the outer-ring nanospheres and the central nanosphere decreases the coupling factor, which is inversely proportional to the cube of the distance between nanospheres. This leads to a dramatic weakening of the superradiant mode at a 200 nm distance between the central nanosphere and the outer ring, and the superradiant mode vanishes beyond a 240 nm separation.
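The inverse-cube scaling of the coupling factor reported above is the near-field dipole-dipole distance dependence; normalized to a reference gap, it can be sketched as follows (the reference gap d0 and the unit prefactor are illustrative, not fitted values from the study):

```python
def coupling_factor(d_nm, c0=1.0, d0_nm=20.0):
    """Near-field dipole-dipole coupling, scaling as 1/d^3,
    normalized so that coupling_factor(d0_nm) == c0."""
    return c0 * (d0_nm / d_nm) ** 3

# Doubling the gap from 20 nm to 40 nm cuts the coupling eightfold
ratio = coupling_factor(20.0) / coupling_factor(40.0)
```

This steep falloff is why the subradiant (Fano) feature appears only for the 20-60 nm gaps and disappears entirely once the ring separation exceeds roughly 240 nm.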

Keywords: fano resonance, optical antenna, plasmonic, nano-clusters

Procedia PDF Downloads 427
191 Modern Methods of Construction (MMC): The Potentials and Challenges of Using Prefabrication Technology for Building Modern Houses in Afghanistan

Authors: Latif Karimi, Yasuhide Mochida

Abstract:

The purpose of this paper is to study Modern Methods of Construction (MMC), specifically prefabrication technology, and to assess the applicability, suitability, and benefits of this construction technique over conventional methods for building new houses in Afghanistan. The construction industry and house-building sector are key contributors to Afghanistan's economy; however, the sector suffers from a lack of innovation and from the severe environmental impacts of the large amounts of construction waste generated by building, demolition, and renovation activities. This paper studies prefabrication, a popular MMC that is becoming more common, improving in quality, and available across a variety of budgets. Several feasibility studies worldwide have found this method to be the way forward in improving construction industry performance, as it has been proven to reduce construction time and construction waste and to improve the environmental performance of construction processes. In addition, this study emphasizes 'sustainability' in house building, a common challenge in housing construction projects on a global scale. The challenge is more severe in underdeveloped countries like Afghanistan, where most houses are built in the absence of serious quality control and with little regard for the basic requirements of sustainable housing: well-being, cost-effectiveness, minimization and prevention of waste during construction and use, and environmental impacts from a life-cycle-assessment perspective. Methodology: a literature review and a study of conventional house-building practices in urban areas of Afghanistan. A survey is also being completed to study the potentials and challenges of using prefabrication technology for building modern houses in cities across the country. A residential housing project is selected as a case study to compare the drawbacks of current construction methods with prefabrication for building a new house. Originality: little previous research is available on MMC and its specific sustainability impacts in relation to house-building practices. This study will be of interest to a broad range of people, including planners, construction managers, builders, and house owners.

Keywords: modern methods of construction (MMC), prefabrication, prefab houses, sustainable construction, modern houses

Procedia PDF Downloads 240
190 Increasing Holism: Qualitative, Cross-Dimensional Study of Contemporary Innovation Processes

Authors: Sampo Tukiainen, Jukka Mattila, Niina Erkama, Erkki Ormala

Abstract:

During the past decade, calls for more holistic and integrative organizational innovation research have been increasingly voiced. On the one hand, from the theoretical perspective, the reason has been the tendency of contemporary innovation studies to focus on disciplinary subfields, often leading to challenges in integrating theories in meaningful ways. For example, during the past three decades innovation research has evolved into an academic field consisting of several independent research streams, such as studies on organizational learning, project management, and top management teams, to name but a few. Innovation research has also proliferated along different dimensions of innovation, such as the sources, drivers, forms, and nature of innovation. On the other hand, from the practical perspective, the rationale has been the need to understand the solving of complex, interdisciplinary issues and problems in contemporary and future societies and organizations. Therefore, for advancing theorizing, as well as the practical applicability of organizational innovation research, we acknowledge the need for more integrative and holistic perspectives and approaches. We contribute to addressing this challenge by developing a ‘box transcendent’ perspective to examine interlinkages in and across four key dimensions of organizational innovation processes, which have traditionally been studied in separate research streams. Building on an in-depth, qualitative analysis of 123 interviews with CTOs (or equivalent) and CEOs of top innovative Finnish companies, as well as three in-depth case studies, both as part of an EU-level interview study of more than 700 companies, we specify interlinkages in and between i) strategic management, ii) innovation management, iii) implementation and organization, and iv) commercialization in innovation processes. We contribute to the existing innovation research in multiple ways. 
Firstly, we develop a cross-dimensional, ‘box transcendent’ conceptual model at the level of organizational innovation process. Secondly, this modeling enables us to extend existing theorizing by allowing us to distinguish specific cross-dimensional innovation ‘profiles’ in two different company categories: large multinational corporations and SMEs. Finally, from the more practical perspective, we consider the implications of such innovation ‘profiles’ for the societal and institutional, policy-making development.

Keywords: holistic research, innovation management, innovation studies, organizational innovation

Procedia PDF Downloads 321
189 Colour and Travel: Design of an Innovative Infrastructure for Travel Applications with Entertaining and Playful Features

Authors: Avrokomi Zavitsanou, Spiros Papadopoulos, Theofanis Alexandridis

Abstract:

This paper presents the research project ‘Colour & Travel’, which is co-funded by the European Union and national resources through the Operational Programme “Competitiveness, Entrepreneurship and Innovation” 2014-2020, under the Single RTDI State Aid Action "RESEARCH - CREATE - INNOVATE". The research project proposes the design of an innovative, playful framework for exploring a variety of travel destinations and creating personalised travel narratives, aiming to entertain, educate, and promote culture and tourism. Gamification of the cultural and touristic environment can enhance its experiential, multi-sensory aspects and broaden travelers' perception. Travelers' involvement in creating and shaping their personal travel narratives, and the possibility of sharing them with others, can offer an alternative, more engaging way of getting acquainted with a place. In particular, the paper presents the design of an infrastructure: (a) for the development of interactive travel guides for mobile devices, in which sites with specific points of interest are recommended and the user can interact with them in playful ways before creating personal travel narratives; (b) for the development of innovative games within a virtual reality environment, where interaction is offered while the user moves within the virtual environment; and (c) for an online application where the content is offered through the browser using modern 3D imaging technologies (WebGL). The technological products developed within the proposed project can strengthen important sectors of economic and social life, such as trade, tourism, the exploitation and promotion of the cultural environment, and the creative industries.
The final applications delivered at the end of the project will guarantee an improved level of service for visitors and will be a useful tool for content creators, with increased adaptability, extensibility, and applicability in many regions of Greece and abroad. This paper aims to present the research project by referencing the state of the art and the methodological scheme, ending with a brief reflection on the expected outcomes.

Keywords: gamification, culture, tourism, AR, VR, applications

Procedia PDF Downloads 139
188 A Facile One Step Modification of Poly(dimethylsiloxane) via Smart Polymers for Biomicrofluidics

Authors: A. Aslihan Gokaltun, Martin L. Yarmush, Ayse Asatekin, O. Berk Usta

Abstract:

Poly(dimethylsiloxane) (PDMS) is one of the most widely used materials in the fabrication of microfluidic devices. It is easily patterned and can replicate features down to nanometers. Its flexibility, gas permeability that allows oxygenation, and low cost also drive its wide adoption. However, a major drawback of PDMS is its hydrophobicity and fast hydrophobic recovery after surface hydrophilization. This results in significant non-specific adsorption of proteins as well as of small hydrophobic molecules such as therapeutic drugs, limiting the utility of PDMS in biomedical microfluidic circuitry. While silicon, glass, and thermoplastics have been used, they come with problems of their own, such as rigidity, high cost, and special tooling needs, which limit their use to a smaller user base. Many strategies to alleviate these common problems with PDMS lack general practical applicability or have limited shelf lives in terms of the modifications they achieve. This restricts large-scale implementation and adoption by industrial and research communities. Accordingly, we aim to tailor biocompatible PDMS surfaces by developing a simple, one-step bulk modification approach with novel smart materials to reduce non-specific molecular adsorption and to stabilize long-term cell analysis with PDMS substrates. Smart polymers, blended with PDMS during device manufacture, spontaneously segregate to surfaces in contact with aqueous solutions and create a < 1 nm layer that reduces non-specific adsorption of organic molecules and biomolecules. Our methods are fully compatible with existing PDMS device manufacture protocols, without any additional processing steps. We have demonstrated that our modified PDMS microfluidic system is effective at blocking the adsorption of proteins while retaining the viability of primary rat hepatocytes and preserving the biocompatibility, oxygen permeability, and transparency of the material. 
We expect this work will enable the development of fouling-resistant biomedical materials from microfluidics to hospital surfaces and tubing.

Keywords: cell culture, microfluidics, non-specific protein adsorption, PDMS, smart polymers

Procedia PDF Downloads 290
187 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart, whose main goal is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling’s T2 chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, modern complicated manufacturing systems require control chart techniques that can efficiently handle nonnormal processes. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based and k-nearest-neighbor-based charts, have proven their improved performance in nonnormal situations compared to the T2 chart. Besides nonnormality, time-varying operations are also quite common in real manufacturing fields because of factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drifting. Traditional control charts, however, cannot accommodate future condition changes of the process because they are formulated from data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart that is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations. Moreover, we defined an updating region for an efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. 
The effectiveness and applicability of the proposed chart were demonstrated through experiments with the simulated data and the real data from the metal frame process in mobile device manufacturing.
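The time-adaptive weighting at the heart of the proposed chart can be illustrated with a deliberately simplified sketch. The snippet below replaces the kernelized SVDD boundary with a weighted centroid and a quantile radius (an assumption made for brevity; the actual chart solves a weighted SVDD dual problem), but the exponential down-weighting of older samples follows the idea described in the abstract.

```python
import numpy as np

def adaptive_boundary(X, lam=0.97, quantile=0.99):
    """Fit a weight-adaptive spherical boundary to process data.

    Exponential weights lam**(n-1-i) emphasize recent samples, mimicking
    the time-adaptive weighting of the proposed SVDD chart; a true SVDD
    would solve a kernelized dual problem rather than use a weighted mean.
    """
    n = len(X)
    w = lam ** np.arange(n - 1, -1, -1)    # newest sample gets weight 1
    w /= w.sum()
    center = (w[:, None] * X).sum(axis=0)  # weighted centroid of the data
    d = np.linalg.norm(X - center, axis=1)
    radius = np.quantile(d, quantile)      # empirical control limit
    return center, radius

def out_of_control(x_new, center, radius):
    """Signal when a new observation falls outside the boundary."""
    return np.linalg.norm(x_new - center) > radius
```

In the full scheme, the boundary would be refit only when new in-control observations fall inside the updating region near the control limit, keeping model updates efficient.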

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 296
186 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations

Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan

Abstract:

Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality-check step ensured the integrity of the sequencing data, while the trimming process removed low-quality bases and adapters. In the alignment step, reads were mapped to the reference genome, and the subsequent variant-calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The computational pipeline thus presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention, and its modularity and flexibility enable customization and adaptation to various datasets and research settings.
Further optimization and validation are necessary to enhance its performance and applicability across diverse populations.
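The five stages the abstract describes map naturally onto a sequence of shell commands. The sketch below only assembles those command strings (it does not run the tools); the file names and flags are illustrative assumptions rather than the authors' actual parameters.

```python
def build_pipeline(sample, reads1, reads2, ref="hg38.fa"):
    """Assemble illustrative shell commands for a variant-calling pipeline.

    Tool names follow those listed in the abstract (FastQC, Trimmomatic,
    BWA, bcftools, ANNOVAR); the specific options shown are common usage,
    not the authors' actual settings.
    """
    trimmed1, trimmed2 = f"{sample}_1.trim.fq", f"{sample}_2.trim.fq"
    bam, vcf = f"{sample}.bam", f"{sample}.vcf"
    return [
        f"fastqc {reads1} {reads2}",                              # quality check
        f"trimmomatic PE {reads1} {reads2} {trimmed1} /dev/null "
        f"{trimmed2} /dev/null SLIDINGWINDOW:4:20 MINLEN:36",     # trim reads
        f"bwa mem {ref} {trimmed1} {trimmed2} "
        f"| samtools sort -o {bam} -",                            # align + sort
        f"bcftools mpileup -f {ref} {bam} "
        f"| bcftools call -mv -o {vcf}",                          # call variants
        f"table_annovar.pl {vcf} humandb/ -buildver hg38 "
        f"-vcfinput -protocol refGene -operation g",              # annotate
    ]
```

Each command could then be dispatched in order with `subprocess.run(cmd, shell=True, check=True)` once the tools and reference genome are installed.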

Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers

Procedia PDF Downloads 67
185 From Knives to Kites: Developments and Dilemmas around the Use of Force in the Israeli–Palestinian Conflict since "Protective Edge"

Authors: Hilly Moodrick-Even Khen

Abstract:

This study analyzes the legal regulation of the use of force in international law in the context of three emerging Palestinian forms of struggle against Israeli occupation: the Knife Intifada, Gaza border disturbances, and the launching of incendiary kites. It discusses what legal paradigms or concepts should regulate the type and level of force used in each situation—a question that is complicated by various dilemmas—and appraises the policies the Israel Defence Forces has tailored in response. Methodologically, the study is based on analysis of scholarship on the conceptual legal issues as well as dicta of the courts. It evaluates the applicability of two legal paradigms regulating the use of force in military operations—(i) the conduct of hostilities and (ii) law enforcement—as well as the concept of self-defense in international law and the escalation of force procedure. While the “Knife Intifada” clearly falls under the law enforcement paradigm, the disturbances at the border and the launching of incendiary kites raise more difficult questions, as applying law enforcement, especially in the latter case, can have undesirable ramifications for safeguarding humanitarian interests. The use of force in the cases of the border disturbances and the incendiary kites should thus be regulated, mutatis mutandis, by the concept of self-defense and escalation of force procedures; and in the latter case, the hostilities paradigm can also be applied. The study provides a factual description and analysis of the background and nature of the forms of struggle in Gaza and the West Bank—in each case surveying the geo-political developments since Operation Protective Edge, contextualizing how the organized and unorganized violent activities evolved, and analyzing them in terms of level of organization and intensity. It then presents the two paradigms of the use of force—law enforcement and conduct of hostilities—and the concept of self-defense. 
Lastly, it uses the factual findings as the basis for legally analyzing which paradigm or concept regulating the use of force applies for each form of struggle. The study concludes that in most cases, the concept of self-defense is preferable to the hostilities or the law enforcement paradigms, as it best safeguards humanitarian interests and ensures the least loss of civilian lives.

Keywords: Israeli-Palestinian conflict, self defense, terrorism, use of force

Procedia PDF Downloads 120
184 Assessment of Time-variant Work Stress for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The probability of human error may be low, but the associated risk can be enormous. Thus, for accident prevention, it is indispensable to analyze any factors that may raise the possibility of human error. Over the past decades, many research results have shown that the performance of human operators varies over time due to numerous factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. To date, many assessment tools have been developed to assess the stress levels of workers. However, it is questionable whether they can be utilized to anticipate human performance, which relates to human error possibility, because they were developed mainly from the viewpoint of mental health rather than industrial safety. The stress level of a person may rise or fall over working time; if such tools are to be applied for safety purposes, they should at least be able to assess this time-dependent variation. Therefore, this study compared their applicability for safety purposes. More than 10 work-stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be closely related to work stress. The results showed that most tools place their weights mainly on common organizational factors such as demands, supports, and relationships, in that sequence, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall counterplans within a PDCA cycle or risk management activities, which falls far short of practical human error prevention.
Thus, it was concluded that stress assessment tools developed mainly for mental health are impractical for safety purposes with respect to anticipating human performance, and that a new assessment tool is inevitable if one wants to assess stress levels in terms of human performance variation and accident prevention. As a practical countermeasure, this study proposes a new scheme for assessing the work stress level of a human operator as it varies over working time, which is closely related to the possibility of human error.

Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention

Procedia PDF Downloads 668
183 Russian ‘Active Measures’: An Applicable Supporting Tool for Russia`s Foreign Policy Objectives in the 21st Century

Authors: Håkon Riiber

Abstract:

This paper explores the extent to which Russian ‘Active Measures’ play a role in contemporary Russian foreign policy and in what way the legacy of the Soviet Union is still apparent in these practices. The analysis draws on a set of case studies from the 21st century to examine these aspects, showing which features of ‘Active Measures’ are old and which are new in the post-Cold War era. The paper highlights that the topic has gained significant academic and political interest in recent years, largely due to the aggressive posture of the Russian Federation on the world stage, exemplified by interventions in Estonia, Georgia, and Ukraine and interference in several democratic elections in the West. However, the paper argues that the long-term impact of these measures may have unintended implications for Russia. While Russia is unlikely to stop using ‘Active Measures’, increased awareness of how weaknesses, institutions, and other targets are exploited may lead to greater security measures and a better ability to identify and defend against these activities. The paper contends that Soviet-style ‘Active Measures’ from the Cold War era have been modernized and are now utilized to create an advantageous atmosphere for further exploitation in support of contemporary Russian foreign policy. It offers three key points to support this argument: the reenergized legacy of the Cold War era, the use of ‘Active Measures’ in a number of cases in the 21st century, and the applicability of ‘Active Measures’ to the Russian approach to foreign policy. The analysis reveals that while this is not a new Russian phenomenon, it is still oversimplified and inaccurately understood by the West, which may result in a decreased ability to defend against these activities and limit the unwarranted escalation of the ongoing security situation between the West and Russia. 
The paper concludes that the legacy of Soviet-era Active Measures continues to influence Russian foreign policy, and modern technological advances have only made them more applicable to the current political climate. Overall, this paper sheds light on the important issue of Russian ‘Active Measures’ and the role they play in contemporary Russian foreign policy. It emphasizes the need for increased awareness, understanding, and security measures to defend against these activities and prevent further escalation of the security situation between the West and Russia.

Keywords: Russian espionage, active measures, disinformation, Russian intelligence

Procedia PDF Downloads 97
182 Evaluating the Process of Biofuel Generation from Grass

Authors: Karan Bhandari

Abstract:

Almost a quarter of the Indian terrain is covered by grasslands. Grass, being a low-maintenance perennial crop, is abundant, and farmers are well acquainted with its nature, yield, and storage. The aim of this paper is to study and identify the applicability of grass as a source of biofuel. Anaerobic digestion is a well-recognized technology and is vital for harnessing biofuel from grass. Grass is a fibrous lignocellulosic material that can readily cause problems for parts in motion; it also has a tendency to float. This paper also deals with the ideal digester configuration for biogas generation from grass. The literature on optimal grass silage storage in accordance with biodigester specifications was analyzed intensively. Subsequently, two different digester systems were designed, fabricated, and analyzed. The first setup was a double-stage wet continuous arrangement, usually known as a Continuously Stirred Tank Reactor (CSTR). The second was a double-stage, double-phase system implementing Sequentially Fed Leach Beds with an Upflow Anaerobic Sludge Blanket (SLBR-UASB). Both methodologies were carried out on the same feedstock acquired from the same field. Examination of the grass silage was undertaken using biomethane potential values. The outcomes showed that the CSTR system produced about 450 liters of methane per kg of volatile solids at a detention period of 48 days, while the leach-bed system produced about 340 liters of methane per kg of volatile solids with a detention period of 28 days. The results showed that a CSTR designed exclusively for grass is extremely efficient in methane production, while the SLBR-UASB has significant potential to allow lower detention times with significant levels of methane production.
This technology has an immense future for research and development in India in terms of utilizing the grass crop as a non-conventional source of fuel.
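The reported figures also allow a quick back-of-envelope comparison of volumetric productivity, i.e., methane yield spread over detention time (a derived illustration, not a result reported in the abstract):

```python
# Methane yield (L CH4 per kg volatile solids) and detention time (days)
# as reported in the abstract for the two digester configurations.
cstr_yield, cstr_days = 450, 48
slbr_yield, slbr_days = 340, 28

cstr_rate = cstr_yield / cstr_days   # L CH4 per kg VS per day of detention
slbr_rate = slbr_yield / slbr_days

print(f"CSTR: {cstr_rate:.1f} L/kg VS/day over {cstr_days} days")
print(f"SLBR-UASB: {slbr_rate:.1f} L/kg VS/day over {slbr_days} days")
# On this crude measure, the SLBR-UASB trades roughly 24% lower total yield
# for roughly 30% higher daily throughput, consistent with the abstract's
# note that leach beds allow lower detention times.
```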

Keywords: biomethane potential values, bio digester specifications, continuously stirred tank reactor, upflow anaerobic sludge blanket

Procedia PDF Downloads 242
181 The Confounding Role of Graft-versus-Host Disease in Animal Models of Cancer Immunotherapy: A Systematic Review

Authors: Hami Ashraf, Mohammad Heydarnejad

Abstract:

Introduction: The landscape of cancer treatment has been revolutionized by immunotherapy, offering novel therapeutic avenues for diverse cancer types. Animal models play a pivotal role in the development and elucidation of these therapeutic modalities. Nevertheless, the manifestation of Graft-versus-Host Disease (GVHD) in such models poses significant challenges, muddling the interpretation of experimental data within the ambit of cancer immunotherapy. This study is dedicated to scrutinizing the role of GVHD as a confounding factor in animal models used for cancer immunotherapy, alongside proposing viable strategies to mitigate this complication. Method: Employing a systematic review framework, this study undertakes a comprehensive literature survey including academic journals in PubMed, Embase, and Web of Science databases and conference proceedings to collate pertinent research that delves into the impact of GVHD on animal models in cancer immunotherapy. The acquired studies undergo rigorous analysis and synthesis, aiming to assess the influence of GVHD on experimental results while identifying strategies to alleviate its confounding effects. Results: Findings indicate that GVHD incidence significantly skews the reliability and applicability of experimental outcomes, occasionally leading to erroneous interpretations. The literature surveyed also sheds light on various methodologies under exploration to counteract the GVHD dilemma, thereby bolstering the experimental integrity in this domain. Conclusion: GVHD's presence critically affects both the interpretation and validity of experimental findings, underscoring the imperative for strategies to curtail its confounding impacts. Current research endeavors are oriented towards devising solutions to this issue, aiming to augment the dependability and pertinence of experimental results. 
It is incumbent upon researchers to diligently consider and adjust for GVHD's effects, thereby enhancing the translational potential of animal model findings to clinical applications and propelling progress in the arena of cancer immunotherapy.

Keywords: graft-versus-host disease, cancer immunotherapy, animal models, preclinical model

Procedia PDF Downloads 49
180 Fuzzy Decision Making to the Construction Project Management: Glass Facade Selection

Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic

Abstract:

In this study, a fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets was capable of reflecting construction managers’ reliability level over subjective judgments, so that the robustness of the system can be achieved. An α-cut method was utilized for discretizing the fuzzy sets in the FLA. This method can communicate all uncertain information in the optimization process, taking the values of this information into account. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios related to various levels of economic aspects of valid decision making in construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing glass facade materials, variants were defined. The development of the FLA for CPM included the relevant construction project stakeholders, who were involved in defining the criteria used to evaluate each variant. Using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method, a comparison of the glass facade variants was conducted. In this way, a ranking of the variants according to their priority for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia, and the newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions regarding the material selection of a building's facade, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology over the old one is that it includes the subjective side of managers’ decisions, an inevitable factor in any decision making. 
The proposed approach can help construction projects managers to identify the desired type of glass facade according to their preference and practical conditions, as well as facilitate in-depth analyses of tradeoffs between economic efficiency and architectural design.
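For readers unfamiliar with DEMATEL, its core computation is compact enough to sketch. The snippet below shows the standard crisp steps on a hypothetical direct-influence matrix; the study's fuzzy variant would first reduce expert judgments to crisp values (e.g., via α-cuts and defuzzification) before these steps.

```python
import numpy as np

def dematel(D):
    """Core DEMATEL computation on a crisp direct-influence matrix D."""
    D = np.asarray(D, dtype=float)
    N = D / D.sum(axis=1).max()                 # normalize by largest row sum
    T = N @ np.linalg.inv(np.eye(len(D)) - N)   # total-relation matrix
    r, c = T.sum(axis=1), T.sum(axis=0)         # dispatched / received influence
    return r + c, r - c                         # prominence, relation

# Hypothetical direct-influence judgments for three facade criteria
# (e.g., cost, thermal performance, aesthetics) on a 0-4 scale.
prominence, relation = dematel([[0, 3, 2],
                                [1, 0, 3],
                                [2, 1, 0]])
```

Criteria with positive relation values act as causes and those with negative values as effects, while prominence indicates each criterion's overall importance in the ranking.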

Keywords: construction projects management, DEMATEL, fuzzy logic approach, glass façade selection

Procedia PDF Downloads 133
179 The Ethical Imperative of Corporate Social Responsibility Practice and Disclosure by Firms in Nigeria Delta Swamplands: A Qualitative Analysis

Authors: Augustar Omoze Ehighalua, Itotenaan Henry Ogiri

Abstract:

As a mono-product economy, Nigeria relies largely on oil revenues for its foreign exchange earnings, and the exploration activities of firms operating in the Niger Delta region have left in their wake tales of environmental degradation, poverty, and misery. This, no doubt, has created corporate social responsibility issues in the region. The focus of this research is a critical evaluation of the ethical response to Corporate Social Responsibility (CSR) practice by firms operating in the Nigeria Delta Swamplands. While CSR is becoming more popular in developed societies, with effective practice guidelines and reporting benchmarks, there is a relatively low level of awareness and only selective applicability of existing international guidelines to effectively support CSR practice in Nigeria. This study, having identified the lack of an institutional CSR framework, attempts to develop an ethically driven CSR transparency benchmark laced within a regulatory framework based on international best practices. The research adopts a qualitative methodology and makes use of primary data collected through semi-structured interviews conducted across the six core states of the Niger Delta region. More importantly, the study adopts an inductive, interpretivist philosophical paradigm that reveals deep phenomenological insights into what local communities, civil society, and government officials consider a good ethical benchmark for responsible CSR practice by organizations. Institutional theory provides the main theoretical foundation, complemented by stakeholder and legitimacy theories. NVivo software was used to analyze the data collected. This study shows that ethical responsibility is lacking in CSR practice by firms in the Niger Delta region of Nigeria. Furthermore, findings of the study indicate that key issues of environment, health and safety, human rights, and labour are fundamental to developing an effective CSR practice guideline for Nigeria. 
The study has implications for public policy formulation as well as managerial perspective.

Keywords: corporate social responsibility, CSR, ethics, firms, Niger-Delta Swampland, Nigeria

Procedia PDF Downloads 105
178 Comparing the SALT and START Triage System in Disaster and Mass Casualty Incidents: A Systematic Review

Authors: Hendri Purwadi, Christine McCloud

Abstract:

Triage is a complex decision-making process that aims to categorize a victim’s level of acuity and the need for medical assistance. Two common triage systems have been widely used in Mass Casualty Incidents (MCIs) and disaster situation are START (Simple triage algorithm and rapid treatment) and SALT (sort, asses, lifesaving, intervention, and treatment/transport). There is currently controversy regarding the effectiveness of SALT over START triage system. This systematic review aims to investigate and compare the effectiveness between SALT and START triage system in disaster and MCIs setting. Literatures were searched via systematic search strategy from 2009 until 2019 in PubMed, Cochrane Library, CINAHL, Scopus, Science direct, Medlib, ProQuest. This review included simulated-based and medical record -based studies investigating the accuracy and applicability of SALT and START triage systems of adult and children population during MCIs and disaster. All type of studies were included. Joana Briggs institute critical appraisal tools were used to assess the quality of reviewed studies. As a result, 1450 articles identified in the search, 10 articles were included. Four themes were identified by review, they were accuracy, under-triage, over-triage and time to triage per individual victim. The START triage system has a wide range and inconsistent level of accuracy compared to SALT triage system (44% to 94. 2% of START compared to 70% to 83% of SALT). The under-triage error of START triage system ranged from 2.73% to 20%, slightly lower than SALT triage system (7.6 to 23.3%). The over-triage error of START triage system was slightly greater than SALT triage system (START ranged from 2% to 53% compared to 2% to 22% of SALT). The time for applying START triage system was faster than SALT triage system (START was 70-72.18 seconds compared to 78 second of SALT). 
Consequently, the START triage system has a lower level of under-triage error and is faster than the SALT triage system in classifying victims of MCIs and disasters, whereas the SALT triage system is slightly more accurate and has a lower level of over-triage error. However, the magnitude of these differences is relatively small, and therefore the effect on patient outcomes is not significant. Hence, regardless of triage error, either the START or the SALT triage system is equally effective for triaging victims of disasters and MCIs.
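For readers unfamiliar with how START assigns its categories, the following is a minimal sketch of the standard textbook START decision logic; it is an illustration of the general algorithm, not code drawn from the reviewed studies, and real protocols add detail (e.g. capillary refill as an alternative perfusion check).

```python
def start_triage(can_walk, breathing, resp_rate, radial_pulse, obeys_commands):
    """Textbook START decision logic (illustrative sketch).

    Returns one of: 'minor', 'expectant', 'immediate', 'delayed'.
    `breathing` is assessed after repositioning the airway.
    """
    if can_walk:
        return "minor"       # green: walking wounded
    if not breathing:
        return "expectant"   # black: no spontaneous respiration
    if resp_rate > 30:
        return "immediate"   # red: respiratory distress
    if not radial_pulse:
        return "immediate"   # red: inadequate perfusion
    if not obeys_commands:
        return "immediate"   # red: altered mental status
    return "delayed"         # yellow: can wait for treatment
```

For example, a non-ambulatory victim who is breathing at 35 breaths per minute is tagged immediate regardless of pulse or mental status.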

Keywords: disaster, effectiveness, mass casualty incidents, START triage system, SALT triage system

Procedia PDF Downloads 128
177 Nanotechnology in Conservation of Artworks: TiO2-Based Nanocoatings for the Protection and Preservation of Stone Monuments

Authors: Sayed M. Ahmed, Sawsan S. Darwish, Nagib A. Elmarzugi, Mohammad A. Al-Dosari, Mahmoud A. Adam, Nadia A. Al-Mouallimi

Abstract:

The preservation of cultural heritage is a worldwide problem. Stone monuments represent an important part of this heritage, but due to their prevalently outdoor location, they are generally subject to a complex series of weathering and decay processes; in addition to physical and chemical factors, biological agents usually play an important role in deterioration phenomena. The aim of this paper is to experimentally verify the applicability and feasibility of titanium dioxide (TiO2) nanoparticles for the preservation of historical (architectural, monumental, archaeological) stone surfaces, enabling a reduction of the deterioration behaviours mentioned above. TiO2 nanoparticles dispersed in an aqueous colloidal suspension were applied directly onto travertine (marble and limestone are often used in historical and monumental buildings) by spray-coating in order to obtain a nanometric film on the stone samples. SEM coupled with EDX microanalysis (SEM-EDX) was used to obtain information on coating homogeneity, surface morphology before and after aging, and penetration depth of the TiO2 within the samples. Activity of the coated surface was evaluated with an accelerated UV aging test. Capillary water absorption, thermal aging, and colorimetric measurements were performed on coated and uncoated samples to evaluate their properties and estimate change of appearance through colour variation. Results show that TiO2 nanoparticles are a good candidate for coating applications on calcareous stone: good water repellence was observed on the samples after treatment. Analyses were carried out on untreated and freshly treated samples as well as after artificial aging. Colour measurements showed negligible variation between coated and uncoated stone, as well as after aging. Treated stone surfaces appear unaffected after 1000 hours of exposure to UV radiation, with no alteration of the original features.

Keywords: architectural and archaeological heritage, calcareous stone, photocatalysis, TiO2, self-cleaning, thermal aging

Procedia PDF Downloads 271
176 A Comparative Analysis of Innovation Maturity Models: Towards the Development of a Technology Management Maturity Model

Authors: Nikolett Deutsch, Éva Pintér, Péter Bagó, Miklós Hetényi

Abstract:

Strategic technology management has emerged and evolved in parallel with strategic management paradigms. It focuses on the opportunity for organizations operating mainly in technology-intensive industries to explore and exploit technological capabilities upon which competitive advantage can be obtained. As strategic technology management involves multiple functions within an organization, requires broad and diversified knowledge, and must be developed and implemented in line with business objectives to enable a firm’s profitability and growth, excellence in strategic technology management provides unique opportunities for organizations in terms of building a successful future. Accordingly, a framework supporting the evaluation of the technological readiness level of management can significantly contribute to developing organizational competitiveness through a better understanding of strategic-level capabilities and deficiencies in operations. In the last decade, several innovation maturity assessment models have appeared and become designated management tools that can serve as references for future practical approaches expected to be used by corporate leaders, strategists, and technology managers to understand and manage technological capabilities and capacities. The aim of this paper is to provide a comprehensive review of the state-of-the-art innovation maturity frameworks, to investigate the critical lessons learned from their application, to identify the similarities and differences among the models, and to identify the main aspects and elements valid for the field and critical functions of technology management. To this end, a systematic literature review was carried out considering the relevant papers and articles published in highly ranked international journals around the 27 most widely known innovation maturity models from four relevant digital sources. 
Key findings suggest that despite the diversity of the given models, there is still room for improvement regarding the common understanding of innovation typologies, the full coverage of innovation capabilities, and the generalist approach to the validation and practical applicability of the structure and content of the models. Furthermore, the paper proposes an initial structure by considering the maturity assessment of the technological capacities and capabilities - i.e., technology identification, technology selection, technology acquisition, technology exploitation, and technology protection - covered by strategic technology management.

Keywords: innovation capabilities, innovation maturity models, technology audit, technology management, technology management maturity models

Procedia PDF Downloads 56
175 Age Estimation from Upper Anterior Teeth by Pulp/Tooth Ratio Using Peri-Apical X-Rays among Egyptians

Authors: Fatma Mohamed Magdy Badr El Dine, Amr Mohamed Abd Allah

Abstract:

Introduction: Age estimation of individuals is one of the crucial steps in forensic practice. Traditional methods rely on the length of the diaphysis of the long bones of the limbs, epiphyseal-diaphyseal union, fusion of the primary ossification centers, and dental eruption. However, there is a growing need for precise and reliable methods to estimate age, especially in cases where dismembered corpses, burnt bodies, or putrefied or fragmented parts are recovered. Teeth are the hardest and most indestructible structures in the human body. In recent years, assessment of the pulp/tooth area ratio, as an indirect quantification of secondary dentine deposition, has received considerable attention. However, little work has been done in Egypt on the applicability of the pulp/tooth ratio for age estimation. Aim of the Work: The present work was designed to assess Cameriere’s method for age estimation from the pulp/tooth ratio of maxillary canines, central incisors, and lateral incisors in a sample from the Egyptian population, and to formulate regression equations to be used as population-based standards for age determination. Material and Methods: The present study was conducted on 270 peri-apical X-rays of maxillary canines, central incisors, and lateral incisors (collected from 131 males and 139 females aged between 19 and 52 years). The pulp and tooth areas were measured using the Adobe Photoshop software program, and the pulp/tooth area ratio was computed. Linear regression equations were determined separately for canines, central incisors, and lateral incisors. Results: A significant correlation was recorded between the pulp/tooth area ratio and chronological age. The linear regression analysis yielded coefficients of determination of R² = 0.824 for the canine, 0.588 for the central incisor, and 0.737 for the lateral incisor. Three regression equations were derived. Conclusion: The pulp/tooth ratio is a useful technique for estimating age among Egyptians. 
Additionally, the regression equation derived from the canines gave better results than those derived from the incisors.
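The method above amounts to fitting age as a linear function of the pulp/tooth area ratio and reporting R². A minimal sketch of that computation is shown below; the ratio and age values are invented for illustration only, since the study’s measured data and fitted coefficients are not given in the abstract.

```python
import numpy as np

# Hypothetical pulp/tooth area ratios and chronological ages (years).
# In the study, separate fits were made for canines, central and
# lateral incisors; this is one illustrative fit on made-up data.
ratios = np.array([0.04, 0.06, 0.08, 0.10, 0.12, 0.14])
ages   = np.array([50.0, 44.0, 38.0, 33.0, 27.0, 21.0])

slope, intercept = np.polyfit(ratios, ages, 1)  # age = intercept + slope * ratio
predicted = intercept + slope * ratios

# Coefficient of determination R^2.
ss_res = np.sum((ages - predicted) ** 2)
ss_tot = np.sum((ages - ages.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

Because secondary dentine deposition narrows the pulp with age, the fitted slope is negative: a smaller pulp/tooth ratio predicts an older individual.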

Keywords: age determination, canines, central incisors, Egypt, lateral incisors, pulp/tooth ratio

Procedia PDF Downloads 181
174 Design Development of Floating Performance Structure for Coastal Areas in the Maltese Islands

Authors: Rebecca E. Dalli Gonzi, Joseph Falzon

Abstract:

Background: Islands in the Mediterranean region offer opportunities for various industries to take advantage of versatile floating structures in coastal areas. In the context of dense land use, marine structures can contribute to both terrestrial and marine resource sustainability. Objective: The aim of this paper is to present and critically discuss an array of issues that characterize the design process of a floating structure for coastal areas, and to present the challenges and opportunities of providing such multifunctional and versatile structures around the Maltese coastline. Research Design: A three-tier research design commenced with a systematic literature review. Semi-structured interviews with stakeholders, including a naval architect, a marine engineer, and civil designers, formed the second stage, followed by a focus group with stakeholders in the design and construction of lightweight marine structures. The three-tier research design ensured triangulation of issues. All phases of the study were governed by research ethics. Findings: Findings were grouped into three main themes: excellence, impact, and implementation. These included design considerations, applications, and potential impacts on local industry. The literature on the design and construction of marine structures in the Maltese Islands presented multiple gaps in the application of marine structures for local industries. Weather conditions, seabed depth, and wave action placed limitations on the design capabilities of the structure. Conclusion: Water structures offer great potential, and the conclusions demonstrate the applicability of such designs for Maltese waters. There is still no such provision for multi-purpose use within Maltese coastal areas. The introduction of such facilities presents a range of benefits for visiting tourists and locals, offering a wide range of services to the tourism and marine industries. 
Construction costs and adverse weather conditions were amongst the main limitations that shaped the design capacities of the water structures.

Keywords: coastal areas, lightweight, marine structure, multi-purpose, versatile, floating device

Procedia PDF Downloads 160