Search results for: face detection algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8830

40 Sinhala Sign Language to Grammatically Correct Sentences using NLP

Authors: Anjalika Fernando, Banuka Athuraliya

Abstract:

This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage involves a Long Short-Term Memory (LSTM) deep learning model that recognizes and interprets SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in recognizing and translating SSL gestures. Building on the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactic structure of the generated sentences. By training the NMT model on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding correct translations, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. Addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired, and it lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system, developing a dedicated mobile application to enhance the accessibility and convenience of SSL translation on handheld devices, enhancing the current application for educational purposes so that individuals can learn and practice SSL more effectively, and enabling two-way communication for seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively.
By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.
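
Illustrative sketch (not from the paper): a minimal LSTM classifier of the kind described in the first stage, mapping a fixed-length sequence of hand-landmark frames to a sign label. The vocabulary size, 30-frame window, layer widths, and landmark feature count are assumptions for illustration, not values reported by the authors.

```python
# Minimal sketch of stage 1: an LSTM that classifies a gesture, given as a
# sequence of per-frame hand landmarks, into one of NUM_SIGNS SSL signs.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

NUM_SIGNS = 50              # hypothetical sign vocabulary size
FRAMES, FEATURES = 30, 126  # e.g. 30 frames x 2 hands x 21 landmarks x 3 coords

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(FRAMES, FEATURES)),
    LSTM(128),                                # summarizes the whole gesture
    Dense(64, activation="relu"),
    Dense(NUM_SIGNS, activation="softmax"),   # one probability per sign
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# X: (samples, FRAMES, FEATURES) landmark sequences; y: one-hot sign labels.
X = np.random.rand(8, FRAMES, FEATURES).astype("float32")
y = np.eye(NUM_SIGNS)[np.random.randint(0, NUM_SIGNS, 8)]
model.fit(X, y, epochs=1, verbose=0)          # placeholder training call
```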

Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT

Procedia PDF Downloads 74
39 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator

Authors: Kaushikk Iyer, Richard M Hall, David Keeling

Abstract:

Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better and more advanced control methods that can be developed and tested rigorously but safely before deployment on the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar link powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, and sinusoidal motion and load profiles for wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to PID, a fuzzy logic controller (FLC) was also developed that acts as a supervisory controller. The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This fuzzy-PID combination is novel for the wear-testing application in spinal simulators and demonstrated superior performance against PID when tested across a spectrum of frequencies. Results obtained were successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms that can potentially test even profiles of patients performing various daily living activities.
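
A minimal sketch of the fuzzy-supervised PID idea described above: a PID loop tracks an ISO 18192-1-style sinusoidal angle profile while a coarse fuzzy rule layer rescales the proportional gain as the tracking error grows. The gains, membership breakpoints, profile amplitude, and the first-order toy plant are illustrative assumptions, not the paper's actual controller or simulator model.

```python
# Toy fuzzy-PID loop: the fuzzy layer acts as a supervisory gain scheduler.
import numpy as np

def fuzzy_gain_factor(err_mag):
    """Crude triangular memberships: small / medium / large tracking error."""
    small = max(0.0, 1 - err_mag / 0.5)
    large = min(1.0, max(0.0, (err_mag - 0.5) / 1.0))
    medium = max(0.0, 1 - small - large)
    # Defuzzify: larger errors push the proportional gain up.
    return small * 0.8 + medium * 1.0 + large * 1.4

kp, ki, kd = 2.0, 0.5, 0.05          # hypothetical baseline PID gains
integral, prev_err, dt = 0.0, 0.0, 0.001
position = 0.0                       # simplistic first-order "plant" state

t = np.arange(0, 2, dt)
setpoint = 7.5 * np.sin(2 * np.pi * 1.0 * t)   # +/-7.5 deg, 1 Hz profile
for sp in setpoint:
    err = sp - position
    integral += err * dt
    deriv = (err - prev_err) / dt
    scale = fuzzy_gain_factor(abs(err))        # supervisory fuzzy layer
    u = scale * (kp * err) + ki * integral + kd * deriv
    position += u * dt                         # toy plant integration
    prev_err = err
```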

Keywords: fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator

Procedia PDF Downloads 150
38 Blue Economy and Marine Mining

Authors: Fani Sakellariadou

Abstract:

The Blue Economy includes all marine-based and marine-related activities, corresponding to established, emerging, and as yet unborn ocean-based industries. Seabed mining is an emerging marine-based activity whose operations depend particularly on cutting-edge science and technology. The 21st century will face a resources crisis as a consequence of world population growth and rising standards of living. The natural capital stored in the global ocean is decisive for its ability to provide a wide range of sustainable ecosystem services. Seabed mineral deposits have been identified as having high potential for critical elements and base metals, which play a crucial role in the fast evolution of green technologies. The major categories of marine mineral deposits are deep-sea deposits, including cobalt-rich ferromanganese crusts, polymetallic nodules, phosphorites, and deep-sea muds, as well as shallow-water deposits, including marine placers. Seabed mining operations may take place within the continental shelf areas of nation-states. In international waters, the International Seabed Authority (ISA) has entered into 15-year contracts for deep-seabed exploration with 21 contractors. These contracts cover polymetallic nodules (18 contracts), polymetallic sulfides (7 contracts), and cobalt-rich ferromanganese crusts (5 contracts). Exploration areas are located in the Clarion-Clipperton Zone, the Indian Ocean, the Mid-Atlantic Ridge, the South Atlantic Ocean, and the Pacific Ocean. Potential environmental impacts of deep-sea mining include habitat alteration, sediment disturbance, plume discharge, release of toxic compounds, light and noise generation, and air emissions. These could cause burial and smothering of benthic species, health problems for marine species, biodiversity loss, reduced photosynthesis, behavioral change and masking of acoustic communication for mammals and fish, bioaccumulation of heavy metals up the food web, a decrease in dissolved oxygen content, and climate change. An important concern related to deep-sea mining is our knowledge gap regarding deep-sea bio-communities. The ecological consequences for the remote, unique, fragile, and little-understood deep-sea ecosystems and their inhabitants are still largely unknown. The blue economy conceptualizes oceans as developing spaces supplying socio-economic benefits for current and future generations, but also protecting, supporting, and restoring biodiversity and ecological productivity. In that sense, holistic management should be applied and the impacts of marine mining on ecosystem services assessed, including the categories of provisioning, regulating, supporting, and cultural services. The variety in environmental parameters, the range of sea depths, the diversity in the characteristics of marine species, and the possible proximity to other existing maritime industries produce a wide span of marine mining impacts on the ability of ecosystems to support people and nature. In conclusion, the use of the untapped potential of the global ocean demands a responsible and sustainable attitude. Moreover, there is a need to change our lifestyle and move beyond the philosophy of single use. Living in a throw-away society based on a linear approach to resource consumption, humans are putting too much pressure on the natural environment. By applying modern, sustainable, and eco-friendly approaches according to the principles of the circular economy, substantial natural resource savings can be achieved.
Acknowledgement: This work is part of the MAREE project, financially supported by the Division VI of IUPAC. This work has been partly supported by the University of Piraeus Research Center.

Keywords: blue economy, deep-sea mining, ecosystem services, environmental impacts

Procedia PDF Downloads 55
37 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound, in the first case, and the friction of the material passing through the die, in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate the inputs, that is, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and a final pass is made through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) to find, by calibration, the input values which result in a given friction on die 2. Different machine learning approaches were tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous-valued output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between the expected and real friction on die 2). Modeling the error behavior using explicative rules helps improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (at most 5 minutes for this data), but it can be done offline just once. The calibration step is much faster, obtaining in under one minute a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. Fast processing times and high precision have been achieved, and these can be further improved by using heuristics to guide the Gaussian calibration. The error behavior has been modeled to help improve the overall process understanding. This is relevant for the quick optimal set-up of many different industrial processes that use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
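
A minimal sketch of the calibration step described above: draw Gaussian candidates for each input (using its observed mean and standard deviation) and keep the candidate whose predicted output is closest to the target. The Kernel Ridge regressor, synthetic data, and target value are stand-ins, not the project's actual process model.

```python
# Gaussian random-search calibration against a trained process model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # stand-in process inputs
y = X @ np.array([1.5, -2.0, 0.7]) + 0.1 * rng.normal(size=200)
model = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, y)  # trained model

mu, sigma = X.mean(axis=0), X.std(axis=0)
target = 1.0                             # desired tension / friction value
best_x, best_gap = None, np.inf
for _ in range(5000):
    cand = rng.normal(mu, sigma)         # one Gaussian draw per input
    gap = abs(model.predict(cand[None, :])[0] - target)
    if gap < best_gap:                   # keep the closest fit so far
        best_x, best_gap = cand, gap
print(best_x, best_gap)                  # inputs achieving ~the target output
```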

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 216
36 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has a history of its own growth, which means a succession of different concepts and writing techniques. The trajectory of settling this phenomenon does not necessarily follow chronology: there are highly complex primary stages and advanced stages of returning to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate from the (unintentional) differences/'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible rhythmic and non-periodic/immeasurable framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, the heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the elements of the sound vocabulary – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, adding to the 'classic' pathway of writing typologies (monody – polyphony – homophony), heterophony – applied equally to structures of modal, serial or synthetic vocabulary – necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu experimented – along with the mathematical organization of heterophony according to his own original methods – with the possibility of extrapolating this phenomenon onto the macrostructural plane, arriving in this way at the unique form of 'synchrony'. Founded on the coincidentia oppositorum principle (involving the 'one-multiple' binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm articulating two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy the composer would cultivate practically throughout his creative output (see the works: Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case for the level of complexity achieved by this type of vertical syntax in twentieth-century music.

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 319
35 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs in the review facility, reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences. Reviewer performance, in terms of accuracy and efficiency of reviews within the available time frame, is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know which workload assessment methodologies will provide reliable and valid data to optimize resourcing for on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX) and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session and a short employee engagement survey at the end of the study period that captured impacts on job satisfaction and motivation. The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision, that is, with accurately detected defects rather than false positives. Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of detected existing defects. Review speed was significantly correlated with false negatives: with an increase in review speed, accuracy declined. On the other hand, review speed correlated with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates in accordance with the efficiency-thoroughness trade-off.
Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures, to ensure that recommendations for work system optimization are evidence-based and reliable.
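
As an illustration of the multi-method analysis described above, the sketch below pairs per-session NASA TLX subscales with objective measures and computes Pearson correlations. The column names follow the abstract, but all values are invented placeholders, not study data.

```python
# Correlating subjective NASA TLX ratings with objective session measures.
import pandas as pd

sessions = pd.DataFrame({
    "mental_demand":   [55, 70, 40, 65],          # NASA TLX subscale (0-100)
    "temporal_demand": [50, 75, 35, 60],
    "review_speed":    [4.2, 5.1, 3.6, 4.8],      # km reviewed per hour
    "sensitivity":     [0.91, 0.84, 0.95, 0.87],  # detected / existing defects
    "false_negatives": [3, 7, 1, 5],
})
print(sessions.corr(method="pearson").round(2))   # workload vs. performance
```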

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 92
34 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy

Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells

Abstract:

Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes progressive muscle necrosis, which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a 'critical period' between 4 and 6 weeks exists in the MDX mouse, when there is extensive muscle damage that is largely subclinical but evident with sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to attempt to aggravate the muscle damage in the MDX mouse and to create a wider, more readily translatable and discernible therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime that consisted of twice-weekly treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance, and locomotor activity were measured after the 'critical period' at around 10 weeks of age and again at 14 weeks, 6 months, 9 months, and 12 months of age. In addition, kinematic gait analysis was employed using a novel analysis algorithm in order to compare changes in gait and fine motor skills in diseased exercised MDX mice relative to exercised wild-type mice and non-exercised MDX mice. A morphological and metabolic profile (including lipid profile) of the most severely affected muscles, the gastrocnemius and the tibialis anterior, was also measured at the same time intervals. Results indicate that by aggravating or exacerbating the underlying muscle damage in the MDX mouse through exercise, a more pronounced and severe phenotype comes to light, and this can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured by open field is not apparent at younger ages nor during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS) that we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in the imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that the chronic exercise regime exacerbates the muscle damage beyond the critical period and that the ability to measure this through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse.

Keywords: gait, kinematic analysis, muscular dystrophy, neuromuscular disease

Procedia PDF Downloads 259
33 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships

Authors: Vijaya Dixit, Aasheesh Dixit

Abstract:

The shipbuilding industry operates in an Engineer Procure Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both learnings of a shipyard and incorporate the learning curve effect in project scheduling and materials procurement to improve project performance. Extant literature supports the existence of such learnings in an organization. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed. Thus, the material requirement schedule of every next ship differs from that of its previous ship. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for the manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the trade-off between transportation cost and inventory holding and shortage costs while satisfying budget constraints of the various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probabilistic distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides ordering dates of items and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insights about when and to what degree it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when and to what degree to practice distinctive procurement for individual ships.
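
The schedule compression invoked above can be made concrete with a Wright-type learning curve, under which the duration of a repeated activity on the n-th sister ship is T_n = T_1 · n^b with b = log2(learning rate). The 90% learning rate and first-ship duration below are assumed values for illustration, not parameters from the paper's SMIP model.

```python
# Wright-type learning curve: durations shrink ship by ship, compressing
# each subsequent ship's material requirement schedule.
import math

def activity_duration(t_first, ship_index, learning_rate=0.90):
    b = math.log2(learning_rate)          # 90% curve -> b ~= -0.152
    return t_first * ship_index ** b

t1 = 100.0                                # hours for the first ship's sub-block
for n in range(1, 6):
    print(n, round(activity_duration(t1, n), 1))
# 1 100.0, 2 90.0, 3 84.6, 4 81.0, 5 78.3 (hours)
```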

Keywords: learning curve, materials management, shipbuilding, sister ships

Procedia PDF Downloads 476
32 Urban Heat Islands Analysis of Matera, Italy Based on the Change of Land Cover Using Satellite Landsat Images from 2000 to 2017

Authors: Giuseppina Anna Giorgio, Angela Lorusso, Maria Ragosta, Vito Telesca

Abstract:

Climate change is a major public health threat due to the effects of extreme weather events on human health and on quality of life in general. In this context, mean temperatures are increasing, particularly extreme temperatures, with heat waves becoming more frequent, more intense, and longer lasting. In many cities, extreme heat waves have drastically increased, giving rise to the so-called Urban Heat Island (UHI) phenomenon. In an urban centre, maximum temperatures may be up to 10 °C warmer due to different local atmospheric conditions. UHI occurs in metropolitan areas as a function of the population size and density of a city. It consists of a significant difference in temperature compared to the rural/suburban areas. Increasing industrialization and urbanization have intensified this phenomenon, and it has recently also been detected in small cities. Weather conditions and land use are among the key parameters in the formation of UHI. In particular, the surface urban heat island is directly related to temperatures, land surface types, and surface modifications. The present study concerns a UHI analysis of the city of Matera (Italy) based on the analysis of temperature and of change in land use and land cover, using Corine Land Cover maps and satellite Landsat images. Matera, located in Southern Italy, has a typical Mediterranean climate with mild winters and hot and humid summers. Moreover, Matera has been awarded the international title of the 2019 European Capital of Culture. Matera represents a significant example of vernacular architecture. The structure of the city is articulated by a vertical succession of dug layers, sometimes excavated or partly excavated and partly built, according to the original shape and height of the calcarenitic slope. In this study, two meteorological stations were selected: MTA (MaTera Alsia, in the industrial zone) and MTCP (MaTera Civil Protection, a suburban area located in a green zone). In order to evaluate the increase in temperatures (in terms of UHI occurrences) over time and to evaluate the effect of land use on weather conditions, the climate variability of temperatures for both stations was explored. Results show that the UHI phenomenon is growing in the city of Matera, with an increase of maximum temperature values at the local scale. Subsequently, a spatial analysis was conducted with Landsat satellite images. Four dates in the summer period were selected (27/08/2000, 27/07/2006, 11/07/2012, 02/08/2017), using Landsat 7 ETM+ for 2000, 2006, and 2012, and Landsat 8 OLI/TIRS for 2017. In order to estimate the LST, the Mono Window Algorithm was applied. The increasing trend of LST values at the spatial scale was thereby verified, in accordance with the results obtained at the local scale. Finally, the analysis of land use maps over the years, together with the LST and/or the maximum temperatures measured, shows that the development of the industrialized area produces a corresponding increase in temperatures and consequently a growth in UHI.
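
A simplified sketch of deriving LST from a Landsat 8 TIRS band-10 array: digital numbers are converted to top-of-atmosphere radiance, then to brightness temperature, then corrected for emissivity. The full Mono Window Algorithm used in the study additionally corrects for atmospheric transmittance and effective mean atmospheric temperature; the radiometric constants below are typical MTL-file values, and the emissivity is an assumed figure.

```python
# DN -> TOA radiance -> brightness temperature -> emissivity-corrected LST.
import numpy as np

ML, AL = 3.342e-4, 0.1            # band-10 radiance rescaling (from MTL file)
K1, K2 = 774.8853, 1321.0789      # band-10 thermal conversion constants
emissivity = 0.97                 # assumed surface emissivity (e.g. via NDVI)

def land_surface_temperature(dn):
    radiance = ML * dn + AL                       # TOA spectral radiance
    bt = K2 / np.log(K1 / radiance + 1.0)         # brightness temp (Kelvin)
    lam, rho = 10.895e-6, 1.438e-2                # band-10 wavelength; h*c/k
    return bt / (1.0 + (lam * bt / rho) * np.log(emissivity))

dn = np.array([[21500, 22300], [23000, 24100]])   # toy DN values
print(land_surface_temperature(dn) - 273.15)      # LST in degrees Celsius
```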

Keywords: climate variability, land surface temperature, LANDSAT images, urban heat island

Procedia PDF Downloads 99
31 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation

Authors: Maria Lazari, Lorenzo Sanavia

Abstract:

Mechanics of unsaturated soils has been an active field of research in the last decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems, e.g., instability of slopes and cuts due to heavy rainfalls. A large number of constitutive models can now be found in the literature that consider fundamental issues associated with unsaturated soil behaviour, like the volume change and shear strength behaviour with suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting depending on the stress level, and it is also possible that a soil might experience a reversal in the volumetric behaviour during wetting. Shear strength of soils also changes dramatically with changes in the degree of saturation, and a related engineering problem is slope failure caused by rainfall. Several state-of-the-art reviews of the topic have appeared over the last years, usually providing a thorough discussion of the stress state, the advantages and disadvantages of specific constitutive models, as well as the latest developments in the area of unsaturated soil modelling. However, only a few studies have focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated on the basis of experimental data. The model constitutes an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy-conjugated stress variables in the framework of superposed continua. The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the same conceptual structure as the elastoplastic laws proposed to deal with bonded geomaterials subject to weathering or diagenesis and is capable of modelling several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation is enhanced by the incorporation of density-dependent stiffness and hardening coefficients in order to allow the modelling of the pycnotropic behaviour of granular materials with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by the PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison between the model and a series of experimental data reported by different authors is presented to validate the model and illustrate its capability. After the validation, the effectiveness of the viscoplastic model is displayed by numerical simulations of a partially saturated slope failure at the laboratory scale, and the effect of viscosity and degree of saturation on the slope's stability is discussed.
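
A hypothetical one-dimensional sketch of the Perzyna-type overstress update that underlies viscoplastic models of this general family: the viscoplastic strain rate grows with the overstress above the yield surface, producing rate-dependent creep and relaxation. The paper's actual constitutive functions (suction-dependent yield surface, density-dependent hardening, multi-axial stress state) are far richer; all parameters below are illustrative assumptions.

```python
# 1-D Perzyna viscoplastic relaxation toward a rate-dependent stress level.
E, sigma_y = 50e3, 100.0        # elastic modulus [kPa], yield stress [kPa]
gamma, N = 1e-3, 1.5            # fluidity [1/s], overstress exponent
dt, eps_rate = 1.0, 1e-4        # time step [s], applied strain rate [1/s]

sigma, eps_vp = 0.0, 0.0
for _ in range(2000):
    f = sigma - sigma_y                          # 1-D overstress function
    vp_rate = gamma * max(f / sigma_y, 0.0)**N   # Perzyna flow rule
    eps_vp += vp_rate * dt
    sigma += E * (eps_rate - vp_rate) * dt       # elastic response of the rest
# Stress settles above sigma_y at a level set by the applied strain rate.
print(round(sigma, 2))
```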

Keywords: PLAXIS software, slope, unsaturated soils, viscoplasticity

Procedia PDF Downloads 198
30 Influence of the Local External Pressure on Measured Parameters of Cutaneous Microcirculation

Authors: Irina Mizeva, Elena Potapova, Viktor Dremin, Mikhail Mezentsev, Valeri Shupletsov

Abstract:

Local tissue perfusion is regulated by the microvascular tone, which is under the control of a number of physiological mechanisms. Laser Doppler flowmetry (LDF) together with wavelet analysis is the most commonly used technique to study the regulatory mechanisms of cutaneous microcirculation. External factors such as temperature, local pressure of the probe on the skin, etc., influence the blood flow characteristics and are used as physiological tests to evaluate microvascular regulatory mechanisms. Local probe pressure influences the microcirculation parameters measured by optical methods: diffuse reflectance spectroscopy, fluorescence spectroscopy, and LDF. Therefore, further study of probe pressure effects can be useful to improve the reliability of optical measurement. During pressure tests, the variation of the mean perfusion measured by means of LDF is usually estimated. Additional information concerning the physiological mechanisms of the vascular tone regulation system in response to local pressure can be obtained using spectral analysis of LDF samples. The aim of the present work was to develop a protocol and data-processing algorithm appropriate for studying the physiological response to the local pressure test. Involving 6 subjects (20±2 years) and providing 5 measurements for every subject, we estimated the intersubject and intergroup variability of the response of both the averaged and oscillating parts of the LDF sample to external surface pressure. The final purpose of the work was to find special features which can further be used in wider clinical studies. The cutaneous perfusion measurements were carried out with a LAKK-02 (SPE LAZMA Ltd., Russia); the skin loading was provided by an originally designed device which allows one to distribute the pressure around the LDF probe. The probe was installed on the dorsal part of the distal phalanx of the index finger. We collected measurements continuously for one hour and varied the loading from 0 to 180 mmHg stepwise with a step duration of 10 minutes. Further, we post-processed the samples using the wavelet transform and traced the energy of oscillations in five frequency bands over time. Weak loading leads to pressure-induced vasodilation, so one should take into account that the perfusion measured under pressure conditions will be overestimated. On the other hand, we revealed a decrease in endothelial-associated fluctuations. Further loading (88 mmHg) induces amplification of pulsations in all frequency bands. We assume that such loading leads to a higher number of closed capillaries, a higher input of arterioles in the LDF signal and, as a consequence, more vivid oscillations, which are mainly formed in arterioles. External pressure higher than 144 mmHg leads to a decrease of the oscillating components; after removal of the loading, a very rapid restoration of the tissue perfusion takes place. In this work, we have demonstrated that local skin loading influences the microcirculation parameters measured by optical techniques; this should be taken into account while developing portable electronic devices. The proposed protocol of local loading allows one to evaluate pressure-induced vasodilation as well as to trace the dynamics of blood flow oscillations. This study was supported by the Russian Science Foundation under project N 18-15-00201.
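
A minimal sketch of the post-processing described above: a continuous wavelet transform of the LDF perfusion signal, with energy summed inside the standard microvascular frequency bands. The band limits follow the usual LDF literature (endothelial through cardiac); the perfusion signal and sampling rate are synthetic stand-ins, not recorded data.

```python
# CWT band-energy analysis of an LDF-style perfusion signal.
import numpy as np
import pywt

fs = 20.0                                  # assumed LDF sampling rate [Hz]
t = np.arange(0, 600, 1 / fs)
signal = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 0.1 * t)

freqs_of_interest = np.geomspace(0.005, 2.0, 120)
scales = pywt.central_frequency("morl") * fs / freqs_of_interest
coef, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

bands = {"endothelial": (0.0095, 0.021), "neurogenic": (0.021, 0.052),
         "myogenic": (0.052, 0.145), "respiratory": (0.145, 0.6),
         "cardiac": (0.6, 2.0)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(name, np.mean(coef[mask] ** 2))  # mean oscillation energy per band
```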

Keywords: blood microcirculation, laser Doppler flowmetry, pressure-induced vasodilation, wavelet analysis

Procedia PDF Downloads 124
29 A Computer-Aided System for Tooth Shade Matching

Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan

Abstract:

Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists' visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices. Also, inconsistencies may arise between results acquired by devices with different measurement principles. It is therefore necessary to search for new methods for the dental shade matching process. One computer-aided alternative, the digital camera, has developed rapidly up to today. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was recommended to compare the color of shade tabs captured by a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used in combination. Recently used feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended with color information by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor used in combination with a shape descriptor does not need to contain any spatial information, local histograms can be used. This local color histogram method remains reliable under photometric changes, geometric changes, and variation in image quality. Accordingly, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (kNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or the matching. In this study, SVMs are used as classifiers for color determination and shade matching. Finally, the experimental results of this method will be compared with other recent studies. It is concluded from the study that the proposed method is a remarkable development in computer-aided tooth shade determination systems.
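
A minimal sketch of the descriptor pipeline described above: SIFT keypoint descriptors (shape) concatenated with a local color histogram around each keypoint, pooled per image and fed to an SVM. The patch size, histogram bins, and average pooling are simplifying assumptions, and real Color-SIFT formulations differ in detail; this is an illustration, not the paper's exact implementation.

```python
# Concatenated shape (SIFT) + local color histogram descriptors, per image.
import cv2
import numpy as np
from sklearn.svm import SVC

def color_sift_descriptor(bgr_image, patch=16, bins=8):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, shape_desc = sift.detectAndCompute(gray, None)
    if shape_desc is None:
        return None
    feats = []
    for kp, sd in zip(keypoints, shape_desc):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        roi = bgr_image[max(0, y - patch):y + patch,
                        max(0, x - patch):x + patch]
        hist = cv2.calcHist([roi], [0, 1, 2], None, [bins] * 3,
                            [0, 256] * 3).flatten()
        hist /= hist.sum() + 1e-9                  # local color histogram
        feats.append(np.concatenate([sd, hist]))   # "Color-SIFT" vector
    return np.mean(feats, axis=0)                  # average-pool per image

# images: list of tooth photos; labels: shade-tab classes (e.g. "A1", "B2"):
# X = np.stack([color_sift_descriptor(im) for im in images])
# clf = SVC(kernel="rbf").fit(X, labels)
```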

Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction

Procedia PDF Downloads 405
28 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
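
A compact sketch of the PSO-ANN coupling described above: each particle encodes two MLP hyperparameters (L2 penalty and initial learning rate, on a log scale), and a particle's fitness is its validation MSE. The abstract does not specify exactly which ANN parameters the PSO optimized, so this parameterization, the swarm constants, and the synthetic data are all assumptions for illustration.

```python
# PSO tuning of an MLP cost-prediction model against validation MSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                 # stand-in project features
y = X @ np.array([3.0, -1.0, 2.0, 0.5, 1.5]) + rng.normal(scale=0.3, size=300)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

def fitness(p):
    alpha, lr = 10.0 ** p                     # decode log-scaled parameters
    m = MLPRegressor(hidden_layer_sizes=(32,), alpha=alpha,
                     learning_rate_init=lr, max_iter=300,
                     random_state=0).fit(Xtr, ytr)
    return mean_squared_error(yva, m.predict(Xva))

n, w, c1, c2 = 10, 0.7, 1.5, 1.5              # swarm size, inertia, accel.
pos = rng.uniform([-5, -4], [0, -1], size=(n, 2))   # log10(alpha), log10(lr)
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(15):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f                      # update personal/global bests
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]
print(10.0 ** gbest, pbest_f.min())           # best hyperparameters and MSE
```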

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 12
27 Settings of Conditions Leading to Reproducible and Robust Biofilm Formation in vitro in Evaluation of Drug Activity against Staphylococcal Biofilms

Authors: Adela Diepoltova, Klara Konecna, Ondrej Jandourek, Petr Nachtigal

Abstract:

A loss of control over antibiotic-resistant pathogens has become a global issue due to severe and often untreatable infections. This state is reflected in complicated treatment, health costs, and higher mortality. All these factors emphasize the urgent need for the discovery and development of new anti-infectives. Among the most common pathogens associated with the phenomenon of antibiotic resistance are bacteria of the genus Staphylococcus. These bacterial agents have developed several mechanisms against the effect of antibiotics; one of them is biofilm formation. In staphylococci, biofilms are associated with infections such as endocarditis, osteomyelitis, catheter-related bloodstream infections, etc. To the authors' best knowledge, no validated and standardized methodology for evaluating candidate compound activity against staphylococcal biofilms exists. A variety of protocols for in vitro drug activity testing has been suggested, yet there are often fundamental differences between them. Based on our experience, a key methodological step that leads to credible results is to form a robust biofilm with appropriate attributes, such as firm adherence to the substrate, a complex arrangement in layers, and the presence of an extracellular polysaccharide matrix. At first, for the purpose of drug antibiofilm activity evaluation, the focus was put on the various conditions (supplementation of cultivation media with human plasma/fetal bovine serum, shaking mode, density of the initial inoculum) that should lead to reproducible and robust in vitro staphylococcal biofilm formation in the microtiter plate model. Three model staphylococcal reference strains were included in the study: Staphylococcus aureus (ATCC 29213), methicillin-resistant Staphylococcus aureus (ATCC 43300), and Staphylococcus epidermidis (ATCC 35983). The total biofilm biomass was quantified using the Christensen method with crystal violet, and results obtained from at least three independent experiments were statistically processed. Attention was also paid to the viability of the biofilm-forming staphylococcal cells and the presence of the extracellular polysaccharide matrix. The conditions that led to robust biofilm biomass formation with the attributes mentioned above were then applied by introducing an alternative method analogous to the commercially available test system, the Calgary Biofilm Device. In this test system, biofilms are formed on pegs incorporated into the lid of the microtiter plate. This system provides several advantages (in situ detection and quantification of biofilm microbial cells that have retained their viability after drug exposure). Based on our preliminary studies, it was found that attention should also be paid to the peg surface and the substrate on which the bacterial biofilms are formed. Therefore, further optimization steps were introduced: the surface of the pegs was coated with human plasma, fetal bovine serum, or L-polylysine, and the willingness of the bacteria to adhere and form biofilm was subsequently monitored. In conclusion, suitable conditions were revealed that lead to the formation of reproducible, robust staphylococcal biofilms in vitro, both for the microtiter plate model and for the system analogous to the Calgary Biofilm Device. The robustness and typical slime texture could be detected visually. Likewise, analysis by confocal laser scanning microscopy revealed a complex three-dimensional arrangement of biofilm-forming organisms surrounded by an extracellular polysaccharide matrix.

Keywords: anti-biofilm drug activity screening, in vitro biofilm formation, microtiter plate model, the Calgary biofilm device, staphylococcal infections, substrate modification, surface coating

Procedia PDF Downloads 131
26 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike other self-driving vehicles, which are usually developed to operate with other vehicles and reside only on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering, and computer science are working together to attack the problem from different perspectives (hardware, software, and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI interface for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent landmarks and the edges represent paths that the car should follow with some designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm, and D* Lite will be explored to efficiently recompute the path when there are any changes to the map. CATE shall avoid any static obstacles and walking pedestrians within some safe distance. Unlike traveling along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of the pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work under GPS-denied conditions. CATE relies on its GPS to give its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that allows the fusion of data from multiple sensors (such as GPS, IMU, and odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily get degraded or blocked on campus due to high-rise buildings or trees; the UKF can also help here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
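
A minimal sketch of the default route planner described above: A* over a spatial graph whose vertices are campus landmarks with (x, y) coordinates and whose edges are walkable paths. The toy graph and coordinates are invented for illustration; the straight-line distance used as the heuristic is admissible for this kind of walk-path planning.

```python
# A* route search over a toy campus landmark graph.
import heapq
import math

coords = {"gate": (0, 0), "library": (2, 1), "quad": (3, 3), "lab": (5, 3)}
edges = {"gate": ["library"], "library": ["gate", "quad"],
         "quad": ["library", "lab"], "lab": ["quad"]}

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    frontier = [(0.0, start, [start])]     # (f = g + h, node, path so far)
    g = {start: 0.0}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in edges[node]:
            new_g = g[node] + dist(node, nxt)
            if new_g < g.get(nxt, math.inf):
                g[nxt] = new_g
                # Straight-line distance to the goal as the heuristic.
                heapq.heappush(frontier,
                               (new_g + dist(nxt, goal), nxt, path + [nxt]))
    return None

print(a_star("gate", "lab"))   # ['gate', 'library', 'quad', 'lab']
```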

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 117
25 Results concerning the University: Industry Partnership for a Research Project Implementation (MUROS) in the Romanian Program Star

Authors: Loretta Ichim, Dan Popescu, Grigore Stamatescu

Abstract:

The paper reports the collaboration between a top university from Romania and three companies for the implementation of a research project in a multidisciplinary domain, focusing on the impact and benefits both for the education and industry. The joint activities were developed under the Space Technology and Advanced Research Program (STAR), funded by the Romanian Space Agency (ROSA) for a university-industry partnership. The context was defined by linking the European Space Agency optional programs, with the development and promotion national research, with the educational and industrial capabilities in the aeronautics, security and related areas by increasing the collaboration between academic and industrial entities as well as by realizing high-level scientific production. The project name is Multisensory Robotic System for Aerial Monitoring of Critical Infrastructure Systems (MUROS), which was carried 2013-2016. The project included the University POLITEHNICA of Bucharest (coordinator) and three companies, which manufacture and market unmanned aerial systems. The project had as main objective the development of an integrated system for combined ground wireless sensor networks and UAV monitoring in various application scenarios for critical infrastructure surveillance. This included specific activities related to fundamental and applied research, technology transfer, prototype implementation and result dissemination. The core area of the contributions laid in distributed data processing and communication mechanisms, advanced image processing and embedded system development. Special focus is given by the paper to analyzing the impact the project implementation in the educational process, directly or indirectly, through the faculty members (professors and students) involved in the research team. Three main directions are discussed: a) enabling students to carry out internships at the partner companies, b) handling advanced topics and industry requirements at the master's level, c) experiments and concept validation for doctoral thesis. The impact of the research work (as the educational component) developed by the faculty members on the increasing performances of the companies’ products is highlighted. The collaboration between university and companies was well balanced both for contributions and results. The paper also presents the outcomes of the project which reveals the efficient collaboration between high education and industry: master thesis, doctoral thesis, conference papers, journal papers, technical documentation for technology transfer, prototype, and patent. The experience can provide useful practices of blending research and education within an academia-industry cooperation framework while the lessons learned represent a starting point in debating the new role of advanced research and development performing companies in association with higher education. This partnership, promoted at UE level, has a broad impact beyond the constrained scope of a single project and can develop into long-lasting collaboration while benefiting all stakeholders: students, universities and the surrounding knowledge-based economic and industrial ecosystem. Due to the exchange of experiences between the university (UPB) and the manufacturing company (AFT Design), a new project, SIMUL, under the Bridge Grant Program (Romanian executive agency UEFISCDI) was started (2016 – 2017). 
This project will continue the educational research for innovation at the master's and doctoral levels within the MUROS thematic area (a collaborative multi-UAV application for flood detection).

Keywords: education process, multisensory robotic system, research and innovation project, technology transfer, university-industry partnership

Procedia PDF Downloads 212
24 Identification Strategies for Unknown Victims from Mass Disasters and Unknown Perpetrators from Violent Crime or Terrorist Attacks

Authors: Michael Josef Schwerer

Abstract:

Background: The identification of unknown victims from mass disasters, violent crimes, or terrorist attacks is frequently facilitated through information from missing persons lists, portrait photos, old or recent pictures showing unique characteristics of a person such as scars or tattoos, or simply reference samples from blood relatives for DNA analysis. In contrast, the identification, or at least the characterization, of an unknown perpetrator of criminal or terrorist actions remains challenging, particularly in the absence of material or data for comparison, such as fingerprints previously stored in criminal records. In scenarios that result in a high level of destruction of the perpetrator's corpse, for instance, blast or fire events, the chance of a positive identification using standard techniques is further impaired. Objectives: This study presents the forensic genetic procedures used in the Legal Medicine Service of the German Air Force for the identification of unknown individuals, including cases in which reference samples are not available. Scenarios requiring such efforts predominantly involve aircraft crash investigations, which are routinely carried out by the German Air Force Centre of Aerospace Medicine as one of the institution's essential missions. Further, casework by the military police or military intelligence is supported on the basis of administrative cooperation. In the talk, data from study projects, as well as examples from real casework, will be demonstrated and discussed with the audience. Methods: Forensic genetic identification in our laboratories involves the analysis of Short Tandem Repeats and Single Nucleotide Polymorphisms in nuclear DNA, along with mitochondrial DNA haplotyping. Extended DNA analysis involves phenotypic markers for skin, hair, and eye color, together with the investigation of a person's biogeographic ancestry. Assessment of the biological age of an individual employs CpG-island methylation analysis using bisulfite-converted DNA. Forensic Investigative Genealogy allows the detection of an unknown person's blood relatives in reference databases. Technically, end-point PCR, real-time PCR, capillary electrophoresis, and pyrosequencing, as well as next-generation sequencing using flow-cell-based and chip-based systems, are used. Results and Discussion: Optimization of DNA extraction from various sources, including difficult matrices like formalin-fixed, paraffin-embedded tissues and degraded specimens from decomposed bodies or from decedents exposed to blast or fire events, provides the basis for successful PCR amplification and subsequent genetic profiling. For cases with extremely low yields of extracted DNA, whole-genome preamplification protocols are successfully used, particularly for genetic phenotyping. Improved primer design for CpG-methylation analysis, together with validated sampling strategies for the analyzed substrates from, e.g., lymphocyte-rich organs, allows successful biological age estimation even in bodies with highly degraded tissue material. Conclusions: Successful identification of unknown individuals, or at least their phenotypic characterization using pigmentation markers together with age-informative methylation profiles, possibly supplemented by family-tree searches employing Forensic Investigative Genealogy, can be provided in specialized laboratories. However, standard laboratory procedures must be adapted to work with difficult and highly degraded sample materials.
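
To make the age-estimation step concrete: methylation-based age predictors are commonly linear models over methylation levels at age-informative CpG sites. The following Python sketch illustrates that idea only; the marker names, weights, and intercept are hypothetical placeholders, not the laboratory's actual calibration.

```python
# Minimal sketch of biological age estimation from CpG methylation levels.
# The sites, weights, and intercept below are hypothetical illustrations;
# a real assay calibrates them on reference samples of known age.

# Methylation beta values (fraction methylated, 0..1) per CpG marker,
# e.g., measured by pyrosequencing of bisulfite-converted DNA.
sample_betas = {"ELOVL2_CpG": 0.62, "FHL2_CpG": 0.35, "KLF14_CpG": 0.12}

# Hypothetical linear-model parameters from a calibration cohort.
weights = {"ELOVL2_CpG": 55.0, "FHL2_CpG": 30.0, "KLF14_CpG": 40.0}
intercept = 5.0

def estimate_age(betas, weights, intercept):
    """Linear age predictor: age = intercept + sum(w_i * beta_i)."""
    return intercept + sum(weights[s] * b for s, b in betas.items())

print(f"Estimated age: {estimate_age(sample_betas, weights, intercept):.1f} years")
```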

Keywords: identification, forensic genetics, phenotypic markers, CpG methylation, biological age estimation, forensic investigative genealogy

Procedia PDF Downloads 23
23 Advancing Dialysis Care Access and Health Information Management: A Blueprint for Nairobi Hospital

Authors: Kimberly Winnie Achieng Otieno

Abstract:

The Nairobi Hospital plays a pivotal role in healthcare provision in East and Central Africa, yet it faces challenges in providing accessible dialysis care. This paper explores strategic interventions to enhance dialysis care, improve access, and streamline health information management, with the aim of fostering an integrated and patient-centered healthcare system in the region. Challenges at The Nairobi Hospital: The Nairobi Hospital currently grapples with an insufficient number of dialysis machines, which results in extended turnaround times. This issue stems from both staffing bottlenecks and infrastructural limitations, given the growing demand for renal care services. The hospital's paper-based record-keeping system and fragmented downstream flow of information hinder its ability to manage health data effectively. There is also a need for investment in expanding The Nairobi Hospital's dialysis facilities to far-reaching communities. Setting up satellite clinics closer to people who live in areas far from the main hospital will ensure better access for underserved areas. Community Outreach and Education: Implementing education programs on kidney health within local communities is vital for early detection and prevention. Collaborating with local leaders and organizations can establish a proactive approach to renal health, reducing the demand for acute dialysis interventions. This effort can be amplified by expanding The Nairobi Hospital's corporate social responsibility outreach program with weekend engagement activities such as walks, awareness classes, and fund drives. Enhancing Efficiency in Dialysis Care: Demand for dialysis services continues to rise due to an aging Kenyan population and the increasing prevalence of chronic kidney disease (CKD). Present at this year's International Nursing Conference is a diverse group of caregivers from around the world who can share their process optimization strategies, patient engagement techniques, and resource utilization efficiencies to catapult The Nairobi Hospital into the 21st century and beyond. Plans are underway to offer ongoing education opportunities to keep staff updated on best practices and emerging technologies, in addition to utilizing a patient feedback mechanism to identify areas for improvement and enhance satisfaction. Staff empowerment and suggestion boxes address The Nairobi Hospital's organizational challenges. Current financial constraints may limit a leapfrog in technology integration, such as the acquisition of new dialysis machines and investment in predictive analytics to forecast patient needs and optimize resource allocation. Streamlining Health Information Management: Fully embracing a shift to 100% Electronic Health Records (EHRs) is a transformative step toward efficient health information management. Shared information promotes a holistic understanding of patients' medical history, minimizing redundancies and enhancing overall care quality. To manage the transition to community-based care and EHRs effectively, a phased implementation approach is recommended. Conclusion: By strategically enhancing dialysis care access and streamlining health information management, The Nairobi Hospital can strengthen its position as a leading healthcare institution in East and Central Africa. This comprehensive approach aligns with the hospital's commitment to providing high-quality, accessible, and patient-centered care in an evolving landscape of healthcare delivery.

Keywords: Africa, urology, dialysis, healthcare

Procedia PDF Downloads 27
22 A Bibliometric Analysis of Ukrainian Research Articles on SARS-COV-2 (COVID-19) in Compliance with the Standards of Current Research Information Systems

Authors: Sabina Auhunas

Abstract:

In Ukraine, Open Science is developing rapidly for the benefit of scientists of all branches, providing an opportunity to take a closer look at studies by foreign scientists, as well as to deliver their own scientific data to national and international journals. However, when it comes to the generalization of data on science activities by Ukrainian scientists, these data are often integrated into e-systems that operate on inconsistent and barely related information sources. In order to resolve these issues, developed countries productively use e-systems designed to store and manage research data, such as Current Research Information Systems, which enable combining uncompiled data obtained from different sources. An algorithm for selecting SARS-CoV-2 research articles was designed, by means of which we collected the set of papers published by Ukrainian scientists and uploaded by August 1, 2020. The resulting metadata (document type, open access status, citation count, h-index, most cited documents, international research funding, author counts, and the bibliographic relationship of journals) were taken from the Scopus and Web of Science databases. The study also considered information on COVID-19/SARS-CoV-2-related documents published from December 2019 to September 2020, drawn directly from documents published by authors with a territorial affiliation to Ukraine. These databases provide the information necessary for bibliometric analysis, including details such as copyright, which may not be available in other databases (e.g., ScienceDirect). Search criteria and results for each online database were defined according to the WHO classification of the virus and the disease caused by this virus and are presented (Table 1). First, we identified 89 research papers, which provided the final data set after consolidation and removal of duplicates; however, only 56 papers were used for the analysis. The total number of documents retrieved from the WoS database was 21,641 (48 of them affiliated with Ukraine), and from the Scopus database, 32,478 (41 of them affiliated with Ukraine). In the publication activity of Ukrainian scientists, the following areas prevailed: Education, educational research (9 documents, 20.58%); Social Sciences, interdisciplinary (6 documents, 11.76%); and Economics (4 documents, 8.82%). The highest publication activity by institution type was reported for the Ministry of Education and Science of Ukraine (36% of published scientific papers, or 7 documents), followed by Danylo Halytsky Lviv National Medical University (5 documents, 15%) and the P. L. Shupyk National Medical Academy of Postgraduate Education (4 documents, 12%). Research activities by Ukrainian scientists were funded by five entities: the Belgian Development Cooperation, the National Institutes of Health (NIH, U.S.), the United States Department of Health & Human Services, a grant from the Whitney and Betty MacMillan Center for International and Area Studies at Yale, and a grant from the Yale Women Faculty Forum. Based on the results of the analysis, we obtained a set of published articles and preprints to be assessed on a variety of features in upcoming studies, including citation count, most cited documents, the bibliographic relationship of journals, and reference linking. Further research on the development of the national scientific e-database continues using new analytical methods.
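
As an aside on the consolidation step, the deduplication of records pooled from Scopus and WoS can be pictured with a short pandas sketch. The column names and the DOI-then-title matching rule below are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of consolidating Scopus and Web of Science exports and removing
# duplicates before bibliometric analysis. Column names are assumed.
import pandas as pd

scopus = pd.DataFrame({
    "doi": ["10.1/a", "10.1/b", None],
    "title": ["Paper A", "Paper B", "Paper C"],
    "source": "Scopus",
})
wos = pd.DataFrame({
    "doi": ["10.1/b", None, "10.1/d"],
    "title": ["Paper B", "Paper C", "Paper D"],
    "source": "WoS",
})

merged = pd.concat([scopus, wos], ignore_index=True)

# Prefer DOI for matching; fall back to a normalized title when DOI is missing.
merged["key"] = merged["doi"].fillna(
    merged["title"].str.lower().str.replace(r"\W+", " ", regex=True).str.strip()
)
deduplicated = merged.drop_duplicates(subset="key", keep="first")
print(deduplicated[["doi", "title", "source"]])
```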

Keywords: content analysis, COVID-19, scientometrics, text mining

Procedia PDF Downloads 93
21 Exploratory Characterization of Antibacterial Efficacy of Synthesized Nanoparticles on Staphylococcus Isolates from Hospital Specimens in Saudi Arabia

Authors: Reham K. Sebaih, Afaf I. Shehata, Awatif A. Hindi, Tarek Gheith, Amal A. Hazzani, Anas Al-Orjan

Abstract:

Staphylococcus spp. are ubiquitous gram-positive bacteria often associated with infections, especially nosocomial infections, and with antibiotic resistance. The study of pathogenic bacteria and their use as a tool in nanobiology and molecular genetics research is among the latest research trends in the modern characterization and definition of multiresistant bacteria, including staphylococci, which are widespread all over the world and particularly in Saudi Arabia. The present study was conducted to evaluate the effect of five different types of nanoparticles (biosynthesized zinc oxide, and spherical and rod-shaped silver and gold nanoparticles) and their antibacterial impact on Staphylococcus species. Ninety-six isolates of Staphylococcus species (Staphylococcus aureus, Staphylococcus epidermidis, and methicillin-resistant Staphylococcus aureus, MRSA) were collected from different sources between March 2011 and June 2011. All isolates were obtained from the inpatient and outpatient departments at the Royal Commission Hospital in Yanbu Industrial City, Saudi Arabia. The isolation percentage was higher from males (55%) than from females (45%). Among males, the isolates were Staphylococcus epidermidis (47%), Staphylococcus aureus (28%), and MRSA (25%); among females, Staphylococcus aureus was the most frequent (47%), followed by MRSA (30%) and Staphylococcus epidermidis (23%). Staphylococcus aureus isolates were most frequently obtained from wound swabs (51.42%), followed by vaginal swabs (25.71%), whereas Staphylococcus epidermidis was found at higher percentages in blood (37.14%) and wound swabs (34.21%). The highest percentage of MRSA isolates (80.77%) came from wound swabs, while the remainder came from nostrils (19.23%). Staphylococcus species were isolated at the highest percentage from the hospital emergency department: Staphylococcus aureus (59.37%), MRSA (28.13%), and Staphylococcus epidermidis (12.5%). To evaluate the antibacterial properties of zinc oxide, silver, and gold nanoparticles as alternatives to conventional antibacterial agents, the staphylococcal isolates from hospital sources were screened. Gold and silver rod-shaped nanoparticles were active against all isolates of Staphylococcus species, while zinc oxide nanoparticles gave a sensitivity impact in 52% of isolates (48% resistant). The spherical gold and silver nanoparticles did not show any effect on Staphylococcus species. Zinc oxide nanoparticles gave a bactericidal impact on 25% and a bacteriostatic impact on 75% of the Staphylococcus isolates. The association of nanoparticles with the staphylococcal isolates was detected by scanning electron microscopy (SEM) imaging of some bacteriostatic isolates treated with zinc oxide nanoparticles (Staphylococcus aureus, Staphylococcus epidermidis, and MRSA), which showed overlapping bacterial cells in reduced numbers and some appendages with deformities in external shape. Molecular analysis was applied by multiplex polymerase chain reaction (PCR) for the identification of genes within staphylococcal pathogens. A multiplex PCR method was developed using six primer pairs to detect different genes, with 50 bp and 100 bp DNA ladder markers. The molecular gene typing amplicons ranged from 93 bp to 326 bp for Staphylococcus aureus and MRSA (TSST-1, mecA, femA, and eta), while the band borders were from 546 bp to 682 bp for Staphylococcus epidermidis (icaAB and atlE). Sixteen isolates of Staphylococcus aureus and MRSA were positive for the femA gene at 132 bp, which allowed the use of this gene as an internal positive control; fifteen isolates of Staphylococcus aureus and MRSA were positive for the mecA gene at 163 bp, the gene responsible for methicillin resistance. Two isolates of Staphylococcus aureus and MRSA were positive for the TSST-1 gene at 326 bp, which is responsible for toxic shock syndrome in some Staphylococcus species, and none were positive for the eta gene at 102 bp, which is responsible for exfoliative toxins. Six isolates of Staphylococcus epidermidis were positive for the atlE gene at 682 bp, which is responsible for initial adherence, and three isolates of Staphylococcus epidermidis were positive for the icaAB gene at 546 bp, which mediates biofilm formation. In conclusion, this study demonstrates the ability of gene detection to discriminate between infecting Staphylococcus strains; considered as biological tests, these assays may potentiate the clinical criteria used for the diagnosis of septicemia or catheter-related infections.
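
As an illustration of the multiplex PCR readout described above, the panel can be pictured as a lookup from observed amplicon sizes to gene targets. The band sizes below are those reported in the abstract; the size tolerance and calling logic are illustrative assumptions.

```python
# Sketch: interpreting multiplex PCR band patterns for the staphylococcal
# gene panel described in the abstract. Band sizes (bp) are from the text;
# the +/- tolerance and calling logic are illustrative assumptions.

PANEL = {  # expected amplicon size (bp) -> gene target
    132: "femA (S. aureus internal control)",
    163: "mecA (methicillin resistance)",
    326: "TSST-1 (toxic shock syndrome toxin)",
    102: "eta (exfoliative toxin A)",
    682: "atlE (initial adherence, S. epidermidis)",
    546: "icaAB (biofilm formation, S. epidermidis)",
}

def call_genes(observed_bands_bp, tolerance_bp=5):
    """Match observed gel bands to panel targets within a size tolerance."""
    calls = []
    for band in observed_bands_bp:
        for expected, gene in PANEL.items():
            if abs(band - expected) <= tolerance_bp:
                calls.append(gene)
    return calls

# Example: an isolate showing bands at ~132 bp and ~163 bp would be called
# femA-positive and mecA-positive, consistent with MRSA.
print(call_genes([131, 164]))
```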

Keywords: multiplex polymerase chain reaction, toxic shock syndrome, Staphylococcus aureus, nosocomial infections

Procedia PDF Downloads 314
20 Interference of Polymer Addition in Wastewater Microbial Surveys: Case Study of Viral Retention in Sludges

Authors: Doriane Delafosse, Dominique Fontvieille

Abstract:

Background: Wastewater treatment plants (WWTPs) generally display significant efficacy in virus retention, yet this efficacy is sometimes highly variable, partly in relation to large fluctuating loads at the head of the plant and partly because of episodic dysfunctions in some treatment processes. The problem is especially sensitive when human enteric viruses, such as human Noroviruses Genogroup I or Adenoviruses, are of concern: their release downstream of the WWTP, into environments often interconnected with recreational areas, may be very harmful to human communities even at low concentrations. This points out the importance of permanent WWTP monitoring, on the basis of which internal treatment processes could be adjusted. One way to adjust primary treatments is to add coagulants and flocculants to sewage ahead of the settling tanks to improve decantation. In this work, the sludges produced by three coagulants (two organic, one mineral), four flocculants (three cationic, one anionic), and their combinations were studied for their efficacy in human enteric virus retention. Sewage samples came from a WWTP in the vicinity of the laboratory. All experiments were performed three times and in triplicate in laboratory pilots, using Murine Norovirus (MNV-1), a surrogate of human Norovirus, as an internal control (spiking). Viruses were quantified by (RT-)qPCR after nucleic acid extraction from both the treated water and the sediment. Results: Low values of sludge virus retention (from 4 to 8% of the initial sewage concentration) were observed with each cationic organic flocculant added to wastewater without a coagulant. The largest part of the virus load was detected in the treated water (48 to 90%); however, the totals did not counterbalance the amount of introduced virus (MNV-1). These results pertained to two types of cationic flocculants, branched and linear, and, in the latter case, to two percentages of cations. Results were quite similar for the association of a linear cationic organic coagulant with an anionic flocculant, suggesting that differences between water and sludges would sometimes be related to virus size or virus origin (autochthonous/allochthonous). FeCl₃, as a mineral coagulant associated with an anionic flocculant, significantly increased both auto- and allochthonous virus retention in the sediments (15 to 34%). Accordingly, the virus load in treated water was lower (14 to 48%), but the total still did not reach the amount of introduced virus (MNV-1). It also appeared that virus retrieval from a bare 0.1 M NaCl suspension varied rather strongly with the FeCl₃ concentration, suggesting an inhibiting effect on the molecular analysis used to detect the virus. Finally, no viruses were detected in either phase (sediment and water) with the combination of a branched cationic coagulant and a linear anionic flocculant, which was later demonstrated to be, here also, an effect of the polymers on the molecular virus-detection analysis. Conclusions: The FeCl₃-anionic flocculant combination gave the highest performance in the decantation-based virus removal process. However, large unbalanced values were observed in the spiking experiments, suggesting that polymers place additional obstacles in the way of both the elution buffer and the lysis buffer reaching the virus. The situation was probably even worse for autochthonous viruses already embedded in the sewage's particulate matter. Polymers and FeCl₃ also appeared to interfere with some steps of the molecular analyses. More attention should be paid to such impediments wherever chemical additives are considered for enhancing WWTP processes. Acknowledgments: This research was supported by the ABIOLAB laboratory (Montbonnot Saint-Martin, France) and by the ASPOSAN association. Field experiments were possible thanks to the Grand Chambéry WWTP authorities (Chambéry, France).
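
The unbalanced spiking totals discussed above amount to a simple mass balance: MNV-1 copies recovered in the sludge and treated-water phases should, ideally, sum to the spiked amount. A minimal sketch of that bookkeeping, with made-up copy numbers:

```python
# Sketch of the spiking mass balance used to judge virus recovery.
# Copy numbers below are made-up illustrations, not study data.

spiked_copies = 1.0e7          # MNV-1 genome copies added to the sewage sample
copies_in_sludge = 6.0e5       # quantified by RT-qPCR in the sediment phase
copies_in_water = 7.2e6        # quantified by RT-qPCR in the treated water

retention_pct = 100 * copies_in_sludge / spiked_copies
water_pct = 100 * copies_in_water / spiked_copies
unaccounted_pct = 100 - retention_pct - water_pct  # losses or assay inhibition

print(f"Sludge retention: {retention_pct:.0f}%")
print(f"Treated water:    {water_pct:.0f}%")
print(f"Unaccounted:      {unaccounted_pct:.0f}%  (e.g., extraction/PCR inhibition)")
```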

Keywords: flocculants-coagulants, polymers, enteric viruses, wastewater sedimentation treatment plant

Procedia PDF Downloads 96
19 Restoring Total Form and Function in Patients with Lower Limb Bony Defects Utilizing Patient-Specific Fused Deposition Modelling: A Neoteric Multidisciplinary Reconstructive Approach

Authors: Divya SY. Ang, Mark B. Tan, Nicholas EM. Yeo, Siti RB. Sudirman, Khong Yik Chew

Abstract:

Introduction: The importance of the amalgamation of technological and engineering advances with surgical principles of reconstruction cannot be overemphasized. With earlier detection of cancer and the consequences of high-speed living and neglect, such as traumatic injuries and infection, increasingly younger patients present with bone defects. This may result in malformations and suboptimal function that are more noticeable and palpable in the younger, active demographic. Our team proposes a technique that encapsulates a mesh of multidisciplinary effort, tissue engineering, and reconstructive principles. Methods/Materials: Our patient was a young competitive footballer in his early 30s who was diagnosed with submandibular adenoid cystic carcinoma with bony involvement. He was thus counselled for a right hemimandibulectomy, floor-of-mouth resection, right selective neck dissection, tracheostomy, and free fibular flap reconstruction of his mandible, and required post-operative radiotherapy. Being young and in his prime sportsman years, he was unable to accept the morbidities associated with using his fibula to reconstruct his mandible, despite it being the gold-standard reconstructive option. The fibula is an ideal vascularized bone flap because it is reliable and easily shaped, with relatively minimal impact on functional outcomes. The fibula contributes about 30% of weight-bearing and is the attachment for the lateral compartment muscles; in footballers, it is stronger with respect to lateral bending. When harvesting the fibula, the distal 6-8 cm, up to 10% of the total length, is preserved to maintain the ankle's stability, thus minimizing the impact on daily activities. However, studies have noted gait variability post-operatively; therefore, returning to a premorbid competitive level may be doubtful. To improve his functional outcomes, the decision was made to try to restore the fibula's form and function. Using the concept of Fused Deposition Modelling (FDM), our team, comprising Plastic Surgery, Otolaryngology, Orthopedics, and Radiology, worked with Osteopore to design a 3D bioresorbable implant to regenerate the fibula defect (14.5 cm). Bone marrow was harvested by reaming the contralateral hip prior to the wide resection, and 30 ml of the patient's blood was obtained for extracting platelet-rich plasma. These were packed into the Osteopore 3D-printed bone scaffold, which was then secured into the fibula defect with titanium plates and screws. The flexor hallucis longus and soleus were anchored along the construct and the interosseous membrane, all done in a single setting. Results: He was reviewed closely as an outpatient over 10 months post-operatively and reported no discernible loss or difference in ankle function. He is satisfied and back in training, and our team has video and photographic documentation that substantiates his progress. Conclusion: FDM allows regeneration of long bone defects. However, we also aimed to restore the eversion and inversion that are imperative for footballers and hence reattached the previously dissected muscles along the length of the Osteopore implant. We believe that the reattachment of the muscles not only stabilizes the construct but also allows optimum muscle tensioning when moving the ankle. This is a simple but effective technique for restoring complete function and form in a young patient whose fine muscle control is imperative to his way of life.

Keywords: fused deposition modelling, functional reconstruction, lower limb bony defects, regenerative surgery, 3D printing, tissue engineering

Procedia PDF Downloads 47
18 Impact of Marangoni Stress and Mobile Surface Charge on Electrokinetics of Ionic Liquids Over Hydrophobic Surfaces

Authors: Somnath Bhattacharyya

Abstract:

The mobile adsorbed surface charge on hydrophobic surfaces can modify the velocity slip condition as well as create a Marangoni stress at the interface. The functionalized hydrophobic walls of micro/nanopores, e.g., graphene nanochannels, may possess physisorbed ions. The lateral mobility of these physisorbed ions creates a friction force as well as an electric force, leading to a modification of the velocity slip condition at the hydrophobic surface. In addition, the non-uniform distribution of these surface ions creates a surface tension gradient, leading to a Marangoni stress. The impact of the mobile surface charge on the streaming potential and the electrochemical energy conversion efficiency in a pressure-driven flow of ionized liquid through a nanopore is addressed. Enhanced electro-osmotic flow through the hydrophobic nanochannel is also analyzed. The mean-field electrokinetic model is modified to take into account the short-range non-electrostatic steric interactions and the long-range Coulomb correlations. The steric interaction is modeled by considering the ions as charged hard spheres of finite radius suspended in the electrolyte medium. The electrochemical potential is modified by including the volume exclusion effect, which is modeled based on the BMCSL equation of state. The electrostatic correlation is accounted for in the ionic self-energy. The extremal of the self-energy leads to a fourth-order Poisson equation for the electric field. The ion transport is governed by the modified Nernst-Planck equation, which includes the ion steric interactions, the Born force arising from the spatial variation of the dielectric permittivity, and the dielectrophoretic force on the hydrated ions. This ion transport equation is coupled with the Navier-Stokes equation describing the flow of the ionized fluid and the fourth-order Poisson equation for the electric field. We numerically solve the coupled set of nonlinear governing equations, along with the prescribed boundary conditions, by adopting a control volume approach over a staggered grid arrangement. In the staggered grid arrangement, velocity components are stored on the midpoints of the cell faces to which they are normal, whereas the remaining scalar variables are stored at the center of each cell. The convection and electromigration terms are discretized at each interface of the control volumes using the total variation diminishing (TVD) approach to capture the strong convection resulting from the highly enhanced fluid flow under the modified model. In order to link the pressure to the continuity equation, we adopt a pressure-correction-based iterative SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm, in which the discretized continuity equation is converted into a Poisson equation involving pressure correction terms. Our results show that the physisorbed ions on a hydrophobic surface create an enhanced slip velocity under streaming potential conditions, which enhances the convection current, whereas the electroosmotic flow attenuates due to the mobile surface ions.
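
The TVD treatment of the convection terms can be illustrated in one dimension: a slope-limited upwind update that avoids the spurious oscillations of an unlimited second-order scheme. The Python sketch below implements a minmod-limited scheme for scalar advection; it is a didactic reduction under simplified assumptions, not the authors' full staggered-grid solver.

```python
# Didactic 1D sketch of a TVD (minmod-limited) finite-volume update for
# scalar advection, illustrating the flux-limiting idea used for the
# convection/electromigration terms. Not the full staggered-grid solver.
import numpy as np

def minmod(a, b):
    """Minmod limiter: smallest slope when signs agree, zero across extrema."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_advect(u, a, dx, dt, steps):
    """Advance u_t + a*u_x = 0 (a > 0) with a MUSCL/minmod TVD scheme."""
    for _ in range(steps):
        up = np.pad(u, 2, mode="edge")                 # two ghost cells per side
        slope = minmod(up[1:-1] - up[:-2], up[2:] - up[1:-1])
        left_state = up[1:-2] + 0.5 * slope[:-1]       # upwind states at faces
        flux = a * left_state                          # one flux per interface
        u = u - (dt / dx) * (flux[1:] - flux[:-1])
    return u

# Example: a step profile advected without new over/undershoots; the total
# variation stays bounded by that of the initial data (here, 1.0).
x = np.linspace(0.0, 1.0, 200)
u0 = np.where(x < 0.3, 1.0, 0.0)
u1 = tvd_advect(u0, a=1.0, dx=x[1] - x[0], dt=0.002, steps=100)
print(f"total variation: {np.abs(np.diff(u1)).sum():.3f}")
```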

Keywords: microfluidics, electroosmosis, streaming potential, electrostatic correlation, finite sized ions

Procedia PDF Downloads 46
17 Holistic Urban Development: Incorporating Both Global and Local Optimization

Authors: Christoph Opperer

Abstract:

The rapid urbanization of modern societies and the need for sustainable urban development demand innovative solutions that meet both individual and collective needs while addressing environmental concerns. To address these challenges, this paper presents a study that explores the potential of spatial and energetic/ecological optimization to enhance the performance of urban settlements, focusing on both architectural and urban scales. The study focuses on the application of biological principles and self-organization processes in urban planning and design, aiming to achieve a balance between ecological performance, architectural quality, and individual living conditions. The research adopts a case study approach, focusing on a 10-hectare brownfield site in the south of Vienna. The site is surrounded by a small-scale built environment, which makes it an appropriate starting point for the research and design process. However, the selected urban form is not a prerequisite for the proposed design methodology, as the findings can be applied to various urban forms and densities. The methodology involves dividing the overall building mass and program into individual small housing units. A computational model has been developed to optimize the distribution of these units, considering factors such as solar exposure/radiation, views, privacy, proximity to sources of disturbance (such as noise), and minimal internal circulation areas. The model also ensures that existing vegetation and buildings on the site are preserved and incorporated into the optimization and design process. The model allows for simultaneous optimization at two scales, architectural and urban design, which have traditionally been addressed sequentially. This holistic design approach leads to individual and collective benefits, resulting in urban environments that foster a balance between ecology and architectural quality. The results of the optimization process demonstrate a seemingly random distribution of housing units that is, in fact, a densified hybrid between traditional garden settlements and allotment settlements. This urban typology was selected due to its compatibility with the surrounding urban context, although the presented methodology can be extended to other forms of urban development and density levels. The benefits of this approach are threefold. First, it allows for the determination of the ideal housing distribution that optimizes solar radiation for each building density level, essentially extending the concept of sustainable building to the urban scale. Second, the method enhances living quality by considering the orientation and positioning of individual functions within each housing unit, achieving optimal views and privacy. Third, the algorithm's flexibility and robustness facilitate the efficient implementation of urban development with various stakeholders, architects, and construction companies without compromising its performance. The core of the research is the application of global and local optimization strategies to create efficient design solutions. By considering both the performance of individual units and the collective performance of the urban aggregation, we ensure an optimal balance between private and communal benefits. By promoting a holistic understanding of urban ecology and integrating advanced optimization strategies, our methodology offers a sustainable and efficient solution to the challenges of modern urbanization.
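
As a rough sketch of such a unit-distribution search, a simulated-annealing loop can move units on a grid under a weighted cost combining a solar-exposure proxy and a privacy/spacing penalty. The grid, weights, and both objective terms below are illustrative assumptions, not the study's actual model.

```python
# Illustrative sketch of optimizing housing-unit placement on a grid with a
# weighted cost (solar-exposure proxy + privacy/spacing penalty), using
# simulated annealing. All weights and terms are assumptions for illustration.
import math
import random

random.seed(1)
GRID, N_UNITS = 20, 12

def cost(units):
    solar = sum(y for _, y in units)            # proxy: sunnier rows score higher
    crowding = sum(
        1.0 / (abs(ax - bx) + abs(ay - by))     # penalize units packed together
        for i, (ax, ay) in enumerate(units)
        for bx, by in units[i + 1:]
    )
    return -0.1 * solar + 2.0 * crowding        # maximize sun, penalize crowding

units = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], N_UNITS)
best, best_c, temp = list(units), cost(units), 1.0
for step in range(5000):
    i = random.randrange(N_UNITS)
    candidate = list(units)
    candidate[i] = (random.randrange(GRID), random.randrange(GRID))
    if len(set(candidate)) < N_UNITS:
        continue                                 # reject overlapping units
    delta = cost(candidate) - cost(units)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        units = candidate                        # accept improving/exploratory move
        if cost(units) < best_c:
            best, best_c = list(units), cost(units)
    temp *= 0.999                                # cooling schedule
print(f"best cost: {best_c:.2f}")
```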

Keywords: sustainable development, self-organization, ecological performance, solar radiation and exposure, daylight, visibility, accessibility, spatial distribution, local and global optimization

Procedia PDF Downloads 40
16 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator

Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic

Abstract:

The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and the similarly low failure rates of a typically 30%-rated back-to-back power electronics converter in 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: the maintenance-free operation afforded by its brushless structure; 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid integration properties, including simpler protection for the low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion relative to hysteresis counterparts with variable switching rates and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growth of research into sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme to be presented overcomes this shortcoming. The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio upon which the underlying MRAS observer for rotor angular velocity and position relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoderless controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
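
The MRAS principle named above can be sketched generically: a reference model independent of the estimated quantity and an adaptive model driven by the estimate produce comparable outputs, and a PI adaptation law drives their error to zero, yielding the speed estimate. The sketch below uses first-order toy dynamics for illustration; it is not the BDFRG-specific observer of the paper.

```python
# Generic sketch of an MRAS estimation loop: a PI adaptation law adjusts the
# estimated speed until the adaptive model tracks the reference model.
# First-order toy dynamics; not the BDFRG-specific observer of the paper.

dt, kp, ki = 1e-4, 50.0, 2000.0
true_speed = 100.0            # rad/s, what the observer should converge to
w_hat, integ = 0.0, 0.0       # speed estimate and PI integrator state
x_ref = x_adp = 1.0           # reference-model and adaptive-model states

for _ in range(20000):
    x_ref += dt * (-x_ref + true_speed)    # reference model (uses true speed)
    x_adp += dt * (-x_adp + w_hat)         # adaptive model (uses the estimate)
    err = x_ref - x_adp                    # error drives the adaptation law
    integ += ki * err * dt
    w_hat = kp * err + integ               # PI adaptation law

print(f"estimated speed: {w_hat:.1f} rad/s")   # converges toward 100.0
```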

Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion

Procedia PDF Downloads 39
15 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems

Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana

Abstract:

Large-scale critical industrial scheduling problems are based on the Resource-Constrained Project Scheduling Problem (RCPSP) and necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (projects, tasks, and resources), each having its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, already existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to the resources' availability and the tasks' logical relationships, also integrates several project-specific constraints for outage management, such as handling resource incompatibility, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effectiveness suits the nature of the problem and the need to run several scenarios (30-40 simulations) before finalizing the schedules. The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex, hard constraints, such as Finish-Start precedence relationships (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements on the specific state of 'cyclic' resources (which can be in one of several possible states, only one at a time) to perform tasks (which can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization, and it schedules 80 cyclic resources with 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay risk mitigation measures.
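
As a rough illustration of such a greedy, time-stepped heuristic: at each time step, eligible tasks (all predecessors finished) are ranked by a dynamic cost and started if resources allow. The task set, the single renewable resource, and the duration-based priority below are simplified assumptions, not the industrial solution itself.

```python
# Minimal sketch of a greedy, time-stepped RCPSP heuristic: at each time step,
# rank eligible tasks by a dynamic cost and start them if resources allow.
# The task set, cost function, and single resource are illustrative.

tasks = {  # id: (duration, resource_demand, predecessors)
    "A": (3, 2, []), "B": (2, 3, ["A"]), "C": (4, 2, ["A"]),
    "D": (2, 2, ["B", "C"]),
}
CAPACITY = 4

done, running, start = set(), {}, {}   # running: id -> remaining duration
t = 0
while len(done) < len(tasks):
    # Finish tasks whose remaining duration has elapsed.
    for tid in [k for k, rem in running.items() if rem == 0]:
        done.add(tid)
        del running[tid]
    used = sum(tasks[tid][1] for tid in running)
    eligible = [tid for tid, (_, _, preds) in tasks.items()
                if tid not in done and tid not in running
                and all(p in done for p in preds)]
    # Dynamic cost: prefer longer tasks (a critical-path-style priority).
    for tid in sorted(eligible, key=lambda k: -tasks[k][0]):
        if used + tasks[tid][1] <= CAPACITY:
            running[tid] = tasks[tid][0]
            start[tid] = t
            used += tasks[tid][1]
    running = {k: rem - 1 for k, rem in running.items()}
    t += 1

print(start, "makespan:", max(start[k] + tasks[k][0] for k in tasks))
```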

Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP

Procedia PDF Downloads 162
14 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India

Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony

Abstract:

The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of vital parameters of mothers and neonates offers a potential opportunity to impact access as well as the quality of care in vulnerable populations. We describe the design, development, and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperature in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device providing a 'real-time' communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. Requirements for the device were: a long battery life of up to 28 days (with a sampling frequency of 5/hr); robustness; IP68 hermetic sealing; and a human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I and IIa for the evaluation of safety and efficacy in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies - 3 healthy babies, 10 stable babies in the Neonatal Intensive Care Unit (NICU), and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-shaped sensor, about three coins thick and weighing about 8 g, was secured onto the abdomen for the babies and over the upper arm for the adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes, vis-a-vis 3 minutes observed with a precision-grade digital thermometer used as the reference standard. The accuracy was within ±0.1°C of the reference standard in the temperature range of 25-40°C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings. The mean skin temperatures measured in the adults by the sensor were about 2°C lower than the axillary temperature readings (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t = 13.8; p < 0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13°C (sensor = 36.9 vs digital = 36.7; p = 0.2). The neonates in the NICU provided a total of 130 paired readings. Their mean skin temperature measured by the sensor was 0.6°C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading not differing from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p = 0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/armband, and one volunteer developed a mild skin allergy. This proof-of-concept study shows that real-time monitoring of temperature is technically feasible and that this innovation appears to be promising in terms of both safety and accuracy (with appropriate calibration) for improved maternal and neonatal health.
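
The sensor-versus-reference comparisons above are paired measurements, so the reported statistics correspond to a paired t-test. A minimal sketch of that analysis, assuming SciPy and using made-up readings rather than study data:

```python
# Sketch of the paired sensor-vs-reference comparison reported above.
# Readings below are made-up illustrations, not study data.
import numpy as np
from scipy import stats

sensor = np.array([34.2, 34.0, 34.3, 33.9, 34.1, 34.2])     # wearable sensor, deg C
reference = np.array([36.2, 36.0, 36.1, 36.0, 36.2, 36.1])  # digital thermometer

t_stat, p_value = stats.ttest_rel(sensor, reference)        # paired t-test
bias = np.mean(sensor - reference)                          # mean sensor bias
print(f"mean bias: {bias:.2f} deg C, t = {t_stat:.1f}, p = {p_value:.4f}")
```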

Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns

Procedia PDF Downloads 181
13 Revealing Celtic and Norse Mythological Depths through Dragon Age’s Tattoos and Narratives

Authors: Charles W. MacQuarrie, Rachel R. Tatro Duarte

Abstract:

This paper explores the representation of medieval identity within games such as Dragon Age, Elden Ring, and Hellblade: Senua's Sacrifice, fantasy role-playing games that draw effectively, and problematically, on Celtic and Norse mythologies. Focusing on tattoos, onomastics, and accent as visual and oral markers of status and ethnicity, this study analyzes how the games' interplay between mythology, character narratives, and visual storytelling enriches their themes and offers players an immersive, but sometimes baldly ahistorical, connection between ancient mythologies and contemporary digital storytelling. Dragon Age is a triple-A game series that, together with Hellblade: Senua's Sacrifice and Elden Ring, has captivated gamers worldwide with its presentation of an idealized medieval world inspired by the lore of Celtic and Norse mythologies. This paper sets out to explore the intricate relationships between tattoos, accent, and character narratives in the games, drawing parallels to themes, heroic figures, and gods from Celtic and Norse mythologies. Tattoos as Mythic and Ethnic Markers: This study analyzes how tattoos in Dragon Age visually represent mythological elements from both Celtic and Norse cultures, serving as conduits of cultural identity and narratives. The nature of these tattoos reflects the slave, criminal, and warrior associations made in classical and medieval literature, and some of the episodes concerning tattoos in the games have close analogs or sources in that literature. For example, the elvish character Solas, in Dragon Age: Inquisition, removes a slave tattoo from the face of a lower-status elf in an episode reminiscent of Bridget removing the stigmata from Connallus in the Vita Prima of Saint Bridget. Character Narratives: The paper examines how characters' personal narratives in the games parallel the archetypal journeys of Celtic heroes and Norse gods, with a focus on their relationships to mythic themes. In these games, the elves usually have Welsh or Irish accents, are close to nature, magically powerful, oppressed by apparently Anglo-Saxon humans and Norse dwarves, and wear facial tattoos. The association of Welsh voices with fairies and demons is older than the reference in Shakespeare's Merry Wives of Windsor, or even the Anglo-Saxon Life of Saint Guthlac. The English-speaking world, and the fantasy genre of literature and gaming, undoubtedly driven by Tolkien, see elves as Welsh speakers and as having Welsh accents when speaking English. Comparative Analysis: A comparative approach is employed to reveal connections, adaptations, and unique interpretations of the motifs of tattoos and narrative themes in Dragon Age compared to those found in Celtic and Norse mythologies. Methodology: The study uses a comparative approach to examine the similarities and distinctions between Celtic and Norse mythologies and their counterparts in video games. The analysis encompasses character studies, narrative exploration, visual symbolism, and the historical context of Celtic and Norse cultures. Mythic Visuals: This study showcases how tattoos, as visual symbols, encapsulate mythic narratives, beliefs, and cultural identity, echoing Celtic and Norse visual motifs. Archetypal Journeys: The paper analyzes how character arcs mirror the heroic journeys of Celtic and Norse mythological figures, allowing players to engage with mythic narratives on a personal level. Cultural Interplay: The study discusses how the games' portrayal of tattoos and narratives both preserves and reinterprets elements from Celtic and Norse mythologies, fostering a connection between ancient cultures and modern digital storytelling. Conclusion: By exploring the interconnectedness of tattoos and character narratives in Dragon Age, this paper reveals the game series' ability to act as a bridge between ancient mythologies and contemporary gaming. By drawing inspiration from Celtic heroes and Norse gods and translating them into digital narratives and visual motifs, Dragon Age offers players a multi-dimensional engagement with mythic themes and a unique lens through which to appreciate the enduring allure of these cultures.

Keywords: comparative analysis, character narratives, video games and literature, tattoos, immersive storytelling, character development, mythological influences, Celtic mythology, Norse mythology

Procedia PDF Downloads 41
12 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-yne Click Reaction

Authors: Leila Safazadeh, Brad Berron

Abstract:

Self-assembled monolayers have a tremendous impact in interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers where the environment-interfacing portion of the adsorbate has a greater level of conformational freedom when compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offer new opportunities for tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold (SPR or QCM), these chemistries must be compatible with gold substrates. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of Y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by the grafting of a loosely packed monolayer through a light-initiated thiol-yne reaction. The orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl-, amine-, alcohol-, and alkyl-terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry, and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups. Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with an equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure that allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect these data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology in the following manner: molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for the recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where the surface itself is imprinted, and there is no bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors. Our unique contribution is the spatial imprinting not only of physical cues (as in current imprinted-monolayer techniques) but also of complementary chemical cues. This is accomplished through photo-click grafting of preassembled ligands around a protein template. This conference is important for my development as a graduate student, broadening my appreciation of sensor development beyond surface chemistry.

Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting

Procedia PDF Downloads 202
11 Discovering Causal Structure from Observations: The Relationships between Technophile Attitude, Users' Value, and Use Intention of a Mobility Management Travel App

Authors: Aliasghar Mehdizadeh Dastjerdi, Francisco Camara Pereira

Abstract:

The increasing complexity of, and demand for, transport services strain transportation systems, especially in urban areas with limited possibilities for building new infrastructure. The solution to this challenge requires changes in travel behavior. One of the proposed means to induce such change is multimodal travel apps. This paper describes a study of the intention to use a real-time multimodal travel app aimed at motivating travel behavior change in the Greater Copenhagen Region (Denmark) toward promoting sustainable transport options. The proposed app is a multi-faceted smartphone app including both travel information and persuasive strategies such as health and environmental feedback, tailored travel options, self-monitoring, tunneling users toward green behavior, social networking, nudging, and gamification elements. The prospect for mobility-management travel apps to stimulate sustainable mobility rests not only on the original and proper employment of behavior change strategies but also on explicitly anchoring them in established theoretical constructs from behavioral theories. The theoretical foundation is important because it positively and significantly influences the effectiveness of the system. However, there is a gap in current knowledge regarding the study of mobility-management travel apps grounded in behavioral theories, which should be explored further. This study addresses this gap through a social cognitive theory-based examination. In contrast to conventional methods in technology adoption research, it adopts a reverse approach in which the associations between theoretical constructs are explored by the Max-Min Hill-Climbing (MMHC) algorithm, a hybrid causal discovery method. A technology-use preference survey was designed to collect data. The survey elicited different groups of variables, including (1) three groups of users' motives for using the app, namely gain motives (e.g., saving travel time and cost), hedonic motives (e.g., enjoyment), and normative motives (e.g., less travel-related CO2 production); (2) technology-related self-concepts (i.e., technophile attitude); and (3) use intention of the travel app. The questionnaire items formed the input to causal discovery, which learns the causal structure of the data. Discovering causal relationships from observational data is a critical challenge with applications in many research fields. The estimated causal structure shows that the two constructs of gain motives and technophilia have a causal effect on adoption intention. Likewise, there is a causal relationship from technophilia to both gain and hedonic motives. In line with the findings of prior studies, this highlights the importance of the functional value of the travel app, as well as technology self-concept, as two important variables for adoption intention. Furthermore, the results indicate the effect of technophile attitude on developing gain and hedonic motives. The causal structure shows hierarchical associations between the three groups of users' motives. These can be explained by the 'frustration-regression' principle of Alderfer's ERG (Existence, Relatedness, and Growth) theory of needs, meaning that when a higher-level need remains unfulfilled, a person may regress to lower-level needs that appear easier to satisfy. To conclude, this study shows the capability of causal discovery methods to learn the causal structure of a theoretical model and, accordingly, to interpret established associations.
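
To make the reverse, data-driven approach concrete: given survey responses as a data frame, an MMHC implementation learns a directed graph whose edges are then read as the estimated causal structure. The sketch below assumes pgmpy's MmhcEstimator (worth verifying against the installed version) and synthetic binary responses; the variable names and data are invented for illustration.

```python
# Sketch of learning a causal structure from survey data with Max-Min
# Hill-Climbing. Assumes pgmpy's MmhcEstimator (check your installed version);
# the variables and synthetic responses are invented for illustration.
import numpy as np
import pandas as pd
from pgmpy.estimators import MmhcEstimator

rng = np.random.default_rng(0)
n = 500
technophilia = rng.integers(0, 2, n)
gain = (technophilia + rng.integers(0, 2, n) > 1).astype(int)   # depends on technophilia
hedonic = (technophilia + rng.integers(0, 2, n) > 1).astype(int)
intention = ((gain + technophilia) > 1).astype(int)             # depends on both

data = pd.DataFrame({"technophilia": technophilia, "gain": gain,
                     "hedonic": hedonic, "intention": intention})

# Stage 1 (Max-Min Parents and Children) restricts the skeleton; stage 2
# (hill-climbing) orients edges within it.
model = MmhcEstimator(data).estimate()
print(model.edges())
```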

Keywords: travel app, behavior change, persuasive technology, travel information, causality

Procedia PDF Downloads 117