Search results for: time prediction algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20471

13151 Co-Culture with Murine Stromal Cells Enhances the In-vitro Expansion of Hematopoietic Stem Cells in Response to Low Concentrations of Trans-Resveratrol

Authors: Mariyah Poonawala, Selvan Ravindran, Anuradha Vaidya

Abstract:

Despite much progress in understanding the regulatory factors and cytokines that support the maturation of the various cell lineages of the hematopoietic system, the factors that govern the self-renewal and proliferation of hematopoietic stem cells (HSCs) remain a grey area of research. Hematopoietic stem cell transplantation (HSCT) has evolved over the years and gained tremendous importance in the treatment of both malignant and non-malignant diseases. However, factors such as graft rejection and multiple organ failure have repeatedly challenged HSCT, underscoring the urgent need to develop milder processes for successful hematopoietic transplantation. An emerging concept in the field of stem cell biology states that the interactions between the bone-marrow micro-environment and the hematopoietic stem and progenitor cells are essential for the regulation, maintenance, commitment and proliferation of stem cells. Understanding the role of mesenchymal stromal cells in modulating the functionality of HSCs is, therefore, an important area of research. Trans-resveratrol has been extensively studied for its ability to combat and prevent diseases such as cancer, diabetes and cardiovascular disease. The aim of the present study was to understand the effect of trans-resveratrol on HSCs using single- and co-culture systems. We used KG1a cells, a well-accepted hematopoietic stem cell model system. Our preliminary experiments showed that low concentrations of trans-resveratrol stimulated the HSCs to proliferate, whereas high concentrations did not. We used a murine fibroblast cell line, M210B4, as a stromal feeder layer. On culturing the KG1a cells with M210B4 cells, we observed that both the stimulatory and inhibitory effects of trans-resveratrol, at low and high concentrations respectively, were enhanced.
Our further experiments showed that low concentrations of trans-resveratrol reduced the generation of reactive oxygen species (ROS) and nitric oxide (NO), whereas high concentrations increased the oxidative stress in KG1a cells. We speculated that the oxidative stress was imposing the inhibitory effects at high concentration, and this was confirmed by an apoptotic assay. Furthermore, cell cycle analysis and growth kinetic experiments provided evidence that a low concentration of trans-resveratrol reduced the doubling time of the cells. Our hypothesis is that at low concentrations of trans-resveratrol the cells are pushed into the G0/G1 phase and re-enter the cell cycle, resulting in their proliferation, whereas at high concentrations the cells are arrested at the G2/M phase or at cytokinesis and therefore undergo apoptosis. Liquid Chromatography-Quadrupole-Time-of-Flight Mass Spectrometry (LC-Q-TOF MS) analyses indicated the presence of trans-resveratrol and its metabolite(s) in the supernatant of the co-cultured cells incubated with a high concentration of trans-resveratrol. We conjecture that the metabolites of trans-resveratrol are responsible for the apoptosis observed at the high concentration. Our findings may shed light on unsolved problems in the in vitro expansion of stem cells and may have implications for the ex vivo manipulation of HSCs for therapeutic purposes.
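The growth-kinetics claim above (a reduced doubling time at low trans-resveratrol concentration) rests on a standard calculation. A minimal sketch, assuming exponential growth between two cell counts; the counts and interval below are hypothetical, not the study's data:

```python
import math

def doubling_time(n0, n1, hours):
    """Estimate population doubling time (h) from two cell counts,
    assuming exponential growth: N(t) = N0 * 2**(t / Td)."""
    if n1 <= n0:
        raise ValueError("no net growth between the two counts")
    return hours * math.log(2) / math.log(n1 / n0)

# Hypothetical counts: 1e5 cells growing to 4e5 cells over 48 h
td = doubling_time(1e5, 4e5, 48)
print(f"doubling time = {td:.1f} h")
```

A lower doubling time computed this way, at low versus high concentration, is what the cell-count comparison in such growth-kinetics experiments quantifies.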

Keywords: co-culture system, hematopoietic micro-environment, KG1a cell line, M210B4 cell line, trans-resveratrol

Procedia PDF Downloads 240
13150 A Resource-Based Perspective on Job Crafting Consequences: An Empirical Study from China

Authors: Eko Liao, Cheryl Zhang

Abstract:

Employee job crafting refers to employees' proactive behaviors of making customized changes to their jobs on the cognitive, relationship, and task levels. Previous studies have investigated the situations that trigger employee job crafting. However, much less is known about the consequences for both employees themselves and their work groups. Guided by conservation of resources (COR) theory, this study investigates how employee job crafting increases objective task performance and promotive voice behaviors at work. It is argued that employees gain more resources when they actively craft their job tasks, which in turn increases their job performance and encourages more constructive speak-up behaviors. Specifically, employees' psychological resources (i.e., job engagement) and relational resources (i.e., leader-member relationships) are enhanced by effective crafting behaviors, because employees are more likely to regard their job tasks as meaningful, and their leaders are more likely to notice and recognize their dedication at work when employees craft their jobs frequently. To test this research model, around 400 employees from various organizations in mainland China joined a two-wave data collection. Employees' job crafting behaviors in the three aspects were measured at time 1. Perceptions of resource gain (job engagement and leader-member exchange), voice, and job performance were measured at time 2. The research model was generally supported. This study contributes to the job crafting literature by broadening the theoretical lens to a resource-based perspective. It also has the practical implication that organizations should pay more attention to employee crafting behaviors, because they are closely related to employees' in-role performance and constructive voice behaviors.

Keywords: job crafting, resource-based perspective, voice, job performance

Procedia PDF Downloads 153
13149 Study on the Addition of Solar Generating and Energy Storage Units to a Power Distribution System

Authors: T. Costa, D. Narvaez, K. Melo, M. Villalva

Abstract:

Installation of micro-generators based on renewable energy in power distribution systems has increased in recent years, with the main renewable sources being solar and wind. Due to the intermittent nature of renewable energy sources, such micro-generators produce time-varying energy that does not correspond, at certain times of the day, to the peak energy consumption of end users. For this reason, the use of energy storage units next to the grid contributes to the proper leveling of the buses' voltage according to Brazilian energy quality standards. In this work, the effect of adding a photovoltaic solar generator and an energy storage unit on the busbar voltages of an electric system is analyzed. The consumption profile is defined as the average hourly use of appliances in a common residence, and the generation profile is defined as a function of the solar irradiation available at a locality. The power summation method is validated against analytical calculation and is used to calculate the magnitudes and angles of the voltages at the buses of an electrical system based on the IEEE standard, at each hour of the day and with defined load and generation profiles. The results show that bus 5 presents the worst voltage level at the power consumption peaks and stabilizes within the appropriate range with the inclusion of the energy storage unit during the night-time period. The solar generator improves the voltage level during the period when it receives solar irradiation, with production peaking around 12 pm (without exceeding the appropriate maximum voltage levels).
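The leveling idea described above can be illustrated numerically. This is a simplified sketch, not the paper's power summation method: the hourly load and PV profiles and the storage dispatch rule are hypothetical, and voltages are not computed, only the net demand the grid sees:

```python
# Hourly residential load and PV generation profiles (kW, hypothetical),
# illustrating how storage discharge flattens the evening demand peak that
# the abstract associates with the worst bus-voltage levels.
load = [0.4] * 6 + [0.8, 1.0, 0.7, 0.6, 0.6, 0.7, 0.8, 0.7, 0.6, 0.6,
                    0.8, 1.2, 2.0, 2.2, 1.8, 1.4, 0.9, 0.5]   # 24 hours
pv = [0.0] * 6 + [0.1, 0.4, 0.9, 1.4, 1.8, 2.0, 2.0, 1.8, 1.4, 0.9,
                  0.4, 0.1] + [0.0] * 6

net = [l - g for l, g in zip(load, pv)]   # demand seen by the grid (kW)

def apply_storage(net, capacity_kwh=3.0, max_kw=1.0, peak_kw=1.0):
    """Charge from the midday PV surplus, discharge above the evening peak."""
    soc, out = 0.0, []
    for p in net:
        if p < 0 and soc < capacity_kwh:          # PV surplus: charge
            c = min(-p, max_kw, capacity_kwh - soc)
            soc += c
            p += c
        elif p > peak_kw and soc > 0:             # demand peak: discharge
            d = min(p - peak_kw, max_kw, soc)
            soc -= d
            p -= d
        out.append(p)
    return out

leveled = apply_storage(net)
print(max(net), max(leveled))   # peak grid demand before vs after storage
```

A lower peak net demand translates into a smaller voltage drop along the feeder, which is the mechanism behind the night-time stabilization the abstract reports for bus 5.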

Keywords: energy storage, power distribution system, solar generator, voltage level

Procedia PDF Downloads 128
13148 Reimaging Archetype of Mosque: A Case Study on Contemporary Mosque Architecture in Bangladesh

Authors: Sabrina Rahman

Abstract:

The mosque is Islam's most symbolic structure, as well as an expression of collective identity. From the explicit words of the Prophet, 'The earth has been created for me as a masjid and a place of purity, and whatever man from my Ummah finds himself in need of prayer, let him pray' (anywhere), it is obvious that a devout Muslim does not require a defined space or structure for divine worship, since the whole earth is his prayer house. Yet we see that, from time immemorial, man throughout the Muslim world has painstakingly erected innumerable mosques. Mosque design spans time, crosses boundaries, and expresses cultures. It is a cultural manifestation as much as one based on a regional building tradition or a certain interpretation of religion. The trend of expressing physical signs of religion is not new; physical forms seem to convey symbolic messages. In recent times, however, these physical forms are steadily disappearing from mosque architecture projects in Bangladesh. The dome and minaret, the most prominent symbols of the mosque, are being replaced by contextual and contemporary improvisation rather than following the subcontinental mosque architecture practice of earlier generations. Thus, mosque projects of the last 15 years have established a contemporary architectural realm in their design. Contextually, spiritual lighting, serenity of space, tranquility of outdoor spaces, and the texture of materials are widely establishing a new genre of Muslim prayer space. Case-study-based research leads to the specification of the significant factors of this modernism. Based on the findings, the paper presents evidence from recent projects as well as a guideline for the future image of contemporary mosque architecture in Bangladesh.

Keywords: contemporary architecture, modernism, prayer space, symbolism

Procedia PDF Downloads 110
13147 Investigating the Characteristics of Correlated Parking-Charging Behaviors for Electric Vehicles: A Data-Driven Approach

Authors: Xizhen Zhou, Yanjie Ji

Abstract:

In advancing the management of integrated electric vehicle (EV) parking-charging behaviors, this study uses Changshu City in Suzhou as a case study to establish a data association mechanism for parking-charging platforms and to develop a database of EV parking-charging behaviors. Key indicators, such as charging start time, initial state of charge, final state of charge, and parking-charging time difference, are considered. Utilizing the K-S test method, the paper examines the heterogeneity of parking-charging behavior preferences between pure-EV and non-pure-EV users. The K-means clustering method is employed to analyze the characteristics of parking-charging behaviors for both user groups, thereby enhancing the overall understanding of these behaviors. The findings reveal that, using a classification model, the parking-charging behaviors of pure EVs can be classified into five distinct groups, while those of non-pure EVs can be separated into four groups. Both types of EV users exhibit groups with low range anxiety: complete charging with specific journeys, complete charging at the destination, and partial charging. Additionally, both types have a group with high range anxiety, with pure-EV users displaying a preference for complete charging with specific journeys, while non-pure-EV users exhibit a preference for complete charging. Notably, pure-EV users also display a significant group engaging in nocturnal complete charging. The findings of this study can provide technical support for the scientific and rational layout and management of integrated parking and charging facilities for EVs.
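The clustering step named above can be sketched with a plain K-means (Lloyd's algorithm) on synthetic sessions built from the abstract's indicators. Both behavior groups and all numbers are hypothetical, and the deterministic initialization is a simplification of standard practice:

```python
import random

# Hypothetical feature vectors per charging session:
# [charging start hour, initial SOC (%), final SOC (%), parking-charging gap (h)]
random.seed(0)
night_full = [[random.gauss(22, 1), random.gauss(25, 5),
               random.gauss(95, 3), random.gauss(8, 1)] for _ in range(30)]
midday_partial = [[random.gauss(12, 1), random.gauss(55, 5),
                   random.gauss(75, 3), random.gauss(1, 0.3)] for _ in range(30)]
sessions = night_full + midday_partial

def kmeans(points, k, iters=50):
    """Plain Lloyd's algorithm; centers seeded from points spread over the list."""
    centers = [points[i * (len(points) - 1) // (k - 1)][:] for i in range(k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans(sessions, k=2)
print(sorted(round(c[0], 1) for c in centers))   # centroid charging start hours
```

On real session data the cluster count (five for pure EVs, four for non-pure EVs in the abstract) would be chosen with a validity index rather than fixed at two as here.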

Keywords: traffic engineering, potential preferences, cluster analysis, EV, parking-charging behavior

Procedia PDF Downloads 60
13146 Simulation of Dynamic Behavior of Seismic Isolators Using a Parallel Elasto-Plastic Model

Authors: Nicolò Vaiana, Giorgio Serino

Abstract:

In this paper, a one-dimensional (1d) Parallel Elasto-Plastic Model (PEPM), able to simulate the uniaxial dynamic behavior of seismic isolators having a continuously decreasing tangent stiffness with increasing displacement, is presented. The parallel modeling concept is applied to discretize the continuously decreasing tangent stiffness function, thus allowing the dynamic behavior of seismic isolation bearings to be simulated by putting linear elastic and nonlinear elastic-perfectly plastic elements in parallel. The mathematical model has been validated by comparing the experimental force-displacement hysteresis loops, obtained by testing a helical wire rope isolator and a recycled rubber-fiber reinforced bearing, with those predicted numerically. Good agreement between the simulated and experimental results shows that the proposed model can be an effective numerical tool to predict the force-displacement relationship of seismic isolators within relatively large displacements. Compared to the widely used Bouc-Wen model, the proposed model avoids the numerical solution of a first-order nonlinear ordinary differential equation at each time step of a nonlinear time history analysis, thus reducing the computational effort, and requires the evaluation of only three model parameters from experimental tests, namely the initial tangent stiffness, the asymptotic tangent stiffness, and a parameter defining the transition from the initial to the asymptotic tangent stiffness.
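The parallel modeling concept described above can be sketched in a few lines: a linear elastic spring in parallel with elastic-perfectly-plastic (EPP) elements, so the total tangent stiffness decreases in steps as elements yield. The stiffnesses and yield displacements below are illustrative, not the paper's identified parameters:

```python
def epp_force(u_history, k, uy):
    """Elastic-perfectly-plastic element: stiffness k, yield displacement uy.
    Returns the force at each step, tracking the plastic displacement."""
    up, forces = 0.0, []
    for u in u_history:
        f = k * (u - up)
        if abs(f) > k * uy:                  # yielding: cap the force
            f = k * uy * (1 if f > 0 else -1)
            up = u - f / k                   # update plastic displacement
        forces.append(f)
    return forces

def pepm_force(u_history, kb, epp_params):
    """Total force = linear spring (asymptotic stiffness kb) + parallel EPP elements."""
    epp = [epp_force(u_history, k, uy) for k, uy in epp_params]
    return [kb * u + sum(col) for u, col in zip(u_history, zip(*epp))]

# Quasi-static loading ramp to 20 mm, then unloading back to zero
path = [i * 0.001 for i in range(0, 21)] + [0.02 - i * 0.001 for i in range(1, 21)]
f = pepm_force(path, kb=500.0, epp_params=[(3000.0, 0.002), (1500.0, 0.006)])
print(round(f[20], 3), round(f[-1], 3))   # peak force and residual force at u = 0
```

On the loading ramp the tangent stiffness drops from kb plus the sum of the element stiffnesses down to kb as the EPP elements yield in turn; on unloading the capped element forces produce the softening hysteresis loop the abstract refers to, with no differential equation to integrate.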

Keywords: base isolation, earthquake engineering, parallel elasto-plastic model, seismic isolators, softening hysteresis loops

Procedia PDF Downloads 265
13145 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language

Authors: Tengku Sepora Tengku Mahadi

Abstract:

Where the speed of book writing lags behind the high demand for such material in tertiary studies, translation offers a way to restore the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and its complexity. A text that is intricately written, with unique rhetorical devices, a subject-matter foundation and cultural references, will undoubtedly challenge the translator; longer time and greater effort would be the consequence. To understand these text-related challenges, the present paper sets out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book was chosen because it has often been used as a textbook or reference in many law courses in the United Kingdom and has seen over thirteen editions; therefore, it can be considered a worthy subject for studies in law. Another reason is the existence of a ready translation in Malay. Reference to this translation enables confirmation, to some extent, of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators to prepare themselves better for the task: they can anticipate the research and time that may be needed to produce an effective translation. A further premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language suggested by Michael Halliday as the theoretical framework. The concepts of the context of culture and the context of situation, and the measures of field, tenor and mode, form the instruments for analysis. Additional examples from similar materials can also be used to validate the findings.
Some interesting findings include the presence of several other text-types or sub-text-types in the book, and its dependence on literary discourse and devices to capture meanings better or add color to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods of time and languages. Also found are parts which discuss the origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis, in terms of its functions and the linguistic and textual devices used to achieve them, can be applied as a guide to determine the effectiveness of the translation that is produced.

Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture

Procedia PDF Downloads 132
13144 Control of Doxorubicin Release Rate from Magnetic PLGA Nanoparticles Using a Non-Permanent Magnetic Field

Authors: Inês N. Peça, A. Bicho, Rui Gardner, M. Margarida Cardoso

Abstract:

Inorganic/organic nanocomplexes offer tremendous scope for future biomedical applications, including imaging, disease diagnosis and drug delivery. The combination of Fe3O4 with biocompatible polymers to produce smart drug delivery systems for pharmaceutical formulations presents a powerful tool to target anti-cancer drugs to specific tumor sites through the application of an external magnetic field. In the present study, we focused on evaluating the effect of the magnetic field application time on the rate of drug release from iron oxide polymeric nanoparticles. Doxorubicin, an anticancer drug, was selected as the model drug loaded into the nanoparticles. Nanoparticles composed of poly(d-lactide-co-glycolide) (PLGA), a biocompatible polymer already approved by the FDA, containing iron oxide nanoparticles (MNP) for magnetic targeting and doxorubicin (DOX), were synthesized by the o/w solvent extraction/evaporation method and characterized by scanning electron microscopy (SEM), dynamic light scattering (DLS), inductively coupled plasma-atomic emission spectrometry and Fourier transform infrared spectroscopy. The produced particles exhibited smooth surfaces and spherical shapes, with sizes between 400 and 600 nm. The effect of the magnetic doxorubicin-loaded PLGA nanoparticles on cell viability was investigated in mammalian CHO cell cultures. The results showed that unloaded magnetic PLGA nanoparticles were nontoxic, while the magnetic particles without polymeric coating showed a high level of toxicity. Concerning therapeutic activity, doxorubicin-loaded magnetic particles caused a remarkable enhancement of the cell inhibition rates compared to their non-magnetic counterparts. In vitro drug release studies performed under a non-permanent magnetic field show that the application time and the on/off cycle duration greatly influence both the final amount and the rate of drug release.
In order to determine the mechanism of drug release, the data obtained from the release curves were fitted to the semi-empirical equation of the Korsmeyer-Peppas model, which may be used to describe both Fickian and non-Fickian release behavior. The doxorubicin release mechanism was shown to be governed mainly by Fickian diffusion. The results obtained show that the rate of drug release from the produced magnetic nanoparticles can be modulated through the timing of the magnetic field application.
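The Korsmeyer-Peppas fit mentioned above reduces to a linear regression on log-log axes, since Mt/Minf = k * t^n gives log(Mt/Minf) = log k + n * log t. A sketch with hypothetical release data (not the study's measurements); for spheres, n at or below about 0.43 is commonly read as Fickian diffusion:

```python
import math

# Hypothetical cumulative release fractions (Mt/Minf) at times t (h)
t = [1, 2, 4, 8, 16]
frac = [0.10, 0.135, 0.18, 0.24, 0.33]

def fit_korsmeyer_peppas(t, frac):
    """Least-squares fit of log(Mt/Minf) = log(k) + n * log(t)."""
    x = [math.log(ti) for ti in t]
    y = [math.log(f) for f in frac]
    n_pts = len(x)
    xm, ym = sum(x) / n_pts, sum(y) / n_pts
    n = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))
    k = math.exp(ym - n * xm)
    return k, n

k, n = fit_korsmeyer_peppas(t, frac)
print(f"k={k:.3f}, n={n:.2f}")   # n <= ~0.43 for spheres suggests Fickian release
```

The fit is valid only over the early portion of the release curve (commonly Mt/Minf below about 0.6), which is also how the model is normally applied.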

Keywords: drug delivery, magnetic nanoparticles, PLGA nanoparticles, controlled release rate

Procedia PDF Downloads 247
13143 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded for transmission in a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4x4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4x4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have later been encoded and handled to generate chirps at a target rate of about two chirps per 4x4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
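As an illustration of the block-approximation idea, here is a sketch using the simplest polynomial model, a plane fitted per 4x4 block (3 coefficients instead of 16), with PSNR as the quality measure. The block values are hypothetical, and this is far cruder than the Chebyshev or SVD approximations evaluated in the paper:

```python
import math

# One hypothetical 4x4 block of 8-bit gray levels
block = [
    [ 52,  55,  61,  66],
    [ 70,  61,  64,  73],
    [ 63,  59,  55,  90],
    [ 67,  61,  68, 104],
]

def plane_fit(block):
    """Least-squares fit f(x, y) ~ a + b*(x-1.5) + c*(y-1.5) on a 4x4 grid.
    Centring the coordinates makes the basis orthogonal, so each coefficient
    is computed independently (sum of centred squares over the grid is 20)."""
    pts = [(x - 1.5, y - 1.5, block[y][x]) for y in range(4) for x in range(4)]
    a = sum(f for _, _, f in pts) / 16
    b = sum(f * xc for xc, _, f in pts) / 20
    c = sum(f * yc for _, yc, f in pts) / 20
    return a, b, c

def psnr(block, approx, peak=255.0):
    """PSNR in dB between an original block and its approximation."""
    mse = sum((block[y][x] - approx[y][x]) ** 2
              for y in range(4) for x in range(4)) / 16
    return 10 * math.log10(peak ** 2 / mse)

a, b, c = plane_fit(block)
approx = [[a + b * (x - 1.5) + c * (y - 1.5) for x in range(4)] for y in range(4)]
print(f"a={a:.2f} b={b:.2f} c={c:.2f}, PSNR={psnr(block, approx):.1f} dB")
```

Transmitting 3 coefficients instead of 16 pixel values is the compression; the PSNR of the reconstruction quantifies what the lost 13 coefficients cost, which is exactly the trade-off the paper measures for its richer bases.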

Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation

Procedia PDF Downloads 401
13142 Unveiling the Self-Assembly Behavior and Salt-Induced Morphological Transition of Double PEG-Tailed Unconventional Amphiphiles

Authors: Rita Ghosh, Joykrishna Dey

Abstract:

PEG-based amphiphiles are of tremendous importance for their widespread applications in pharmaceutics, household products, and drug delivery. Previously, a number of single PEG-tailed amphiphiles having significant applications have been reported by our group. It was therefore of immense interest to explore the properties and application potential of PEG-based double-tailed amphiphiles. Herein, for the first time, two novel double PEG-tailed amphiphiles having different PEG chain lengths have been developed. The self-assembly behavior of the newly developed amphiphiles in aqueous buffer (pH 7.0) was thoroughly investigated at 25 °C by a number of techniques, including 1H-NMR, steady-state and time-dependent fluorescence spectroscopy, dynamic light scattering, transmission electron microscopy, atomic force microscopy, and isothermal titration calorimetry. Despite having two polar PEG chains, both molecules were found to have a strong tendency to self-assemble in aqueous buffered solution above a very low concentration. Surprisingly, the amphiphiles were shown to form stable vesicles spontaneously at room temperature without any external stimuli. The results of calorimetric measurements showed that the vesicle formation is driven by the hydrophobic effect (a positive entropy change) of the system, which is associated with the helix-to-random-coil transition of the PEG chain. The spectroscopic data confirmed that the bilayer membrane of the vesicles is constituted by the PEG chains of the amphiphilic molecules. Interestingly, the vesicles were also found to exhibit structural transitions upon addition of salts to the solution. These properties make the vesicles potential candidates for drug delivery.

Keywords: double-tailed amphiphiles, fluorescence, microscopy, PEG, vesicles

Procedia PDF Downloads 110
13141 A Bayesian Parameter Identification Method for Thermorheologically Complex Materials

Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider

Abstract:

Polymers have increasingly gained interest as construction materials in civil engineering applications over the last years. Polymeric materials typically show time- and temperature-dependent material behavior, which is accounted for in the context of the theory of linear viscoelasticity. Within this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a simple time-temperature superposition principle (TTSP); thus a methodology for identifying the thermorheologically complex constitutive behavior is needed. 'Dynamical-Mechanical-Thermal-Analysis' (DMTA) tests in tensile and shear mode, as well as 'Differential Scanning Calorimetry' (DSC) tests, are carried out on the interlayer material 'Ethylene-vinyl acetate' (EVA). A novel Bayesian framework for the master curving process, as well as for the detection and parameter identification of the TTSPs along with their associated Prony series, is derived and applied to the EVA material data. To the best of our knowledge, this is the first time an uncertainty quantification of the Prony series in a Bayesian context has been shown. Within this paper, we successfully applied the derived Bayesian methodology to the EVA material data to obtain meaningful master curves and TTSPs; the uncertainties occurring in this process can be well quantified. We found that EVA needs two TTSPs with two associated Generalized Maxwell Models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.
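The building blocks named above (a Prony series for the generalized Maxwell model and a TTSP shift factor) can be sketched as follows. The WLF form and all constants are illustrative assumptions, not the identified EVA parameters:

```python
import math

def prony_modulus(t, e_inf, terms):
    """Relaxation modulus of a generalized Maxwell model as a Prony series:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in terms)

def wlf_shift(temp, t_ref=20.0, c1=17.4, c2=51.6):
    """WLF-type shift factor log10(aT); for a thermorheologically simple
    material, data at temp map onto the reference curve at reduced time t/aT."""
    return -c1 * (temp - t_ref) / (c2 + temp - t_ref)

terms = [(5.0, 0.1), (2.0, 10.0)]   # (E_i in MPa, tau_i in s), illustrative
e_inf = 1.0
a_t = 10 ** wlf_shift(30.0)         # shift a 30 C measurement to the 20 C curve
print(prony_modulus(1.0, e_inf, terms), a_t)
```

A thermorheologically complex material, as the paper defines EVA, is one where no single shift function of this kind collapses all the data, hence the two TTSPs and two associated Prony series identified above; the Bayesian step places posterior distributions over the E_i, tau_i and shift parameters rather than point estimates.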

Keywords: bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complex

Procedia PDF Downloads 249
13140 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, and especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are as follows.
Ultrasound: automated segmentation of cardiac chambers across five common views, with consequent quantification of chamber volumes/mass, ejection fraction and longitudinal strain through speckle tracking; determination of the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identification of myocardial infarction; distinguishing between athlete's heart and hypertrophic cardiomyopathy, as well as between restrictive cardiomyopathy and constrictive pericarditis; prediction of all-cause mortality.
CT: reduction of radiation doses; calculation of the calcium score; diagnosis of coronary artery disease (CAD); prediction of all-cause 5-year mortality; prediction of major cardiovascular events in patients with suspected CAD.
MRI: segmentation of cardiac structures and infarct tissue; calculation of cardiac mass and function parameters; distinguishing between patients with myocardial infarction and control subjects, which could potentially reduce costs by precluding the need for gadolinium-enhanced CMR; prediction of 4-year survival in patients with pulmonary hypertension.
Nuclear imaging: classification of normal and abnormal myocardium in CAD; detection of locations with abnormal myocardium; prediction of cardiac death. ML was comparable to or better than two experienced readers in predicting the need for revascularization.
AI emerges as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 62
13139 Application of Medical Information System for Image-Based Second Opinion Consultations – Georgian Experience

Authors: Kldiashvili Ekaterina, Burduli Archil, Ghortlishvili Gocha

Abstract:

Introduction: Medical information systems (MIS) are at the heart of information technology (IT) implementation policies in healthcare systems around the world, and different architectures and application models of MIS have been developed. Despite obvious advantages and benefits, application of MIS in everyday practice is slow. Objective: Based on an analysis of the existing models of MIS in Georgia, a multi-user web-based approach has been created. This presentation will describe the architecture of the system and its application for image-based second opinion consultations. Methods: The MIS has been created with .Net technology and an SQL database architecture. It provides local (intranet) and remote (internet) access to the system and management of its databases. The MIS is a fully operational system, which is successfully used for medical data registration and management as well as for the creation, editing and maintenance of electronic medical records (EMR). Five hundred Georgian-language electronic medical records from the cervical screening activity, illustrated by images, were selected for second opinion consultations. Results: The primary goal of the MIS is patient management; however, the system can be successfully applied for image-based second opinion consultations. Discussion: The ideal of healthcare in the information age must be to create a situation where healthcare professionals spend more time creating knowledge from medical information and less time managing medical information. The application of easily available and adaptable technology, together with improvement of the infrastructure conditions, is the basis for eHealth applications. Conclusion: The MIS is a promising and relevant technology solution that can be used successfully and effectively for image-based second opinion consultations.

Keywords: digital images, medical information system, second opinion consultations, electronic medical record

Procedia PDF Downloads 434
13138 Ureteral Stents with Extraction Strings: Patient-Reported Outcomes

Authors: Rammah Abdlbagi, Similoluwa Biyi, Aakash Pai

Abstract:

Introduction: Short-term ureteric stents are commonly placed after ureteroscopy procedures. Their removal usually requires flexible cystoscopy, a further invasive procedure, and there are often delays in removing the stent because departments have limited cystoscopy availability. However, if stents with extraction strings are used, the patient or a clinician can remove them. The aim of this study is to assess the safety and effectiveness of the use of a stent with a string. Method: A retrospective, single-institution study was conducted over a three-month period. Twenty consecutive patients had ureteric stents with extraction strings inserted. Ten of the patients had previously had a stent removed by flexible cystoscopy. A validated questionnaire was used to assess outcomes. Primary outcomes included dysuria, hematuria, urinary frequency, and disturbance of the patient's daily activities. Secondary outcomes included pain experienced during stent removal. Results: Fifteen patients (75%) experienced hematuria and frequency. Two patients (10%) experienced pain and discomfort during stent removal. Two patients (10%) experienced a disturbance in their daily activities. All patients who had previously undergone stent removal by flexible cystoscopy preferred removal of the stent using a string. None of the patients had stent displacement. The median stent dwell time was five days. Conclusion: Patient-reported outcome measures for the indwelling period of a stent with an extraction string are equivalent to the published data on conventional stents. Extraction strings mean that the stent dwell time can be reduced, and removal of a stent on an extraction string is better tolerated than removal of a conventional stent.

Keywords: ureteric stent, extraction string, flexible cystoscopy, stent symptoms, validated questionnaire

Procedia PDF Downloads 78
13137 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service

Authors: Lai Wenfang

Abstract:

This study describes how artificial intelligence (AI) technology can be used to build a user-oriented platform for integrated archival services. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progression of information and communication technology (ICT), the NAA has built many systems to provide archival services. In order to cope with new challenges, such as new ICT, artificial intelligence, and blockchain, the NAA will use natural language processing (NLP) and machine learning (ML) techniques to build a training model and propose suggestions based on the data sent to the platform. The NAA expects that the platform will not only automatically inform the sending agencies' staff which record catalogues are against the transfer or destruction rules, but also use the model to find details hidden in the catalogues and suggest to NAA staff whether or not the records should be retained, shortening the auditing time. The platform keeps all users' browsing trails, so that it can predict what kinds of archives a user might be interested in, recommend search terms through visualization, and inform users of newly arrived archives. In addition, according to the Archives Act, NAA staff must spend a lot of time marking or removing personal data, classified data, etc., before archives are provided. To upgrade the archives access service process, the platform will use text recognition patterns to black out such data automatically; the staff only need to correct any errors and upload the corrected version, and as the platform learns, its accuracy will increase. In short, the purpose of the platform is to advance the government's digital transformation and implement the vision of a service-oriented smart government.
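The automatic blacking-out step described above can be illustrated with a pattern-based pass. This is a sketch only: the platform uses trained text recognition, and the ID and phone patterns below are simplified assumptions for illustration, not the NAA's actual redaction rules.

```python
import re

# Simplified, assumed patterns for two kinds of personal data
PATTERNS = [
    re.compile(r"\b[A-Z][12]\d{8}\b"),   # Taiwan national ID (letter + 9 digits)
    re.compile(r"\b09\d{2}-?\d{6}\b"),   # mobile phone number
]

def redact(text, mask="█"):
    """Replace each match with a same-length run of the mask character,
    so a reviewer can still see where and how much was blacked out."""
    for pat in PATTERNS:
        text = pat.sub(lambda m: mask * len(m.group()), text)
    return text

sample = "Applicant A123456789, contact 0912-345678, requests file 1051234."
print(redact(sample))
```

In the workflow sketched by the abstract, the staff's corrections to such automatic output would feed back as training data, which is where the learned model improves on fixed patterns like these.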

Keywords: artificial intelligence, natural language processing, machine learning, visualization

Procedia PDF Downloads 156
13136 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance of the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in WiseTex for varying architectures of binder style, pick density, thickness and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from dry fabric compression and composite samples were then compared with the WiseTex models to determine the accuracy of the predictions and to identify architecture parameters that can affect preform compressibility and stability.
Results indicate that binder style/pick density, tow size and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in fibre crimp of the binder. In general, the simulations compared reasonably well with the experimental results; however, some deviation is evident due to assumptions present within the modelled results.

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 122
13135 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV

Authors: Maria Pavlova

Abstract:

Nowadays, it is possible to mount a camera on different vehicles such as quadcopters, trains, and airplanes. The camera can also serve as the input sensor in many different systems, which means that object recognition, as an inseparable part of monitoring and control, can be a key part of most intelligent systems. The aim of this paper is to focus on the object recognition process during a vehicle’s movement. While the vehicle moves, the camera takes pictures of the environment without storing them in a database. If the camera detects a special object (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with few pedestrians; when one or more persons approach the road, the traffic lights turn green so that they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system includes the camera, a Raspberry platform, a GPS system, a neural network, software, and a database. The camera in the system takes the pictures. Object recognition is done in real time using the OpenCV library on the Raspberry microcontroller. An additional feature of the system is the ability to record the GPS coordinates of each captured object’s position. The results of these processes are sent to a remote station, so the location of the specific object is known. Using a neural network, the module can learn to solve problems from incoming data and become part of a bigger intelligent system. The present paper focuses on the design and integration of image recognition as part of smart systems.
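The capture-detect-transmit loop described above can be sketched as follows. This is a minimal simulation of the control flow only: `detect_object` is a hypothetical stand-in for an OpenCV detector (e.g. a HOG person detector), and the "saved and sent" record stands in for the real-time transmission to the workstation; neither is the authors' actual implementation.

```python
# Minimal sketch of the monitoring loop: frames are inspected in real time,
# and only frames containing an object of interest are kept and tagged with
# the current GPS position; all other frames are discarded without storage.

def detect_object(frame):
    """Stub detector: flags frames labelled as containing a person or animal.
    In the real system this would be an OpenCV detection call."""
    return frame.get("label") in {"human", "animal"}

def monitoring_loop(frames, gps_fix):
    saved = []
    for frame in frames:                  # pictures taken during movement
        if detect_object(frame):          # special object found
            record = {"frame": frame["id"], "gps": gps_fix}
            saved.append(record)          # saved and sent to the station
    return saved

frames = [{"id": 1, "label": "road"},
          {"id": 2, "label": "human"},
          {"id": 3, "label": "tree"}]
alerts = monitoring_loop(frames, gps_fix=(42.698, 23.322))
```

Only the frame with the detected person produces an alert record, mirroring the paper's idea that ordinary frames are never written to the database.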

Keywords: camera, object recognition, OpenCV, Raspberry

Procedia PDF Downloads 208
13134 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. 
It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.
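As one concrete bridge from theory to practice, the interplay of optimization and statistical modeling highlighted above can be illustrated by fitting a least-squares regression with gradient descent. This toy sketch is not tied to any case study in the abstract; the data and learning rate are purely illustrative.

```python
# Toy example: fit y = w*x + b by minimizing mean squared error with
# gradient descent, i.e. an optimization algorithm driving a statistical model.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]           # generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.02                # initial parameters, learning rate
for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b
```

After convergence, `w` and `b` recover the generating parameters (2 and 1), the same pattern that underlies many of the machine learning algorithms the abstract discusses at larger scale.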

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 75
13133 MRI R2* of Liver in an Animal Model

Authors: Chiung-Yun Chang, Po-Chou Chen, Jiun-Shiang Tzeng, Ka-Wai Mac, Chia-Chi Hsiao, Jo-Chi Jao

Abstract:

This study aimed to measure R2* relaxation rates in the liver of New Zealand White (NZW) rabbits. The R2* relaxation rate has been widely used in various hepatic diseases involving iron overload to quantify the iron content of the liver. The R2* relaxation rate is defined as the reciprocal of the T2* relaxation time and depends mainly on the composition of the tissue; different tissues have different R2* relaxation rates. The signal intensity decay in magnetic resonance imaging (MRI) may be characterized by R2* relaxation rates. In this study, a 1.5T GE Signa HDxt whole-body MR scanner equipped with an 8-channel high-resolution knee coil was used to observe R2* values in NZW rabbit liver and muscle. Eight healthy NZW rabbits weighing 2 to 2.5 kg were recruited. After anesthesia using a mixture of Zoletil 50 and Rompun 2%, the abdomen of the rabbit was landmarked at the center of the knee coil to perform a 3-plane localizer scan using a fast spoiled gradient echo (FSPGR) pulse sequence. Afterward, multi-planar fast gradient echo (MFGR) scans were performed with 8 echo times (TEs) (2/4/6/8/10/12/14/16 ms) to acquire images for R2* calculations. Regions of interest (ROIs) in liver and muscle were measured using an Advantage workstation. Finally, R2* was obtained by a linear regression of ln(SI) on TE. The results showed that the longer the echo time, the smaller the signal intensity. The R2* values of liver and muscle were 44.8 ± 10.9 s⁻¹ and 37.4 ± 9.5 s⁻¹, respectively. This implies that the iron concentration of the liver is higher than that of muscle. In conclusion, R2* is correlated with the iron content of tissue. The correlation between R2* and iron content in NZW rabbits might be valuable for further exploration.
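The R2* estimation step described above (linear regression of ln(SI) on TE, with R2* as the negative slope) can be sketched as follows. The signal values are synthetic, generated for illustration from a mono-exponential decay with a known R2*; they are not the study's measurements.

```python
import math

# Synthetic mono-exponential decay SI = S0 * exp(-TE * R2*), using the eight
# echo times from the study (ms) and an assumed R2* of 44.8 s^-1 for liver.
TEs_ms = [2, 4, 6, 8, 10, 12, 14, 16]
true_r2_star = 44.8                       # s^-1 (= 0.0448 ms^-1)
SI = [1000.0 * math.exp(-te * true_r2_star / 1000.0) for te in TEs_ms]

# Linear regression of ln(SI) on TE: ln(SI) = ln(S0) - TE * R2*,
# so the fitted slope equals -R2*.
x, y = TEs_ms, [math.log(s) for s in SI]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
r2_star = -slope * 1000.0                 # convert ms^-1 back to s^-1
```

With noiseless data the regression recovers the assumed 44.8 s⁻¹ exactly; in practice the fit is applied to mean ROI signal intensities at each TE.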

Keywords: liver, magnetic resonance imaging, muscle, R2* relaxation rate

Procedia PDF Downloads 419
13132 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations

Authors: Priyanka Bharti

Abstract:

Decades ago, designs were based on common sense and tradition, but with advances in visualization technology and research we are now able to comprehend the cognitive processes involved in decoding visual information. However, many areas of visual research still require intense study to deliver efficient explanations of these processes. Visuals are a mode of information representation through images, symbols and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension, and analysis of a situation, and they enhance problem-solving capabilities by enabling the processing of more data without overloading the decision maker. Research suggests that visuals offer an improved learning environment by a factor of 400 compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, which enables the user to process the information faster (visuals are processed 60,000 times faster in the brain than text). Appropriate information, its visualization, and its presentation are known to aid and intensify the decision-making process. However, most literature discusses the role of visual aids in comprehension and decision making during normal conditions alone. Unlike emergencies, in normal situations (e.g. our day-to-day lives) users are neither exposed to stringent time constraints nor face the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, possibly fatal, real-life situation that may inflict serious ramifications on both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario with little or no preparation, yet still take swift and appropriate decisions to save lives or possessions.
However, the resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, the naturalistic decision making of experts has been understood in far more depth than that of ordinary users. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension for appropriate decision making during an emergency situation.

Keywords: cognition, visual, decision making, graphics, recognition

Procedia PDF Downloads 258
13131 Effects of SNP in Semen Diluents on Motility, Viability and Lipid Peroxidation of Sperm of Bulls

Authors: Hamid Reza Khodaei, Behnaz Mahdavi, Alireza Banitaba

Abstract:

Nitric oxide (NO) plays an important role in all sexual activities of animals. It is produced in the body from the L-arginine molecule by the NO synthase enzyme. NO can bind to sulfur-iron complexes, and because the production of steroid sex hormones depends on enzymes containing such complexes, NO can change the activity of these enzymes. NO affects many cells, including the endothelial cells of vessels, macrophages and mast cells. These cells are found in the testis alongside Leydig cells and are therefore an important source of NO in testicular tissue. Minimizing damage to sperm during freezing and thawing is very important. The goal of this study was to determine the function of NO before freezing and its effects on the quality and viability of sperm after thawing and incubation. Four Holstein bulls, four years of age, were selected, and semen collection was performed for 3 weeks (2 times a week). Treatments were 0, 10, 50 and 100 nM of sodium nitroprusside (SNP). Data analysis was performed with the SAS98 program, and mean comparison was done using Duncan's multiple range test (P<0.05). The concentrations used were found to significantly increase the motility and viability of spermatozoa at 1, 2 and 3 hours after thawing (P<0.05), but there was no significant difference at time zero. SNP reduced the amount of lipid peroxidation in the sperm membrane and improved acrosome and membrane integrity, especially in the 50 and 100 nM treatments. According to these results, adding SNP to semen diluents increases the motility and viability of spermatozoa, reduces lipid peroxidation in the sperm membrane and improves sperm function.

Keywords: sperm motility, nitric oxide, lipid peroxidation, spermatozoa

Procedia PDF Downloads 639
13130 The Untreated Burden of Parkinson’s Disease: A Patient Perspective

Authors: John Acord, Ankita Batla, Kiran Khepar, Maude Schmidt, Charlotte Allen, Russ Bradford

Abstract:

Objectives: Despite the availability of treatment options, Parkinson’s disease (PD) continues to impact heavily on patients’ quality of life (QoL), as many symptoms that bother patients remain unexplored and untreated in clinical settings. The aims of this research were to understand the burden of PD symptoms from a patient perspective, particularly those which are the most persistent and debilitating, and to determine whether current treatments and treatment algorithms adequately focus on their resolution. Methods: A 13-question, online, patient-reported survey was created based on the MDS-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) and symptoms listed on Parkinson’s disease patient advocacy group websites, and then validated by 10 Parkinson’s patients. In the survey, patients were asked to choose both their most common and their most bothersome symptoms, whether they had received treatment for those, and, if so, whether it had been effective in resolving them. Results: The most bothersome symptoms reported by the 111 participants who completed the survey were sleep problems (61%), feeling tired (56%), slowness of movements (54%), and pain in some parts of the body (49%). However, while 86% of patients reported receiving dopamine or dopamine-like drugs to treat their PD, far fewer reported receiving targeted therapies for additional symptoms. For example, of the patients who reported having sleep problems, only 33% received some form of treatment for this symptom. This was also true for feeling tired (30% received treatment for this symptom), slowness of movements (62% received treatment for this symptom), and pain in some parts of the body (61% received treatment for this symptom). Additionally, 65% of patients reported that the symptoms they experienced were not adequately controlled by the treatments they received, and 9% reported that their current treatments had no effect on their symptoms whatsoever.
Conclusion: The survey outcomes highlight that the majority of patients involved in the study received treatment focused on their disease; however, symptom-based treatments were less well represented. Consequently, patient-reported symptoms such as sleep problems and feeling tired tended to receive more fragmented intervention than ‘classical’ PD symptoms, such as slowness of movement, even though they were reported as being amongst the most bothersome symptoms for patients. This research highlights the need to explore symptom burden from the patient’s perspective and to offer customised treatment and support for both motor and non-motor symptoms to maximize patients’ quality of life.

Keywords: survey, patient reported symptom burden, unmet needs, parkinson's disease

Procedia PDF Downloads 279
13129 The Reenactment of Historic Memory and the Ways to Read past Traces through Contemporary Architecture in European Urban Contexts: The Case Study of the Medieval Walls of Naples

Authors: Francesco Scarpati

Abstract:

Because of their long history, ranging from ancient times to the present day, European cities feature many historical layers, whose individual identities are represented by traces surviving in the urban design. However, urban transformations, in particular those produced by the property speculation of the 20th century, have often compromised the readability of these traces, resulting in a loss of the historical identities of the individual layers. The purpose of this research is, therefore, a reflection on the theme of the reenactment of historical memory in stratified European contexts and on how contemporary architecture can help to reveal the past signs of cities. The research work starts from an analysis of a series of emblematic examples that have already provided an original solution to the described problem, ranging from the architectural detail scale to the urban and landscape scale. The results of these analyses are then applied to the case study of the city of Naples, an emblematic example of a stratified city of ancient Greek origin, where it is possible to read most of the traces of its transformations. Particular consideration is given to the trace of the medieval walls of the city, which once clearly divided the city from the surrounding fields but is no longer readable today. Finally, solutions and methods of intervention are proposed to ensure that the trace of the walls, read as a boundary, can be revealed through the contemporary project.

Keywords: contemporary project, historic memory, historic urban contexts, medieval walls, naples, stratified cities, urban traces

Procedia PDF Downloads 250
13128 Analyzing the Contamination of Some Food Crops Due to Mineral Deposits in Ondo State, Nigeria

Authors: Alexander Chinyere Nwankpa, Nneka Ngozi Nwankpa

Abstract:

In Nigeria, the Federal government is trying to ensure that everyone has access to enough food that is nutritionally adequate and safe. However, the southwest of Nigeria, notably Ondo State, is abundant in valuable minerals such as oil and gas, bitumen, kaolin, limestone, talc, columbite, tin, gold, coal, and phosphate. As a result of this mineral presence, some regions of Ondo State are now associated with high levels of natural radioactivity. In this work, the baseline radioactivity levels in some of the most important food crops in Ondo State were analyzed, allowing for the prediction of probable radiological health impacts. To this effect, maize (Zea mays), yam (Dioscorea alata) and cassava (Manihot esculenta) tubers were collected from farmlands in the State, because they make up the majority of the population's nutritional intake. Ondo State was divided into eight zones in order to provide comprehensive coverage of the research region. The maize, yam, and cassava samples were dried at room temperature until they reached a constant weight. They were pulverized, homogenized, packed (250 g) in 1-liter Marinelli beakers and kept for 28 days to achieve secular equilibrium. The activity concentrations of radium-226 (Ra-226), thorium-232 (Th-232), and potassium-40 (K-40) in the food samples were determined using gamma-ray spectrometry. First, the hyper-pure germanium detector was calibrated using standard radioactive sources. The gamma counting, which lasted 36000 s for each sample, was carried out at the Centre for Energy Research and Development, Obafemi Awolowo University, Ile-Ife, Nigeria. The mean activity concentrations of Ra-226, Th-232 and K-40 for yam were 1.91 ± 0.10 Bq/kg, 2.34 ± 0.21 Bq/kg and 48.84 ± 3.14 Bq/kg, respectively. The content of the radionuclides in maize gave mean values of 2.83 ± 0.21 Bq/kg for Ra-226, 2.19 ± 0.07 Bq/kg for Th-232 and 41.11 ± 2.16 Bq/kg for K-40.
The mean activity concentrations in cassava were 2.52 ± 0.31 Bq/kg for Ra-226, 1.94 ± 0.21 Bq/kg for Th-232 and 45.12 ± 3.31 Bq/kg for K-40. The average committed effective doses in zones 6-8 were 0.55 µSv/y for the consumption of yam, 0.39 µSv/y for maize, and 0.49 µSv/y for cassava. These values are higher than the annual dose guideline of 0.35 µSv/y for the general public. Therefore, the values obtained in this work show that there is radiological contamination of some foodstuffs consumed in some parts of Ondo State. We recommend that systematic and appropriate methods be established for the measurement of gamma-emitting radionuclides, since these constitute important contributors to the internal exposure of man through ingestion, inhalation, or wounds on the body.
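The committed effective dose from ingestion quoted above is conventionally estimated as E = Σᵢ Cᵢ × I × eᵢ, where Cᵢ is the activity concentration of radionuclide i (Bq/kg), I the annual intake of the foodstuff (kg/y), and eᵢ the ingestion dose coefficient (Sv/Bq). The sketch below applies this formula to the yam concentrations from the study; the intake rate and dose coefficients are illustrative assumptions (the abstract does not report the values it used), so the result is not expected to reproduce the study's figures.

```python
# Committed effective dose from ingestion: E = sum_i C_i * I * e_i.
# Concentrations are the study's yam values; the intake rate and the
# ingestion dose coefficients are ASSUMED for illustration only.
C_yam = {"Ra-226": 1.91, "Th-232": 2.34, "K-40": 48.84}       # Bq/kg (study)
e_ing = {"Ra-226": 2.8e-7, "Th-232": 2.3e-7, "K-40": 6.2e-9}  # Sv/Bq (assumed)
intake_kg_per_y = 100.0                                        # kg/y (assumed)

dose_Sv = sum(C_yam[n] * intake_kg_per_y * e_ing[n] for n in C_yam)
dose_uSv = dose_Sv * 1e6   # convert Sv/y to µSv/y
```

Because every parameter except the concentrations is a placeholder, only the structure of the calculation, not the numerical output, should be compared with the doses reported in the abstract.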

Keywords: contamination, environment, radioactivity, radionuclides

Procedia PDF Downloads 83
13127 The Degree Project-Course in Swedish Teacher Education – Deliberative and Transformative Perspectives on the Formative Assessment Practice

Authors: Per Blomqvist

Abstract:

The overall aim of this study is to highlight how the degree project-course in teacher education has developed over time at Swedish universities, above all regarding changes in the formative assessment practices in relation to students' opportunities to take part in writing processes that can develop their independent critical thinking, subject knowledge, and academic writing skills. Theoretically, the study is based on deliberative and transformative perspectives on teaching academic writing in higher education. The deliberative perspective is motivated by the fact that it is the universities and their departments' responsibility to give students opportunities to develop their academic writing skills, while there is little guidance on how this can be implemented. The transformative perspective is motivated by the fact that education needs to be adapted to students' prior knowledge and developed in relation to the student group. Given the academisation of education and the new student groups, this is a necessity. The empirical data consist of video recordings of teacher groups' conversations at three Swedish universities. The conversations were conducted as so-called collective remembering interviews, a method to stimulate the participants' memory through social interaction, and focused on how the degree project-course in teacher education has changed over time. Topic analysis was used to analyze the conversations in order to identify common descriptions and expressions among the teachers. The results highlight great similarities in how the degree project-course has changed over time, from both a deliberative and a transformative perspective. The course is characterized by a “strong framing,” where the teachers have great control over the work through detailed instructions for the writing process and detailed templates for the text.
This is justified by the fact that the education has been adapted based on the student teachers' lack of prior subject knowledge. The strong framing places high demands on continuous discussions between teachers about, for example, which tools the students have with them and which linguistic and textual tools are offered in the education. The teachers describe that such governance often leads to conflicts between teachers from different departments because reading and writing are always part of cultural contexts and are linked to different knowledge, traditions, and values. The problem that is made visible in this study raises questions about how students' opportunities to develop independence and make critical judgments in academic writing are affected if the writing becomes too controlled and if passing students becomes the main goal of education.

Keywords: formative assessment, academic writing, degree project, higher education, deliberative perspective, transformative perspective

Procedia PDF Downloads 52
13126 Survey of Hawke's Bay Tourism Based Businesses: Tsunami Understanding and Preparation

Authors: V. A. Ritchie

Abstract:

The loss of life and livelihood experienced after the magnitude 9.3 Sumatra earthquake and tsunami on 26 December 2004 and the magnitude 9 earthquake and tsunami in northeastern Japan on 11 March 2011 has raised global awareness and brought tsunami phenomenology, nomenclature, and representation into sharp focus. At the same time, travel and tourism continue to grow, contributing around 1 in 11 jobs worldwide. This increase in tourism is especially pronounced in coastal zones, placing pressure on decision-makers to downplay tsunami risks while at the same time providing adequate tsunami warning so that holidaymakers will feel confident enough to visit places of high tsunami risk. This study investigates how well tsunami preparedness messages are getting through to tourism-based businesses in Hawke’s Bay, New Zealand, a region of frequent seismic activity and a high probability of experiencing a nearshore tsunami. The aim of this study is to investigate whether tourism-based businesses are well informed about tsunamis, how well they understand that information, and to what extent their clients are included in awareness-raising and evacuation processes. In high-risk tsunami zones such as Hawke’s Bay, tourism-based businesses face a competitive tension between short-term business profitability and longer-term reputational issues related to preventable loss of life from natural hazards such as tsunamis. This study will address ways to accommodate culturally and linguistically relevant tourist awareness measures without discouraging tourists or being too costly to implement.

Keywords: tsunami risk and response, travel and tourism, business preparedness, cross cultural knowledge transfer

Procedia PDF Downloads 136
13125 Pricing Techniques to Mitigate Recurring Congestion on Interstate Facilities Using Dynamic Feedback Assignment

Authors: Hatem Abou-Senna

Abstract:

Interstate 4 (I-4) is a primary east-west transportation corridor between the cities of Tampa and Daytona, serving commuter, commercial and recreational traffic. I-4 is known to have severe recurring congestion during peak hours. The congestion spans about 11 miles in the evening peak period in the central corridor area, as I-4 is the only non-tolled limited-access facility connecting the Orlando Central Business District (CBD) and the tourist attractions area (Walt Disney World). Florida officials had been skeptical of tolling I-4 prior to the recent legislation, and the public, through the media, had been complaining about the excessive toll facilities in Central Florida. So, in search of a plausible mitigation of the congestion on the I-4 corridor, this research evaluates the effectiveness of different toll pricing alternatives that might divert traffic from I-4 to the toll facilities during the peak period. The network is composed of two main diverging limited-access highways, the freeway (I-4) and toll road SR 417, in addition to two east-west parallel toll roads, SR 408 and SR 528, intersecting the above-mentioned highways at both ends. I-4 and toll road SR 408 are the routes most frequently used by commuters. SR 417 is a relatively uncongested toll road that is 15 miles longer than I-4 and carries $5 in tolls, compared to no monetary cost on I-4 for the same trip. The results of the calibrated Orlando PARAMICS network showed that the percentage of route diversion varies from one route to another and depends primarily on the travel cost between specific origin-destination (O-D) pairs. Most drivers going from Disney (O1) or Lake Buena Vista (O2) to Lake Mary (D1) were found to have a high propensity towards using I-4, even when tolls were eliminated and/or real-time information was provided.
However, a diversion from I-4 to SR 417 for these O-D pairs occurred only in the cases of an incident or lane closure on I-4, due to the increase in delay and travel costs, and when information was provided to travelers. Furthermore, drivers who diverted from I-4 to SR 417 and SR 528 did not gain significant travel-time savings. This was attributed to the limited extra capacity of the alternative routes in the peak period and the longer traveling distance. When the remaining origin-destination pairs were analyzed, average travel time savings on I-4 ranged between 10 and 16%, amounting to 10 minutes at most, with a 10% increase in the network average speed. The propensity for diversion on the network increased significantly when tolls on SR 417 and SR 528 were eliminated and tolls on SR 408 doubled, in combination with the incident and lane closure scenarios on I-4 and with real-time information provided. The toll roads were found to be a viable alternative to I-4 for these specific O-D pairs, depending on the users' perception of the toll cost, which was reflected in their specific travel times. However, on the macroscopic level, it was concluded that route diversion through toll reduction or elimination on surrounding toll roads would have only a minimal impact on reducing I-4 congestion during the peak period.
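The trade-off drivers face between the free but congested I-4 and the longer, tolled SR 417 can be sketched with a simple generalized-cost logit model. This is not the PARAMICS dynamic feedback assignment used in the study; the travel times, value of time, and scale parameter below are illustrative assumptions only.

```python
import math

# Binary logit route choice between I-4 (free, congested) and SR 417
# (tolled, longer but uncongested). Generalized cost = travel time (min)
# + toll converted to equivalent minutes via an assumed value of time.
def diversion_prob(t_i4, t_sr417, toll_sr417, vot_per_min=0.25, scale=0.1):
    """Probability of choosing SR 417 over I-4 (all parameters assumed)."""
    cost_i4 = t_i4
    cost_417 = t_sr417 + toll_sr417 / vot_per_min  # $ -> minutes
    # Logit: the route with the lower generalized cost is chosen more often
    return 1.0 / (1.0 + math.exp(scale * (cost_417 - cost_i4)))

p_normal = diversion_prob(t_i4=35.0, t_sr417=40.0, toll_sr417=5.0)
p_incident = diversion_prob(t_i4=70.0, t_sr417=40.0, toll_sr417=5.0)   # I-4 incident
p_toll_free = diversion_prob(t_i4=35.0, t_sr417=40.0, toll_sr417=0.0)  # toll removed
```

Under these assumptions the model reproduces the abstract's qualitative findings: diversion stays low in normal conditions and rises when an incident lengthens I-4 travel time or when the SR 417 toll is removed.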

Keywords: congestion pricing, dynamic feedback assignment, microsimulation, paramics, route diversion

Procedia PDF Downloads 160
13124 Process Improvement and Redesign of the Immuno Histology (IHC) Lab at MSKCC: A Lean and Ergonomic Study

Authors: Samantha Meyerholz

Abstract:

MSKCC offers patients cutting-edge cancer care with the highest quality standards. However, many patients and industry members do not realize that the operations of the Immunohistology (IHC) lab are the backbone of this mission. The IHC lab manufactures blocks and slides containing critical tissue samples that will be read by a pathologist to diagnose and dictate a patient’s treatment course. The lab processes 200 requests daily, generating approximately 2,000 slides and 1,100 blocks each day. Lab material is transported through labeling, cutting, staining and sorting stations, while being managed by multiple techs throughout the space. The quality of the stain, as well as the wait times associated with processing requests, is directly linked to patients receiving rapid treatment and having a wider range of care options. This project aims to improve slide request turnaround time for rush and non-rush cases, while increasing the quality of each request filled (no missing slides or poorly stained items). Rush cases are to be filled in less than 24 hours, while standard cases are allotted a 48-hour time period. Reducing turnaround times enables patients to communicate sooner with their clinical team regarding their diagnosis, ultimately leading to faster treatments and potentially better outcomes. Additional project goals included streamlining tech and material workflow, while reducing waste and increasing efficiency. This project followed a DMAIC structure with emphasis on lean and ergonomic principles that could be integrated into an evolving lab culture. Load times and batching processes were analyzed using process mapping, FMEA analysis, waste analysis, engineering observation, 5S and spaghetti diagramming. Reduction of lab technician movement, as well as their body position at each workstation, was of top concern to pathology leadership.
With new equipment being brought into the lab to support the workflow improvements, screen and tool placement was discussed with the techs in focus groups to reduce variation and increase comfort throughout the workspace. 5S analysis was completed in two phases in the IHC lab, helping to drive solutions that reduced rework and tech motion. The IHC lab plans to continue using these techniques to further reduce the time gap between tissue analysis and cancer care.
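The turnaround targets above (rush within 24 hours, standard within 48 hours) translate directly into a compliance metric. The sketch below computes an on-time rate over a few hypothetical request records; the record layout is an illustrative assumption, not the lab's actual tracking system.

```python
from datetime import datetime, timedelta

# Hypothetical request records: (received, completed, is_rush)
requests = [
    (datetime(2023, 5, 1, 8, 0), datetime(2023, 5, 1, 20, 0), True),
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 3, 10, 0), False),
    (datetime(2023, 5, 2, 7, 0), datetime(2023, 5, 3, 9, 0), True),
]

def on_time(received, completed, is_rush):
    """Rush cases must finish within 24 h; standard cases within 48 h."""
    limit = timedelta(hours=24) if is_rush else timedelta(hours=48)
    return completed - received <= limit

compliance = sum(on_time(*r) for r in requests) / len(requests)
print(f"On-time rate: {compliance:.0%}")
```

Tracking this rate separately for rush and non-rush cases before and after the DMAIC interventions is one way to quantify the improvement.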

Keywords: engineering, ergonomics, healthcare, lean

Procedia PDF Downloads 215
13123 Assessment of OTA Contamination in Rice from Fungal Growth Alterations in a Scenario of Climate Changes

Authors: Carolina S. Monteiro, Eugénia Pinto, Miguel A. Faria, Sara C. Cunha

Abstract:

Rice (Oryza sativa) production plays a vital role in reducing hunger and poverty and is particularly important in low-income and developing countries. Rice is a sensitive plant, and production occurs strictly where suitable temperature and water conditions are found. Climate change is expected to affect production worldwide: models predict increased temperatures, variations in atmospheric CO₂ concentration and modified precipitation patterns. The ongoing climatic changes therefore threaten rice production by increasing biotic and abiotic stress factors, and crops will grow under different environmental conditions in the coming years. The effects will be regional and can be detrimental or advantageous depending on the region. Mediterranean zones have been identified as possible hot spots, where dramatic temperature changes and modifications of CO₂ levels and rainfall patterns are predicted. The current atmospheric CO₂ concentration is estimated at around 400 ppm and is predicted to reach 1000–1200 ppm, which could lead to a temperature increase of 2–4 °C. Rainfall patterns are also expected to change, with more extreme wet/dry episodes taking place. As a result, pathogen migration could increase, and a shift in the occurrence of mycotoxins, in both type and concentration, is expected. Mycotoxigenic spoilage fungi can colonize the crops and be present throughout the rice food supply chain, especially Penicillium species, mainly resulting in ochratoxin A (OTA) contamination. In this scenario, the objective of the present study is to evaluate the effect of temperature (20 vs. 25 °C), CO₂ (400 vs. 1000 ppm), and water stress (0.93 vs. 0.95 water activity) on growth and OTA production by a Penicillium nordicum strain in vitro, on rice-based media and when colonizing layers of raw rice.
The results demonstrate the effect of temperature, CO₂ and drought on OTA production in a rice-based environment, thus contributing to the development of predictive mycotoxin models for climate change scenarios. Improving surveillance and monitoring systems for mycotoxins, whose occurrence may become more frequent under climate change, is therefore relevant and necessary. Developing prediction models for hazardous contaminants in foods highly sensitive to climatic changes, such as mycotoxins, under these highly probable new agricultural scenarios is of paramount importance.
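The study's two-level design (temperature, CO₂ and water activity each at two levels) lends itself to a full-factorial main-effects analysis, a common first step toward the kind of predictive model mentioned above. The sketch below fits such a model by least squares; the OTA response values are invented for illustration and are not the study's data.

```python
import itertools
import numpy as np

# Two-level full factorial mirroring the study design:
# temperature (20 vs. 25 degC), CO2 (400 vs. 1000 ppm), water activity (0.93 vs. 0.95)
levels = {"temp": [20, 25], "co2": [400, 1000], "aw": [0.93, 0.95]}
design = list(itertools.product(*levels.values()))  # 8 conditions

# Hypothetical OTA responses, one per condition, for illustration only
ota = np.array([1.2, 2.5, 1.8, 3.9, 2.1, 4.4, 3.0, 6.1])

# Code each factor to -1/+1 and fit intercept + three main effects
coded = np.array([[-1.0 if value == lo else 1.0
                   for value, (lo, hi) in zip(row, levels.values())]
                  for row in design])
X = np.hstack([np.ones((len(design), 1)), coded])
coef, *_ = np.linalg.lstsq(X, ota, rcond=None)
effects = dict(zip(["intercept", "temp", "co2", "aw"], coef))
print({k: round(float(v), 3) for k, v in effects.items()})
```

With real measurements, the fitted coefficients estimate how much each factor shifts OTA production, which is exactly the information a climate-scenario prediction model needs.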

Keywords: climate changes, ochratoxin A, penicillium, rice

Procedia PDF Downloads 50
13122 A Semi-supervised Classification Approach for Trend Following Investment Strategy

Authors: Rodrigo Arnaldo Scarpel

Abstract:

Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism: rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock, the trader responds to movements that have recently happened and are currently happening, not to what will happen. Optimally, a trend following strategy catches a bull market at its early stage, rides the trend, and liquidates the position at the first evidence of the subsequent bear market. Applying the strategy requires finding the trend and identifying trade signals. To avoid false signals, i.e., to distinguish short-, mid- and long-term fluctuations and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules in the trend following spirit. Recently, some works have applied machine learning techniques for trade rule discovery. In those works, rule construction is based on evolutionary learning, which adapts the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of using machine learning techniques to create trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
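As a concrete example of the moving-average machinery these indicators build on, the sketch below generates buy/sell signals from a simple moving-average crossover; the window lengths and price series are illustrative, and this is not the authors' classifier.

```python
import numpy as np

def sma(prices, window):
    """Simple moving average; entries before the window fills are NaN."""
    out = np.full(len(prices), np.nan)
    for i in range(window - 1, len(prices)):
        out[i] = prices[i - window + 1 : i + 1].mean()
    return out

def crossover_signals(prices, short=5, long=20):
    """+1 (buy) when the short SMA crosses above the long SMA,
    -1 (sell) when it crosses below, 0 otherwise."""
    s, l = sma(prices, short), sma(prices, long)
    above = s > l                                  # NaN comparisons are False
    signals = np.zeros(len(prices), dtype=int)
    signals[1:][(~above[:-1]) & above[1:]] = 1     # upward crossing -> buy
    signals[1:][above[:-1] & (~above[1:])] = -1    # downward crossing -> sell
    signals[:long] = 0                             # ignore the warm-up period
    return signals

# A price series that falls and then recovers yields a buy signal after the turn
prices = np.concatenate([np.linspace(100, 80, 30), np.linspace(80, 110, 30)])
print(np.nonzero(crossover_signals(prices))[0])
```

The lag visible here, where the signal fires well after the trend has turned, is exactly the weakness that motivates replacing fixed crossover rules with a trained trend classifier.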
Semi-supervised learning is used for model training when only part of the data is labeled; semi-supervised classification aims to train a classifier from both the labeled and unlabeled data so that it outperforms a supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trade information (open, high, low and closing values, and volume) of the São Paulo Exchange Composite index (IBOVESPA) from January 1, 2000 to December 31, 2022 was employed. Over this period, consistent upward or downward price changes were visually identified for assigning labels, leaving the remaining days (with no consistent price change) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators as features. The core of this strategy is to use unlabeled data to generate pseudo-labels for supervised training. The results were evaluated using the annualized return and excess return, and the Sortino and Sharpe ratios. Over the evaluated period, the results were very consistent and can be considered promising for generating the intended trading signals.
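A minimal sketch of the pseudo-label strategy described above, using scikit-learn's logistic regression as the base classifier; the toy features, confidence threshold and single retraining pass are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy indicator features; classes 0 (down-trend) / 1 (up-trend)
X = rng.normal(size=(300, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

# Only ~10% of days carry a visually assigned label; the rest are unlabeled
labeled = rng.random(300) < 0.1
X_lab, y_lab = X[labeled], y_true[labeled]
X_unlab = X[~labeled]

# Step 1: train on the labeled days only
clf = LogisticRegression().fit(X_lab, y_lab)

# Step 2: pseudo-label unlabeled days where the model is confident (> 0.9)
proba = clf.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.9
pseudo_y = clf.classes_[proba.argmax(axis=1)][confident]

# Step 3: retrain on labeled + confidently pseudo-labeled days
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, pseudo_y])
clf = LogisticRegression().fit(X_aug, y_aug)
print(f"accuracy: {clf.score(X, y_true):.2f}")
```

In practice steps 2 and 3 can be iterated, and the confidence threshold trades off the number of pseudo-labeled days against the risk of reinforcing early mistakes.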

Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation

Procedia PDF Downloads 70