Search results for: spatial time series

19494 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites

Authors: Sarra Haouala, Issam Doghri

Abstract:

In this work, a multiscale computational strategy is proposed for the analysis of structures, which are described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long time response while reducing the computational cost considerably. The proposed computational framework is a combination of the mean-field space homogenization based on the generalized incrementally affine formulation for VE-VP composites, and the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro and macro-chronological time scales, and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time-scale) problems. The former is purely VE, and solved once for each macro time step, whereas the latter problem is nonlinear and solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both micro and macro-chronological problems to determine the micro and macro-chronological effective behavior of the composite material. The response of the matrix material is VE-VP with J2 flow theory assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic predictor and plastic corrections. The proposal is implemented for an extended Mori-Tanaka scheme, and verified against finite element simulations of representative volume elements, for a number of polymer composite materials subjected to large numbers of cycles.
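The two-step predictor-corrector structure mentioned above (a trial predictor followed by a plastic correction) can be illustrated with a much simpler analogue. The sketch below is a minimal 1D rate-independent J2-style return mapping with linear isotropic hardening, not the authors' viscoelastic-viscoplastic formulation; the material constants and function name are illustrative assumptions.

```python
import numpy as np

def return_mapping_1d(eps_new, eps_p_old, alpha_old,
                      E=200e3, H=10e3, sigma_y=250.0):
    """One strain increment of an elastic-predictor / plastic-corrector update (1D sketch)."""
    # Predictor: assume the whole increment is elastic (the paper uses a viscoelastic predictor).
    sigma_trial = E * (eps_new - eps_p_old)
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha_old)   # yield function with linear hardening
    if f_trial <= 0.0:
        return sigma_trial, eps_p_old, alpha_old             # increment is purely elastic
    # Corrector: return the trial stress to the hardened yield surface.
    dgamma = f_trial / (E + H)
    sign = np.sign(sigma_trial)
    sigma = sigma_trial - E * dgamma * sign
    return sigma, eps_p_old + dgamma * sign, alpha_old + dgamma

# Drive a single loading step past yield.
print(return_mapping_1d(eps_new=0.002, eps_p_old=0.0, alpha_old=0.0))
```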

Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization

Procedia PDF Downloads 373
19493 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability

Authors: Akshay B. Pawar, Rohit Y. Parasnis

Abstract:

Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals as compared to ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV. They have also compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in or alternatives to the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (consisting of both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through certain digital filters. After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time domain parameters (SDNN, pNN50 and RMSSD) and frequency domain parameters (very low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF) and total power (TP)). Besides, Poincaré plots were also plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily well with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
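For readers unfamiliar with the time-domain parameters listed above, the short sketch below shows how SDNN, RMSSD and pNN50 are commonly computed from a series of inter-beat intervals and how two interval series can be correlated; the interval values are made up for illustration and the sketch is not the authors' processing pipeline.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV metrics from a series of inter-beat intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    sdnn = np.std(rr, ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(diffs ** 2))             # short-term (beat-to-beat) variability
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)    # % of successive differences > 50 ms
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Hypothetical ECG-derived RR intervals and PPG-derived peak-peak intervals (ms).
rr_ecg = [812, 790, 805, 830, 798, 815, 842, 808]
pp_ppg = [815, 788, 802, 833, 800, 812, 845, 806]
print(hrv_time_domain(rr_ecg))
print(np.corrcoef(rr_ecg, pp_ppg)[0, 1])  # correlation coefficient between the two series
```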

Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot

Procedia PDF Downloads 325
19492 Strong Ground Motion Characteristics Revealed by Accelerograms in Ms8.0 Wenchuan Earthquake

Authors: Jie Su, Zhenghua Zhou, Yushi Wang, Yongyi Li

Abstract:

The ground motion characteristics revealed by the analysis of acceleration records underlie the formulation and revision of seismic design codes in structural engineering. The China Digital Strong Motion Network recorded a large number of accelerograms of the main shock at 478 permanent seismic stations during the Ms8.0 Wenchuan earthquake on 12th May 2008. These accelerograms provided a large amount of essential data for the analysis of the ground motion characteristics of the event. The spatial distribution characteristics, rupture directivity effect, and hanging-wall and footwall effects were studied based on these acceleration records. The results showed that the contours of horizontal peak ground acceleration and peak velocity were approximately parallel to the seismogenic fault, which demonstrates that the distribution of ground motion intensity was strongly controlled by the spatial extension direction of the seismogenic fault. Compared with the peak ground acceleration (PGA) recorded at sites away from which the front of the fault rupture propagated, the PGA recorded at sites toward which the rupture front propagated had larger amplitude and shorter duration, indicating a significant rupture directivity effect. At similar fault distances, the PGA on the hanging wall is markedly greater than that on the footwall, while the peak velocity does not follow this rule. Taking into account the seismic intensity distribution of the Wenchuan Ms8.0 earthquake, the shape of the strong ground motion contours was significantly affected by the directivity effect in regions with Chinese seismic intensity levels VI–VIII. However, in regions where the Chinese seismic intensity level is VIII or greater, the mutual positional relationship between the strong ground motion contours and the surface outcrop trace of the fault was evidently influenced by the hanging-wall and footwall effect.

Keywords: hanging-wall and foot-wall effect, peak ground acceleration, rupture directivity effect, strong ground motion

Procedia PDF Downloads 353
19491 Geographic Information Systems and a Breath of Opportunities for Supply Chain Management: Results from a Systematic Literature Review

Authors: Anastasia Tsakiridi

Abstract:

Geographic information systems (GIS) have been utilized in numerous spatial problems, such as site research, land suitability, and demographic analysis. Besides, GIS has been applied in scientific fields like geography, health, and economics. In business studies, GIS has been used to provide insights and spatial perspectives in demographic trends, spending indicators, and network analysis. To date, the information regarding the available usages of GIS in supply chain management (SCM) and how these analyses can benefit businesses is limited. A systematic literature review (SLR) of the last 5-year peer-reviewed academic literature was conducted, aiming to explore the existing usages of GIS in SCM. The searches were performed in 3 databases (Web of Science, ProQuest, and Business Source Premier) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. The analysis resulted in 79 papers. The results indicate that the existing GIS applications used in SCM were in the following domains: a) network/ transportation analysis (in 53 of the papers), b) location – allocation site search/ selection (multiple-criteria decision analysis) (in 45 papers), c) spatial analysis (demographic or physical) (in 34 papers), d) combination of GIS and supply chain/network optimization tools (in 32 papers), and e) visualization/ monitoring or building information modeling applications (in 8 papers). An additional categorization of the literature was conducted by examining the usage of GIS in the supply chain (SC) by the business sectors, as indicated by the volume of the papers. The results showed that GIS is mainly being applied in the SC of the biomass biofuel/wood industry (33 papers). Other industries that are currently utilizing GIS in their SC were the logistics industry (22 papers), the humanitarian/emergency/health care sector (10 papers), the food/agro-industry sector (5 papers), the petroleum/ coal/ shale gas sector (3 papers), the faecal sludge sector (2 papers), the recycle and product footprint industry (2 papers), and the construction sector (2 papers). The results were also presented by the geography of the included studies and the GIS software used to provide critical business insights and suggestions for future research. The results showed that research case studies of GIS in SCM were conducted in 26 countries (mainly in the USA) and that the most prominent GIS software provider was the Environmental Systems Research Institute’s ArcGIS (in 51 of the papers). This study is a systematic literature review of the usage of GIS in SCM. The results showed that the GIS capabilities could offer substantial benefits in SCM decision-making by providing key insights to cost minimization, supplier selection, facility location, SC network configuration, and asset management. However, as presented in the results, only eight industries/sectors are currently using GIS in their SCM activities. These findings may offer essential tools to SC managers who seek to optimize the SC activities and/or minimize logistic costs and to consultants and business owners that want to make strategic SC decisions. Furthermore, the findings may be of interest to researchers aiming to investigate unexplored research areas where GIS may improve SCM.

Keywords: supply chain management, logistics, systematic literature review, GIS

Procedia PDF Downloads 145
19490 Characterization on Molecular Weight of Polyamic Acids Using GPC Coupled with Multiple Detectors

Authors: Mei Hong, Wei Liu, Xuemin Dai, Yanxiong Pan, Xiangling Ji

Abstract:

Polyamic acid (PAA) is the precursor of polyimide (PI) prepared by a two-step method; its molecular weight and molecular weight distribution not only play an important role during preparation and processing but also influence the final performance of PI. However, precise characterization of the molecular weight of PAA is still a challenge because of the very complicated interactions in the solution system, including electrostatic, hydrogen-bond, and dipole-dipole interactions. Thus, it is necessary to establish a suitable strategy which can completely suppress these complex effects and yield reasonable molecular weight data. Herein, gel permeation chromatography (GPC) coupled with differential refractive index (RI) and multi-angle laser light scattering (MALLS) detectors was applied to measure the molecular weight of (6FDA-DMB) PAA using different mobile phases: LiBr/DMF, LiBr/H3PO4/THF/DMF, LiBr/HAc/THF/DMF, and LiBr/HAc/DMF. It was found that the combination of LiBr with HAc can shield the above-mentioned complex interactions and is more conducive to the separation of PAA than the addition of LiBr alone in DMF. LiBr/HAc/DMF was employed for the first time as a mild mobile phase to effectively separate PAA and determine its molecular weight. After a series of conditional experiments, 0.02 M LiBr/0.2 M HAc/DMF was fixed as the optimized mobile phase to measure the relative and absolute molecular weights of the prepared (6FDA-DMB) PAA, and the Mw values obtained from GPC-MALLS and GPC-RI were 35,300 g/mol and 125,000 g/mol, respectively. Particularly, such a mobile phase is also applicable to other PAA samples with different structures, and the final molecular weight results are also reproducible.

Keywords: polyamic acids, polyelectrolyte effects, gel permeation chromatography, mobile phase, molecular weight

Procedia PDF Downloads 59
19489 Agile Project Management: A Real Application in a Multi-Project Research and Development Center

Authors: Aysegul Sarac

Abstract:

The aim of this study is to analyze the impacts of integrating agile development principles and practices, in particular to reduce project lead time in a multi-project environment. We analyze the Arçelik Washing Machine R&D Center, in which multiple projects are conducted with shared resources. In the first part of the study, we illustrate the current waterfall model system by using a value stream map. We define all activities, starting from the first idea of the project to the customer, and measure the process time and lead time of projects. In the second part of the study, we estimate potential improvements and select a set of these improvements to integrate agile principles. We aim to develop a future state map and analyze the impacts of integrating lean principles on project lead time. The main contribution of this study is that we analyze and integrate agile product development principles in a real multi-project system.

Keywords: agile project management, multi project system, project lead time, product development

Procedia PDF Downloads 310
19488 Votes-Commercialization in Nigeria: A Crime Against Sustainable Democracy

Authors: Oluwasaanmi Lawrence Adesuyi, Igbekoyi Kayode Emmanuel

Abstract:

This study examined vote commercialization among voters during a series of elections in Ekiti State, Southwestern Nigeria. Democracy in Nigeria, which came to replace the unwanted rule and dictatorship of the military government, has been facing a societal threat, the crime of vote commercialization, that jeopardizes its sustainability. Social exchange and action-bound theories were employed as the theoretical framework. Forty-eight in-depth interviews, key informant interviews, and case studies were conducted with purposively selected respondents in the three senatorial districts covering the sixteen local governments of the state. The results show that the commercialization of votes has become the order of the day in every election among the Ekiti people. It was also found that true democracy is no longer allowed to triumph as a result of vote buying that allows the highest bidder to be the winner. The results further show that this practice is not limited to one political party or one candidate but involves all the political parties that participated in the elections, and it has become a recurrent expectation among the electorate during every election period in Ekiti State. The practice has even been given a nickname that suits the situation of vote commercialization, "Diboki o se obe", which means "vote and have a pot of soup"; this implies that voters get money to take care of themselves and their families when they vote and collect money for the vote they cast, with money often collected from all the candidates that participated in the election, although the highest bidder carries the day. The main challenge this poses to democracy is that contestants dispute election results based on acts of vote commercialization. Also, those that bought people's votes with huge amounts of money renege on their democratic promises. The study concludes that the crime of vote commercialization, which threatens democracy, must be addressed for sustainability.

Keywords: crime, democracy, jeopardy, military, sustainability, votes-commercialization

Procedia PDF Downloads 105
19487 Urban Catalyst through Traditional Market Revitalization towards the MICE Tourism in Surakarta

Authors: Istijabatul Aliyah, Bambang Setioko, Rara Sugiarti

Abstract:

Surakarta is one of the cities that were formed with the concept of Javanese cosmology. As a traditional Javanese town, Surakarta is known as 'the paradise' of traditional markets. Since its establishment, Surakarta has been formed around the Catur Gatra Tunggal (four elements in a single unity) concept (palace, square, mosques, and markets). Current development in downtown Surakarta indicates that traditional markets have improved themselves in both physical and non-physical aspects. These efforts range from market facade revitalization, restoration, and the overall development of markets, to social activities, competitions between traders, and large celebrations in the market neighbourhood. This research was conducted in Surakarta and is aimed at identifying the role of traditional market revitalization efforts in the development of a city. The study employs several methods of analysis, namely: 1) spatial analysis for mapping the distribution of traditional markets in the city constellation, 2) Category-Based Analysis (CBA) to classify the revitalization of traditional markets that has an influence on the development of the city, and 3) an interactive method of analysis. The results of this research indicate that the constellation of traditional markets in Surakarta is dominated by Gede Market, not only as the oldest traditional market but also as a center of economic and socio-cultural activities of the community. The role of traditional market revitalization in the development of the town is that of an urban catalyst towards a MICE city, in the sense that the revitalization effort, even when done in a relatively short time and not yet covering all objects, is able to establish the brand image of Surakarta as a city of culture that is friendly and ready to be a MICE tourism city.

Keywords: traditional market revitalization, urban catalyst, MICE tourism, Surakarta

Procedia PDF Downloads 384
19486 Arthroscopic Assisted Fibertape Technique For Recurrent MPFL Reconstruction - Case Series Done In The UK Population

Authors: Naufal Ahmed, Michael Lwin

Abstract:

Background: MPFL reconstructions are ideally performed with autografts such as the gracilis or semitendinosus tendon, which may be associated with donor site morbidity and complications. In this case series, we have used fiber tape, which avoids the above complications and also keeps the graft virgin. This kind of synthetic graft has been used successfully in rotator cuff and ACJ reconstructions with good results. Materials and methods: This was a retrospective data analysis of 45 patients who underwent this procedure from 2014-2020 under a single consultant in a DGH. These patients have been followed up at 6 weeks, 6 months, 1 year, and 1 ½ years with clinical assessment and KOOS scores. We compared the results with the NJR and also with the Belgium report and found them to be satisfactory and comparable. Surgical technique: We used Arthrex fiber tape for the reconstruction of the MPFL. Initially, two parallel holes were drilled over the superior aspect of the patella with the help of an image intensifier, and the fiber wire was then passed through them from the medial to the lateral side and back to the medial side. The fiber wire was attached to the Schottle point on the femoral side, giving a good extra-articular internal bracing to the MPFL. All patients were scoped before the procedure, and the final tightening over the femoral side was done directly under vision to see the position of the patella. Results: We performed 45 MPFL reconstructions along with 4 additional procedures: 1 ACLR, 2 ACL repairs, and 1 TTT advancement (revision MPFL). There were 14 males and 31 females, and their average age was 25 (13-55). We have not had any donor site morbidity, infections, fractures, recurrent dislocations, or reoperations yet. Conclusion: Fiber tape is a feasible and appropriate option for MPFL reconstruction. We have not seen any reoperation in our 5-year follow-up. This technique avoids the use of an autograft, which can then be used in the future if needed for revision surgery. We do not lose anything by following this simple novel technique.

Keywords: arthroscopy, fibertape, MPFL reconstruction, recurrent patella dislocation

Procedia PDF Downloads 144
19485 Developing Research Involving Different Species: Opportunities and Empirical Foundations

Authors: A. V. Varfolomeeva, N. S. Tkachenko, A. G. Tishchenko

Abstract:

The problem of violations of internal validity in studies of psychological structures is considered. The role of the epistemological attitudes of researchers in the planning of research within the methodology of the system-evolutionary approach is assessed. Alternative programs of psychological research involving representatives of different biological species are presented. Using the results of two series of studies as examples, possible ways of solving the problem are discussed.

Keywords: epistemological attitudes, experimental design, validity, psychological structure, learning

Procedia PDF Downloads 118
19484 Cross-Cultural Competence Development through 'Learning by Reflection': A Case Study of Chinese International Students Learning through Taking Part-Time Jobs in the UK

Authors: Xin Zhao

Abstract:

The project aims to expand the notion of narrative learning and address the importance of learning by reflection in our learning and teaching context at a British university. Drawing on key concepts such as the zone of proximal development (ZPD), transition, and reflection-in- and on-action, this project analyses the learning experiences of a small sample of Chinese postgraduate students at a British university who use part-time job experience to develop cross-cultural communication skills. The project adopts a mixed methods approach. Questionnaires and focus group interviews are used to examine the way in which students adapt (or do not adapt) to the culture of learning in a British university and develop a renewed sense of self in the transition from one culture to the other. The project also looks at how the students appropriate opportunities for learning not just from classrooms but also from everyday encounters outside classrooms. The project aims to address the implications of learning by reflection as development in transition. Time in and for learning, or duration, is taken for granted in theorising narrative learning. The project explores this very issue of time in relation to learning by reflection by considering time in/of/for learning as duration.

Keywords: cross-cultural competence, learning by reflection, international student transition, part-time work experience

Procedia PDF Downloads 187
19483 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing analyses on the per-square meter property values in each city block. The upcoming conference anticipates the presentation and publication of these results, embodying a multidisciplinary knowledge integration and culminating in a master's thesis.
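As a rough illustration of the hedonic pricing idea described above, the sketch below regresses a hypothetical per-square-meter price on an emissions-exposure proxy and two control covariates; the data, variable names, and coefficients are entirely illustrative assumptions, not results from the Bogotá study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical block-level data: price per square meter, exposure proxy, covariates.
n = 300
rng = np.random.default_rng(0)
pm25 = rng.uniform(10, 40, n)            # local PM2.5 exposure proxy (ug/m3)
dist_cbd = rng.uniform(1, 20, n)         # distance to the city centre (km)
area = rng.uniform(40, 200, n)           # dwelling size (m2)
price = 3000 - 25 * pm25 - 40 * dist_cbd + 5 * area + rng.normal(0, 100, n)

# Hedonic specification: the coefficient on the exposure proxy is read as the
# implicit (marginal) price of air quality embedded in property values.
X = sm.add_constant(np.column_stack([pm25, dist_cbd, area]))
model = sm.OLS(price, X).fit()
print(model.params)
```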

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 63
19482 Temporal Effects on Chemical Composition of Treated Wastewater and Borehole Water Used for Irrigation in Limpopo Province, South Africa

Authors: Pholosho M. Kgopa, Phatu W. Mashela, Alen Manyevere

Abstract:

Increasing incidents of drought spells in most of Sub-Saharan Africa call for the use of alternative sources of water for irrigation in arid and semi-arid regions. A study was conducted to investigate the chemical composition of borehole water and treated wastewater from different sampling disposal sites at the University of Limpopo Experimental Farm (ULEF). A 4 × 5 factorial experiment, with the borehole as a reference sampling site and three other sampling sites along the wastewater disposal system, was conducted over five months. Water samples were collected at four sites, namely (a) the exit from Pond 16 into the furrow, (b) the entry into the night-dam, (c) the exit from the night-dam to irrigated fields, and (d) the exit from the borehole to irrigated fields. Water samples were collected in the middle of each month from July to November 2016. Samples were analysed for pH, EC, Ca, Mg, Na, K, Al, B, Zn, Cu, Cr, Pb, Cd and As. The site × time interactions were highly significant for the Ca, Mg, Zn, Cu, Cr, Pb, Cd, and As variables, but not for Na and K. Sampling site had a highly significant effect on all variables, whereas sampling period was not significant for K and Na. Relative to water from the borehole, Na concentrations in wastewater samples from the night-dam exit, night-dam entry, and Pond 16 exit were lower by 69, 34 and 55%, respectively. Relative to borehole water, Al was higher at the wastewater sampling sites. In conclusion, both sampling site and sampling period affected the chemical composition of the treated wastewater.

Keywords: irrigation water quality, spatial effects, temporal effects, water reuse, water scarcity

Procedia PDF Downloads 243
19481 Evaluation of Hand Grip Strength and EMG Signal on Visual Reaction

Authors: Sung-Wook Shin, Sung-Taek Chung

Abstract:

Hand grip strength has been utilized as an indicator to evaluate the motor ability of the hands, which are responsible for performing multiple body functions. It is, however, difficult to evaluate factors other than hand muscular strength using hand grip strength alone. In this study, we analyzed the motor ability of the hands using EMG and hand grip strength simultaneously in order to evaluate concentration, muscular strength reaction time, instantaneous muscular strength change, and agility in response to a visual stimulus. In the results, the average reaction times (and their standard deviations) measured from the EMG signal and from hand grip strength were found to be 209.6 ± 56.2 ms and 354.3 ± 54.6 ms, respectively. In addition, the onset time, which represents the acceleration time required to reach 90% of the maximum hand grip strength, was 382.9 ± 129.9 ms.
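A minimal sketch of how reaction and onset times of the kind reported above can be extracted from a recorded force or EMG envelope is shown below; the threshold fractions and the synthetic signal are illustrative assumptions, not the authors' measurement protocol.

```python
import numpy as np

def reaction_and_onset_time(signal, t, stim_time, react_frac=0.05, onset_frac=0.90):
    """Reaction time: first crossing of a small fraction of the peak after the stimulus.
    Onset time: time from that crossing to 90% of the peak."""
    signal = np.asarray(signal, dtype=float)
    t = np.asarray(t, dtype=float)
    peak = signal.max()
    after = t >= stim_time
    react_idx = np.argmax(after & (signal >= react_frac * peak))  # first index meeting both conditions
    onset_idx = np.argmax(after & (signal >= onset_frac * peak))
    reaction_ms = (t[react_idx] - stim_time) * 1000.0
    onset_ms = (t[onset_idx] - t[react_idx]) * 1000.0
    return reaction_ms, onset_ms

# Synthetic grip-force ramp sampled at 1 kHz: stimulus at 0.5 s, force rises after ~0.2 s.
t = np.arange(0.0, 2.0, 0.001)
force = np.clip((t - 0.7) / 0.4, 0.0, 1.0) * 300.0   # ramps from 0 to 300 N between 0.7 s and 1.1 s
print(reaction_and_onset_time(force, t, stim_time=0.5))
```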

Keywords: hand grip strength, EMG, visual reaction, endurance

Procedia PDF Downloads 467
19480 Number of Parametrization of Discrete-Time Systems without Unit-Delay Element: Single-Input Single-Output Case

Authors: Kazuyoshi Mori

Abstract:

In this paper, we consider the parametrization of discrete-time systems without the unit-delay element within the framework of the factorization approach. In the parametrization, we investigate the number of required parameters. We consider single-input single-output systems in this paper. Through this investigation, we find that, for discrete-time systems without the unit-delay element, (1) there exist plants which require only one parameter, (2) there exist plants which require two parameters, and (3) the number of required parameters is at most three.

Keywords: factorization approach, discrete-time system, parameterization of stabilizing controllers, system without unit-delay

Procedia PDF Downloads 243
19479 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and also showcasing how successful an organization is in holding on to its customers. It is an experimentally proven fact that the lion’s share of profit always comes from existing customers. Hence seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach to both business and technology teams to enhance the customer experience when they look to extend their partnership with the organization for a defined phase of time. This whitepaper will focus on key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract document, a high-level snapshot of these pillars will be provided. Time Travel Planning: The first step of setting up a time travel testing roadmap is appropriate planning. Planning will include identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, frequency of time travel testing, preparedness for handling renewal issues in production after time travel testing is done and most importantly planning for test automation testing during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover required customer segments and narrowing it down to multiple offer sequencing based on defined parameters are keys for successful time travel testing. Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will talk about the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will talk about the focus areas of enterprise automation and how automation testing can be leveraged to improve the overall quality without compromising on the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum it up, this paper will be written based on the real-time experience author had on time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings which will help other teams to implement time travel testing successfully.

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 164
19478 FLIME - Fast Low Light Image Enhancement for Real-Time Video

Authors: Vinay P., Srinivas K. S.

Abstract:

Low light image enhancement is of utmost importance in computer vision based tasks. Applications include vision systems for autonomous driving, night vision devices for defence systems, and low light object detection tasks. Many of the existing deep learning methods are resource intensive during the inference step and take considerable time for processing. The algorithm should take considerably less than 41 milliseconds in order to process a real-time video feed with 24 frames per second, and even less for a video with 30 or 60 frames per second. The paper presents a fast and efficient solution which has two main advantages: it has the potential to be used on a real-time video feed, and it can be used in low compute environments because of its lightweight nature. The proposed solution is a pipeline of three steps: the first is the use of a simple function to map input RGB values to output RGB values, the second is to balance the colors, and the final step is to adjust the contrast of the image. A custom dataset is carefully prepared using images taken in low and bright lighting conditions. The preparation of the dataset, the proposed model, and the processing time are discussed in detail, and the quality of the enhanced images obtained using different methods is shown.
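To make the three-step pipeline above concrete, here is a generic low light enhancement sketch following the same structure (RGB mapping, color balance, contrast adjustment). The specific choices of a gamma lookup table, gray-world balancing, and CLAHE are assumptions for illustration, not the mapping function proposed in the paper.

```python
import cv2
import numpy as np

def enhance_low_light(bgr, gamma=0.5):
    """Three-step enhancement: (1) per-pixel RGB mapping, (2) color balance, (3) contrast."""
    # 1) Gamma lookup table brightens dark pixels while preserving highlights.
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
    mapped = cv2.LUT(bgr, lut)
    # 2) Gray-world color balance: scale each channel so its mean matches the global mean.
    means = mapped.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    balanced = np.clip(mapped * gains, 0, 255).astype(np.uint8)
    # 3) Contrast adjustment with CLAHE applied to the lightness channel.
    lab = cv2.cvtColor(balanced, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```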

Keywords: low light image enhancement, real-time video, computer vision, machine learning

Procedia PDF Downloads 211
19477 Quality Assurances for an On-Board Imaging System of a Linear Accelerator: Five Months Data Analysis

Authors: Liyun Chang, Cheng-Hsiang Tsai

Abstract:

To ensure that radiation is precisely delivered to the target in cancer patients, linear accelerators equipped with a pretreatment on-board imaging system have been introduced, through which the patient setup is verified before the daily treatment. New-generation radiotherapy using beam-intensity modulation, usually associated with steep dose gradients, is claimed to achieve both a higher degree of dose conformation in the targets and a further reduction of toxicity in normal tissues. However, this benefit is counterproductive if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues rather than the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager® (OBI) system of one Varian Clinac-iX linear accelerator was performed through our procedures, modified from a relevant report and AAPM TG-142. Two image modalities of the OBI system, 2D radiography and 3D cone-beam computed tomography (CBCT), were examined. The daily and monthly QA was executed for five months in the categories of safety, geometrical accuracy, and image quality. A marker phantom and a blade calibration plate were used for the QA of geometrical accuracy, while the Leeds phantom and the Catphan 504 phantom were used in the QA of radiographic and CBCT image quality, respectively. The reference images were generated through a GE LightSpeed CT simulator with an ADAC Pinnacle treatment planning system. Finally, the image quality was analyzed via an OsiriX medical imaging system. For the geometrical accuracy test, the average deviations of the OBI isocenter in each direction are less than 0.6 mm with uncertainties less than 0.2 mm, while all the other items have displacements less than 1 mm. For radiographic image quality, the spatial resolution is 1.6 lp/cm with contrast less than 2.2%. The spatial resolution, low contrast, and HU homogeneity of CBCT are larger than 6 lp/cm, less than 1%, and within 20 HU, respectively. All tests are within the criteria, except the HU value of Teflon measured with the full-fan mode, which exceeded the suggested value; this could be due to its inherently high HU value and needs to be rechecked. The OBI system in our facility was thus demonstrated to be reliable, with stable image quality. The QA of the OBI system is really necessary to achieve the best treatment for a patient.

Keywords: CBCT, image quality, quality assurance, OBI

Procedia PDF Downloads 303
19476 Modeling Thermal Changes of Urban Blocks in Relation to the Landscape Structure and Configuration in Guilan Province

Authors: Roshanak Afrakhteh, Abdolrasoul Salman Mahini, Mahdi Motagh, Hamidreza Kamyab

Abstract:

Urban heat islands (UHIs) are distinctive urban areas characterized by densely populated central cores surrounded by less densely populated peripheral lands. These areas experience elevated temperatures, primarily due to impermeable surfaces and specific land use patterns. The consequences of these temperature variations are far-reaching, impacting the environment and society negatively and leading to increased energy consumption, air pollution, and public health concerns. This paper emphasizes the need for simplified approaches to comprehend UHI temperature dynamics and explains how urban development patterns contribute to land surface temperature variation. To illustrate this relationship, the study focuses on the Guilan Plain, utilizing techniques such as principal component analysis and generalized additive models. The research centered on mapping land use and land surface temperature in the low-lying area of Guilan province. Satellite data from Landsat sensors for three different time periods (2002, 2012, and 2021) were employed. Using eCognition software, a spatial unit known as a "city block" was delineated through object-based analysis. The study also applied the normalized difference vegetation index (NDVI) method to estimate land surface radiance. Predictive variables for urban land surface temperature within residential city blocks were identified and categorized as intrinsic (related to the block's structure) and neighboring (related to adjacent blocks) variables. Principal component analysis (PCA) was used to select significant variables, and a generalized additive model (GAM) approach, implemented using R's mgcv package, modeled the relationship between urban land surface temperature and the predictor variables. Notable findings included variations in urban temperature across different years attributed to environmental and climatic factors. Block size, shared boundary, mother polygon area, and perimeter-to-area ratio were identified as the main variables for the generalized additive regression model. This model showed non-linear relationships, with block size, shared boundary, and mother polygon area positively correlated with temperature, while the perimeter-to-area ratio displayed a negative trend. The discussion highlights the challenges of predicting urban surface temperature and the significance of block size in determining urban temperature patterns. It also underscores the importance of spatial configuration and unit structure in shaping urban temperature patterns. In conclusion, this study contributes to the growing body of research on the connection between land use patterns and urban surface temperature. Block size, along with block dispersion and aggregation, emerged as a key factor influencing urban surface temperature in residential areas. The proposed methodology enhances our understanding of parameter significance in shaping urban temperature patterns across various regions, particularly in Iran.
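As a rough sketch of the PCA-plus-GAM workflow described above (the study itself used R's mgcv), the Python code below screens standardized block-level predictors with PCA and then fits smooth terms with the pyGAM package; the data are synthetic placeholders and pyGAM is an assumed stand-in for mgcv, not the authors' implementation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from pygam import LinearGAM, s

# Synthetic stand-in for block-level predictors: block size, shared boundary,
# mother-polygon area, perimeter-to-area ratio; y is land surface temperature (deg C).
rng = np.random.default_rng(42)
X = rng.random((500, 4))
y = 30 + 5 * X[:, 0] + 3 * X[:, 2] - 4 * X[:, 3] + rng.normal(0, 1, 500)

# Step 1: screen candidate predictors with PCA on standardized variables.
Xs = StandardScaler().fit_transform(X)
print(PCA().fit(Xs).explained_variance_ratio_)

# Step 2: fit a GAM with one smooth term per retained predictor (analogue of mgcv's gam()).
gam = LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(Xs, y)
gam.summary()
```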

Keywords: urban heat island, land surface temperature, LST modeling, GAM, Gilan province

Procedia PDF Downloads 81
19475 A Framework for Automated Nuclear Waste Classification

Authors: Seonaid Hume, Gordon Dobie, Graeme West

Abstract:

Detecting and localizing radioactive sources is a necessity for the safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the task of classifying nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence, and finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two, characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the objects detected in order to feature-match them to an inventory of possible items found in that nuclear cell. Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, this stage may require expert input to feature-match objects. The third stage, radiological mapping, similarly facilitates the characterization of the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the nuclear cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage involves combining the evidence from the fused data sets to reveal the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective, and safer.

Keywords: nuclear decommissioning, radiation detection, object detection, waste classification

Procedia PDF Downloads 206
19474 Development of Quasi Real-Time Comprehensive System for Earthquake Disaster

Authors: Zhi Liu, Hui Jiang, Jin Li, Kunhao Chen, Langfang Zhang

Abstract:

Fast acquisition of seismic information and accurate assessment of earthquake disasters are the key problems for emergency rescue after a destructive earthquake. In order to meet the requirements of earthquake emergency response and rescue for cities and counties, a quasi real-time comprehensive evaluation system for earthquake disasters has been developed. Based on monitoring data from a Micro-Electro-Mechanical Systems (MEMS) strong-motion network, a structure database of a county area, and real-time disaster information reported via mobile terminals after an earthquake, a fragility analysis method and a dynamic correction algorithm are combined in the developed system. Real-time evaluation of the seismic disaster in the county region is thereby realized to provide a scientific basis for seismic emergency command, rescue, and decision support.

Keywords: quasi real-time, earthquake disaster data collection, MEMS accelerometer, dynamic correction, comprehensive evaluation

Procedia PDF Downloads 218
19473 Ant-Tracking Attribute: A Model for Understanding Production Response

Authors: Prince Suka Neekia Momta, Rita Iheoma Achonyeulo

Abstract:

The Ant Tracking seismic attribute, applied over a 4-second seismic volume, revealed structural features triggered by clay diapirism, growth fault development, rapid deltaic sedimentation, and intense drilling. The attribute was extracted on vertical seismic sections and time slices. Mega-tectonic structures such as growth faults and clay diapirs are visible on vertical sections, whereas minor lineaments or fractures are obscured. Fractures are distinctly visible on time slices, yielding recognizable patterns that corroborate established geologic models. This seismic attribute model enabled an understanding of fluid flow characteristics and production responses. Three structural patterns recognized in the field include major growth faults, minor faults or lineaments, and networks of fractures. Three growth faults mapped on seismic sections form major deformation bands delimiting the area into three blocks or depocenters. The growth faults trend E-W, dip down-to-south in the basin direction, and cut across the study area. The faults, initiating from about 2000 ms, extend up to 500 ms and tend to propagate parallel and opposite to the growth direction of an upsurging diapiric structure. The diapiric structures form the major deformation bands, originating from great depths (below 2000 ms) and rising to about 1200 ms, where a series of sedimentary layers onlap and pinch out stratigraphically against the diapir. Several other secondary faults or lineaments that form streaks parallel to one another also accompany the growth faults. The fracture networks have no particular trend but surround the well area. Faults identified in the study area have potential as structural hydrocarbon traps, whereas the presence of fractures has created a fractured-reservoir condition that enhances rapid fluid flow, especially of water. High aquifer flow potential, aided by possible fracture permeability, resulted in a rapid decline in oil rate. Through the application of the Ant Tracking attribute, it is possible to obtain a detailed interpretation of structures that can have a direct influence on oil and gas production.

Keywords: seismic, attributes, production, structural

Procedia PDF Downloads 77
19472 Sustainable Tourism and Heritage in Sığacık/Seferihisar

Authors: Sibel Ecemiş Kılıç, Muhammed Aydoğan

Abstract:

The rapid development of cultural tourism has drawn attention to the conservation of cultural values, especially in developing countries that would like to benefit from the economic contribution this type of tourism attracts. Tourism can have both positive and negative outcomes for historical settlements and their residents. The accommodation-oriented rehabilitation and revitalization project in the "Sığacık Old City Zone" is discussed in its spatial, economic, social, and organizational dimensions. The aim is to evaluate the relationship between the development of tourism and sustainable heritage conservation.

Keywords: Sığacık, urban conservation, sustainable tourism, Seferihisar

Procedia PDF Downloads 508
19471 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology

Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões

Abstract:

This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in the field and in industrial environments. The research should be directed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although leaves frequently occlude fruits regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show a potential to satisfy this need while simultaneously providing spatial information as time-of-flight sensors. Light Detection and Ranging (LIDAR) technology also shows huge potential, but it implies much greater costs, and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose on smaller unstructured orchards. Concerning sensor methods for on-tree fruit detection, the major challenge is to overcome the problem of fruit occlusion by leaves and branches. Hence, non-traditional sensors capable of providing some type of differentiation should be investigated.

Keywords: fruit thinning, horticultural field, portable devices, sensor technologies

Procedia PDF Downloads 144
19470 New Results on Exponential Stability of Hybrid Systems

Authors: Grienggrai Rajchakit

Abstract:

This paper is concerned with the exponential stability of switched linear systems with interval time-varying delays. The time delay is any continuous function belonging to a given interval, in which the lower bound of delay is not restricted to zero. By constructing a suitable augmented Lyapunov-Krasovskii functional combined with Leibniz-Newton's formula, a switching rule for the exponential stability of switched linear systems with interval time-varying delays and new delay-dependent sufficient conditions for the exponential stability of the systems are first established in terms of LMIs. Finally, some examples are exploited to illustrate the effectiveness of the proposed schemes.
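The delay-dependent conditions above are expressed as LMIs; as a simplified, delay-free illustration of how such feasibility problems are checked numerically, the sketch below searches for a common quadratic Lyapunov function for two switched subsystems with cvxpy. This is an assumption-laden toy example, not the paper's Lyapunov-Krasovskii construction or its switching rule.

```python
import numpy as np
import cvxpy as cp

# Two subsystem matrices of a switched linear system (delay-free sketch).
A1 = np.array([[-2.0, 0.5], [0.0, -1.5]])
A2 = np.array([[-1.0, 0.2], [0.3, -2.0]])

n = 2
P = cp.Variable((n, n), symmetric=True)   # candidate Lyapunov matrix V(x) = x' P x
eps = 1e-6
constraints = [P >> eps * np.eye(n)]      # P positive definite
for A in (A1, A2):
    # A' P + P A negative definite for every mode => exponential stability
    # under arbitrary switching (a stronger, simpler condition than the paper's).
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # 'optimal' means a common quadratic Lyapunov function was found
```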

Keywords: exponential stability, hybrid systems, time-varying delays, Lyapunov-Krasovskii functional, Leibniz-Newton's formula

Procedia PDF Downloads 547
19469 An Approaching Index to Evaluate a forward Collision Probability

Authors: Yuan-Lin Chen

Abstract:

This paper presents an approaching forward collision probability index (AFCPI) for alerting and assisting the driver in keeping a safe distance to avoid forward collision accidents in highway driving. The time to collision (TTC) and time headway (TH) are used to evaluate the TTC forward collision probability index (TFCPI) and the TH forward collision probability index (HFCPI), respectively. The Mamdani fuzzy inference algorithm is presented, combining TFCPI and HFCPI to calculate the approaching collision probability index of the vehicle. The AFCPI is easy to understand even for drivers who have no professional knowledge of the vehicle field. At the same time, the driver's behavior is taken into account so that the index suits each driver. For the approaching index, the value 0 indicates a 0% probability of forward collision, while the values 0.5 and 1 indicate 50% and 100% probabilities of forward collision, respectively. The AFCPI is useful and easy to understand for alerting the driver to avoid forward collision accidents when driving on the highway.
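A minimal sketch of how TTC- and TH-based indices can be combined into a single approaching index is given below; the critical times and the crisp max-combination are illustrative assumptions standing in for the Mamdani fuzzy inference used in the paper.

```python
def collision_probability_index(gap_m, ego_speed_mps, rel_speed_mps,
                                ttc_critical=4.0, th_critical=2.0):
    """Blend a TTC-based and a TH-based index into one approaching index in [0, 1]."""
    # Time to collision is meaningful only when closing in on the lead vehicle.
    ttc = gap_m / rel_speed_mps if rel_speed_mps > 0 else float("inf")
    th = gap_m / ego_speed_mps if ego_speed_mps > 0 else float("inf")
    # Map each time to a [0, 1] risk index (1 = collision imminent).
    tfcpi = min(1.0, ttc_critical / ttc)
    hfcpi = min(1.0, th_critical / th)
    # Crisp stand-in for the Mamdani fuzzy combination described in the paper.
    return max(tfcpi, hfcpi)

# 60 m gap, ego speed 25 m/s, closing at 8 m/s.
print(collision_probability_index(gap_m=60.0, ego_speed_mps=25.0, rel_speed_mps=8.0))
```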

Keywords: approaching index, forward collision probability, time to collision, time headway

Procedia PDF Downloads 298
19468 The Impact of the Urban Planning and Environmental Problems over the Quality of Life Case Study: Median Zone of Bucharest's Sector 1, Romania

Authors: Cristian Cazacu, Bela Kobulniczky

Abstract:

Even though the median area of Bucharest's Sector 1 nowadays has one of the best reputations in terms of quality of life, the problems in urban planning of the last twenty years, as well as those related to the urban environment, have become more and more obvious and acute. All this happened as non-compliance with urban and spatial planning laws, combined with uncontrolled territorial expansion in certain areas and faulty management of public and private spaces, became more pronounced. The action of all these factors has been felt more and more strongly in the territory over the last twenty years, degrading the quality of the urban environment and, in parallel, affecting the general level of the inhabitants' quality of life. Our methodology is based on analyzing a wide range of environmental parameters and on using advanced resources and skills for mapping planning and environmental dysfunctions, as well as on the possibility of integrating information into GIS programs, with all data sets corroborated with problems related to spatial planning management and inaccuracies of the urbanistic sector. In the end, we managed to obtain a calculated and realistic image of the dysfunctions and a quantitative view of their magnitude in the territory. We also succeeded in creating a full general map of the degree of degradation of the urban environment by typologies of urban tissues. Moreover, the methods we applied can also be used globally to calculate and create realistic images and intelligent maps of the quality of the environment in areas larger than this one. Our study shows that environmental degradation occurred differently in the urban tissues of our study area, depending on several factors, and reveals the faulty way in which the recovery and urban regeneration processes of recent years have led to the creation of new territorial dysfunctions. The general, centralized results show that the analyzed space has a much wider range of problems than initially thought, although notoriety and social etiquette place it far above other spaces in the same city.

Keywords: environment, GIS, planning, urban tissues

Procedia PDF Downloads 151
19467 Rural Water Supply Services in India: Developing a Composite Summary Score

Authors: Mimi Roy, Sriroop Chaudhuri

Abstract:

Sustainable water supply is among the basic needs for human development, especially in the rural areas of developing nations, where safe water supply and basic sanitation infrastructure are direly needed. In light of the above, we propose a simple methodology to develop a composite water sustainability index (WSI) to assess the collective performance of the existing rural water supply services (RWSS) in India over time. The WSI will be computed by summarizing the details of all the different varieties of water supply schemes presently available in India, comprising 40 liters per capita per day (lpcd), 55 lpcd, and piped water supply (PWS) per household. The WSI will be computed annually, between 2010 and 2016, to elucidate changes in holistic RWSS performance. Results will be integrated within a robust geospatial framework to identify the 'hotspots' (states/districts) which have persistent issues with adequate RWSS coverage and warrant spatially optimized policy reforms in the future to support sustainable human development. Data will be obtained from the National Rural Drinking Water Program (NRDWP), operating under the aegis of the Ministry of Drinking Water and Sanitation (MoDWS), at the state/district/block levels to offer the authorities a cross-sectional view of RWSS at different levels of the administrative hierarchy. Due to its simple design, complemented by spatio-temporal cartograms, a similar approach can also be adopted in other parts of the world where RWSS need a thorough appraisal.
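A minimal sketch of one way such a composite summary score could be assembled is shown below: each coverage indicator is min-max normalized and combined with equal weights. The indicator names, the normalization, and the equal weighting are assumptions for illustration, not the NRDWP definitions or the authors' final aggregation.

```python
import numpy as np
import pandas as pd

def water_sustainability_index(df, weights=None):
    """Composite WSI: min-max normalize each coverage indicator, then take a weighted sum."""
    indicators = ["coverage_40lpcd", "coverage_55lpcd", "piped_households"]
    norm = (df[indicators] - df[indicators].min()) / (df[indicators].max() - df[indicators].min())
    w = np.asarray(weights) if weights is not None else np.full(len(indicators), 1 / len(indicators))
    out = df.copy()
    out["WSI"] = norm.values @ w
    return out

# Hypothetical district-level coverage percentages for one year.
data = pd.DataFrame({
    "district": ["A", "B", "C"],
    "coverage_40lpcd": [92.0, 75.0, 60.0],
    "coverage_55lpcd": [55.0, 40.0, 30.0],
    "piped_households": [35.0, 20.0, 10.0],
})
print(water_sustainability_index(data))
```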

Keywords: rural water supply services, piped water supply, sustainability, composite index, spatial, drinking water

Procedia PDF Downloads 303
19466 An Approach to Analyze Testing of Nano On-Chip Networks

Authors: Farnaz Fotovvatikhah, Javad Akbari

Abstract:

The test time of a test architecture is an important factor which depends on the architecture's delay and test patterns. Here, a new architecture for storing test results based on a network on chip is presented. In addition, a simple analytical model is proposed to calculate the link test time for a built-in self-tester (BIST) and an external tester (Ext) in multiprocessor systems. The results extracted from the model are verified using FPGA implementation and experimental measurements. Systems consisting of 16, 25, and 36 processors are implemented and simulated, and the test time is calculated. In addition, BIST and Ext are compared in terms of test time under different conditions, such as different numbers of test patterns and nodes. Using the model, the maximum testing frequency can be calculated and the test structure optimized for high-speed testing.

Keywords: test, nano on-chip network, JTAG, modelling

Procedia PDF Downloads 492
19465 Improving the Performances of the nMPRA Architecture by Implementing Specific Functions in Hardware

Authors: Ionel Zagan, Vasile Gheorghita Gaitan

Abstract:

Minimizing the response time to asynchronous events in a real-time system is an important factor in increasing the speed of response and an interesting concept in designing equipment fast enough for the most demanding applications. The present article presents the results regarding the validation of the nMPRA (Multi Pipeline Register Architecture) using the FPGA Virtex-7 circuit. The nMPRA concept is a hardware processor with the scheduler implemented at the processor level; this is done without affecting possible bus communication, as is the case with other CPU solutions. The implementation of static and dynamic scheduling operations in hardware and the improved handling of interrupts and events by the real-time executive described in the present article represent a key solution for eliminating the overhead of operating system functions. The nMPRA processor is capable of executing preemptive scheduling using various algorithms without a software scheduler. Therefore, we also present various scheduling methods and algorithms used to schedule real-time tasks.

Keywords: nMPRA architecture, pipeline processor, preemptive scheduling, real-time system

Procedia PDF Downloads 374