Search results for: resolution time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18949

18379 Quality Assurance for an On-Board Imaging System of a Linear Accelerator: Five Months' Data Analysis

Authors: Liyun Chang, Cheng-Hsiang Tsai

Abstract:

To ensure that radiation is delivered precisely to the target in cancer patients, linear accelerators are equipped with a pretreatment on-board imaging system through which the patient setup is verified before each daily treatment. New-generation radiotherapy using beam-intensity modulation, which usually involves steep dose gradients, is claimed to achieve both a higher degree of dose conformation in the target and a further reduction of toxicity in normal tissues. However, this benefit is lost if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues instead of the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager® (OBI) system of a Varian Clinac-iX linear accelerator was performed through procedures adapted from a relevant report and AAPM TG-142. Two image modalities of the OBI system, 2D radiography and 3D cone-beam computed tomography (CBCT), were examined. Daily and monthly QA was carried out for five months in the categories of safety, geometric accuracy, and image quality. A marker phantom and a blade calibration plate were used for the geometric-accuracy QA, while the Leeds phantom and the Catphan 504 phantom were used for the QA of radiographic and CBCT image quality, respectively. The reference images were generated with a GE LightSpeed CT simulator and an ADAC Pinnacle treatment planning system. Finally, the image quality was analyzed with the OsiriX medical imaging system. For the geometric-accuracy test, the average deviations of the OBI isocenter in each direction are less than 0.6 mm with uncertainties below 0.2 mm, while all other items show displacements of less than 1 mm. For radiographic image quality, the spatial resolution is 1.6 lp/cm with contrast below 2.2%. The spatial resolution, low-contrast visibility, and HU homogeneity of CBCT are better than 6 lp/cm, less than 1%, and within 20 HU, respectively. All tests are within the criteria except the HU value of Teflon measured in full-fan mode, which exceeded the suggested value, possibly because of Teflon's intrinsically high HU value, and needs to be rechecked. The OBI system in our facility was thus demonstrated to be reliable, with stable image quality. QA of the OBI system is truly necessary to achieve the best treatment for each patient.

Keywords: CBCT, image quality, quality assurance, OBI

Procedia PDF Downloads 293
18378 Survival Analysis Based Delivery Time Estimates for Display FAB

Authors: Paul Han, Jun-Geol Baek

Abstract:

In the flat panel display industry, the scheduler and dispatching system that meet production target quantities and production deadlines are the major production management systems, controlling each facility's production orders and the distribution of WIP (Work in Process). In the dispatching system, delivery time is a key factor determining when a lot can be supplied to a facility. In this paper, we use survival analysis methods to identify the main factors affecting delivery time and to build a forecasting model for it. Among survival analysis techniques, the Cox proportional hazards model is used to select the important explanatory variables, and the Accelerated Failure Time (AFT) model is used to build the prediction model. Performance comparisons were conducted with two other models: a statistical model based on transfer history and a linear regression model using the same explanatory variables as the AFT model. Under the Mean Square Error (MSE) criterion, the AFT model reduced the error by 33.8% compared to the existing prediction model and by 5.3% compared to the linear regression model. This survival analysis approach is applicable to implementing a delivery time estimator in display manufacturing and can contribute to improving the productivity and reliability of the production management system.
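
As an illustration of the modeling pipeline described above, a minimal sketch using the Python lifelines library is shown below: a Cox proportional hazards model screens covariates and a Weibull AFT model is then fitted for prediction. The column names and the synthetic data are assumptions for the example, not the actual FAB transfer-history data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

# Hypothetical lot-transfer records: delivery time in minutes, an "observed"
# flag (1 = delivery completed), and candidate explanatory variables.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "delivery_time": rng.weibull(1.5, n) * 60 + 5,
    "observed": 1,
    "wip_level": rng.integers(10, 200, n),
    "distance": rng.uniform(50, 500, n),
    "priority": rng.integers(0, 2, n),
})

# Step 1: Cox PH model to screen explanatory variables by significance.
cph = CoxPHFitter()
cph.fit(df, duration_col="delivery_time", event_col="observed")
print(cph.summary[["coef", "p"]])

# Step 2: Weibull AFT model on the covariates for prediction.
aft = WeibullAFTFitter()
aft.fit(df, duration_col="delivery_time", event_col="observed")
pred = aft.predict_expectation(df)          # expected delivery time per lot
mse = np.mean((pred - df["delivery_time"]) ** 2)
print(f"in-sample MSE: {mse:.1f}")
```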

Keywords: delivery time, survival analysis, Cox PH model, accelerated failure time model

Procedia PDF Downloads 535
18377 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which maps the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: slow and unreliable mechanical spatial scanning, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity: backscattering can tune the laser to the eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si3N4 microresonators. We obtained robust 1400-1700 nm comb generation with 150 GHz or 1 THz line spacing and measured Lorentzian linewidths of less than 1 kHz for stable, MHz-spaced beat notes in a GHz band, using two separate chips, each pumped by its own self-injection locked laser. A deep investigation of the SIL dynamics allowed us to identify a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as the pump. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. To test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. The narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with a velocity resolution down to 16 nm/s, whereas the non-SIL diode laser only allowed 160 nm/s with good accuracy. The results obtained are in agreement with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 66
18376 Tabu Search Algorithm for Ship Routing and Scheduling Problem with Time Window

Authors: Khaled Moh. Alhamad

Abstract:

This paper describes a tabu search heuristic for a ship routing and scheduling problem (SRSP). The method was developed to address the problem of loading cargoes for many customers using heterogeneous vessels. Constraints relate to the delivery time windows imposed by customers, the time horizon by which all deliveries must be made, and vessel capacities. The results of a computational investigation are presented. Solution quality and execution time are explored with respect to problem size and the parameters controlling the tabu search, such as tenure and neighbourhood size.
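
A minimal, generic tabu search skeleton is sketched below to illustrate the mechanics the paper relies on (tabu tenure, neighbourhood sampling, aspiration). The move generator and cost function are placeholder assumptions, not the actual SRSP formulation with time windows and vessel capacities.

```python
import random

def tabu_search(initial, neighbours, cost, tenure=10, max_iters=1000):
    """Generic tabu search: `neighbours(sol)` yields (move, new_sol) pairs,
    `cost(sol)` returns the objective to minimise."""
    current = best = initial
    best_cost = cost(best)
    tabu = {}                                # move -> iteration until which it is tabu
    for it in range(max_iters):
        candidates = []
        for move, sol in neighbours(current):
            c = cost(sol)
            # Aspiration: accept a tabu move only if it improves the best solution.
            if tabu.get(move, -1) >= it and c >= best_cost:
                continue
            candidates.append((c, move, sol))
        if not candidates:
            break
        c, move, current = min(candidates, key=lambda x: x[0])
        tabu[move] = it + tenure             # forbid reversing this move for `tenure` iterations
        if c < best_cost:
            best, best_cost = current, c
    return best, best_cost

# Toy usage: permutation-coded "schedule" whose cost is a weighted position sum.
def swap_neighbours(perm, k=20):
    n = len(perm)
    for _ in range(k):
        i, j = random.sample(range(n), 2)
        new = list(perm)
        new[i], new[j] = new[j], new[i]
        yield (min(i, j), max(i, j)), tuple(new)

weights = [random.random() for _ in range(30)]
cost_fn = lambda perm: sum(w * p for w, p in zip(weights, perm))
best, c = tabu_search(tuple(range(30)), swap_neighbours, cost_fn)
print(c)
```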

Keywords: heuristic, scheduling, tabu search, transportation

Procedia PDF Downloads 502
18375 Understanding the Classification of Rain Microstructure and Estimation of the Z-R Relationship Using a Micro Rain Radar in a Tropical Region

Authors: Tomiwa, Akinyemi Clement

Abstract:

Tropical regions experience diverse and complex precipitation patterns, posing significant challenges for accurate rainfall estimation and forecasting. This study addresses the problem of effectively classifying tropical rain types and refining the Z-R (Reflectivity-Rain Rate) relationship to enhance rainfall estimation accuracy. Through a combination of remote sensing, meteorological analysis, and machine learning, the research aims to develop an advanced classification framework capable of distinguishing between different types of tropical rain based on their unique characteristics. This involves utilizing high-resolution satellite imagery, radar data, and atmospheric parameters to categorize precipitation events into distinct classes, providing a comprehensive understanding of tropical rain systems. Additionally, the study seeks to improve the Z-R relationship, a crucial aspect of rainfall estimation. One year of rainfall data was analyzed using a Micro Rain Radar (MRR) located at The Federal University of Technology Akure, Nigeria, measuring rainfall parameters from ground level to a height of 4.8 km with a vertical resolution of 0.16 km. Rain rates were classified into low (stratiform) and high (convective) based on various microstructural attributes such as rain rates, liquid water content, Drop Size Distribution (DSD), average fall speed of the drops, and radar reflectivity. By integrating diverse datasets and employing advanced statistical techniques, the study aims to enhance the precision of Z-R models, offering a more reliable means of estimating rainfall rates from radar reflectivity data. This refined Z-R relationship holds significant potential for improving our understanding of tropical rain systems and enhancing forecasting accuracy in regions prone to heavy precipitation.
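
For readers unfamiliar with the Z-R refinement step, the sketch below fits the standard power-law relationship Z = aR^b by linear regression in log-log space, with separate fits for the stratiform and convective classes. The reflectivity and rain-rate arrays and the classification threshold are synthetic stand-ins, not the MRR data from Akure.

```python
import numpy as np

# Synthetic (Z, R) pairs standing in for MRR-derived reflectivity (mm^6/m^3)
# and rain rate (mm/h); real data would come from the radar retrievals.
rng = np.random.default_rng(1)
R = rng.uniform(0.5, 50.0, 300)                     # rain rate, mm/h
Z = 200.0 * R**1.6 * rng.lognormal(0.0, 0.2, 300)   # power-law "truth" plus noise

# Fit log10(Z) = log10(a) + b * log10(R)
b, log_a = np.polyfit(np.log10(R), np.log10(Z), 1)
print(f"all rain: Z = {10**log_a:.1f} * R^{b:.2f}")

# Stratiform/convective split by a rain-rate threshold (value is illustrative).
threshold = 10.0  # mm/h
for label, mask in [("stratiform", R < threshold), ("convective", R >= threshold)]:
    bi, log_ai = np.polyfit(np.log10(R[mask]), np.log10(Z[mask]), 1)
    print(f"{label}: Z = {10**log_ai:.1f} * R^{bi:.2f}")
```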

Keywords: remote sensing, precipitation, drop size distribution, micro rain radar

Procedia PDF Downloads 23
18374 Shape Evolution of CdSe Quantum Dots during the Synthesis in the Presence of Silver Halides

Authors: Pavel Kotin, Sergey Dotofeev, Daniil Kozlov, Alexey Garshev

Abstract:

We report an investigation of CdSe quantum dots synthesized in the presence of silver halides. To understand the process of nanoparticle formation in more detail, we varied the amount of silver halide in the synthesis and sampled the reaction during colloidal growth. Our attention was focused on the shape, structure, and optical properties of the nanoparticles. We used a colloidal synthesis method in which cadmium oleate, tri-n-octylphosphine selenide (TOPSe), and AgHal in TOP were the precursors of cadmium, selenium, and silver halides, respectively. The molar Ag/Cd ratio in the synthesis was varied from 1/16 to 1/1. Samples were taken 20 s, 5 min, and 30 min after the beginning of quantum dot nucleation. The nanoparticles were characterized by transmission electron microscopy (including high-resolution TEM), X-ray diffraction, and optical spectroscopy. It was established that silver halides lead to tetrapods with different leg lengths and to large ellipsoidal nanoparticles possessing intense near-IR photoluminescence. Changing the amount of silver halide in the synthesis and selecting an optimal growth time allow control of the shape and of the share of tetrapods or ellipsoidal nanoparticles in the product. Our main efforts were focused on a detailed investigation of the quantum dot structure and shape evolution and, finally, on the mechanisms of such nanoparticle formation.

Keywords: colloidal quantum dots, shape evolution, silver doping, tetrapods

Procedia PDF Downloads 286
18373 The Application of Bayesian Heuristic for Scheduling in Real-Time Private Clouds

Authors: Sahar Sohrabi

Abstract:

The emergence of Cloud data centers has revolutionized the IT industry. Private Clouds in particular provide Cloud services for a specific group of customers or businesses. In a real-time private Cloud, each task given to the system has a deadline that desirably should not be violated. Scheduling tasks in a real-time private Cloud determines how the available resources in the system are shared among incoming tasks. The aim of the scheduling policy is to optimize the system outcome, which for a real-time private Cloud can include energy consumption, deadline violations, execution time, and the number of host switches. Different scheduling policies can be used, each leading to a sub-optimal outcome in certain system settings. A Bayesian scheduling strategy is proposed to further improve the system outcome. The Bayesian strategy was shown to outperform all the selected policies; it is also flexible in dealing with complex patterns of incoming tasks and has the ability to adapt.

Keywords: cloud computing, scheduling, real-time private cloud, Bayesian

Procedia PDF Downloads 354
18372 Bright Light Effects on the Concentration and Diffuse Attention Reaction Time, Tension, Anger, Fatigue and Alertness among Shift Workers

Authors: Mohammad Imani, Jabraeil Nasl Seraji, Abolfazl Zakerian

Abstract:

Background: Reaction time is the amount of time it takes to respond to a stimulus, i.e., the time that passes between the introduction of a stimulus and the subject's reaction to it. The aim of this interventional study was to evaluate the effects of bright light on concentration and diffuse attention reaction time, tension, anger, fatigue, and alertness among shift workers. Several stimuli can lengthen or shorten reaction time; bright light, as an environmental factor, can reduce it. Material & Method: This cross-sectional descriptive study was conducted in 1391 on 88 subjects (44 fixed morning workers and 44 shift workers). After a randomly selected sample size calculation, a concentration and diffuse attention (reaction time) test was carried out at eight times over 24 h (13:00, 16:00, 19:00, 22:00, 01:00, 04:00, 07:00, and 10:00) under ordinary lighting. After the intervention with bright light (4500 lux), the reaction time test was repeated. The obtained data were analyzed with the statistical software SPSS 19 using t-tests and ANOVA. Results: Under ordinary light, there was no significant difference (95% CI, p > 0.05) between the mean reaction times of the fixed morning workers and the shift workers. After the intervention with bright light (4500 lux), there was a significant difference (95% CI, p < 0.05) between the mean concentration and diffuse attention reaction times under ordinary light exposure for the fixed morning workers and under bright light exposure for the shift workers. Conclusion: At some times during the 24 h period under ordinary light exposure, concentration and diffuse attention reaction time changed in shift workers. After the intervention, exposure to bright light (4500 lux) as a light shower decreased focused and diffuse attention reaction time, tension, anger, and fatigue.

Keywords: bright light, reaction time, tension, anger, fatigue, alertness

Procedia PDF Downloads 379
18371 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing

Authors: Yohann R. J. Thomas, Sébastien Solan

Abstract:

Conventional battery technology involves assembling the electrode/separator/electrode stack by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are used only for the electrode process. These processes are suited to large-scale battery production and perfectly adapted to many application requirements. Nevertheless, demand is rising for easier and more cost-efficient production modes and for flexible, custom-shaped, efficient small-sized batteries. Thin-film, printable batteries are one of the key areas for printed electronics. Within the framework of the European BASMATI project, we are investigating the feasibility of a new lithium-ion battery design: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Directly fully-printed batteries make it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting function), printed sensors (autonomous sensors), or RFID (communication function), on a common substrate to produce fully integrated, thin, and flexible devices. To fulfil these specifications, a high-resolution printing process has been selected: aerosol jet printing. To fit the parameters of this process, we worked on nanomaterial formulations for the current collectors and electrodes. In addition, an advanced printed polymer electrolyte is being developed to be implemented directly in the printing process, in order to avoid the liquid-electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, and flash sintering was then applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment. Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were printed successfully. For anodes, LTO and graphite were shown to be good candidates for the fully printed battery. The electrochemical performance of these materials was evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those obtained with a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte has been developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks: first, we will synthesize and formulate new specific metal-oxide-based nanomaterials; then a fully printed device will be produced and its electrochemical performance evaluated.

Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes

Procedia PDF Downloads 245
18370 Generating Real-Time Visual Summaries from Located Sensor-Based Data with Chorems

Authors: Z. Bouattou, R. Laurini, H. Belbachir

Abstract:

This paper describes a new approach for the automatic generation of visual summaries, combining cartographic visualization methods with real-time modeling of sensor data. The concept of chorems is an interesting candidate for visualizing real-time summaries of geographic databases. Chorems were defined by Roger Brunet (1980) as schematized visual representations of territories; however, time is not yet handled in existing chorematic map approaches, an issue discussed in this paper. Our approach is based on spatial analysis: the values recorded at the same time by the available sensors are interpolated, so that from a set of distributed observations over the study area, spatial interpolation methods yield concentration fields. From these fields, and by applying spatial data mining procedures on the fly, important patterns can be extracted as geographic rules. These patterns are then visualized as chorems.
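
As a concrete illustration of the interpolation step described above (turning scattered, simultaneous sensor readings into a concentration field), the sketch below applies inverse-distance weighting on a regular grid. The sensor coordinates and values are invented for the example, and IDW is only one of several spatial interpolation methods the authors could use.

```python
import numpy as np

def idw_grid(xs, ys, values, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of point observations onto a grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    field = np.zeros_like(gx, dtype=float)
    weights_sum = np.zeros_like(gx, dtype=float)
    for x, y, v in zip(xs, ys, values):
        d = np.hypot(gx - x, gy - y)
        d = np.where(d < 1e-9, 1e-9, d)      # avoid division by zero at sensor locations
        w = 1.0 / d**power
        field += w * v
        weights_sum += w
    return field / weights_sum

# Hypothetical sensor snapshot at one timestamp (positions in km, values in arbitrary units).
xs = np.array([2.0, 5.5, 8.0, 3.5, 7.0])
ys = np.array([1.5, 4.0, 2.5, 7.5, 6.0])
vals = np.array([12.0, 30.0, 18.0, 7.0, 25.0])

grid = np.linspace(0, 10, 50)
concentration = idw_grid(xs, ys, vals, grid, grid)
print(concentration.shape, concentration.min(), concentration.max())
```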

Keywords: geovisualization, spatial analytics, real-time, geographic data streams, sensors, chorems

Procedia PDF Downloads 396
18369 Influence of Cobalt Incorporation on the Structure and Properties of Sol-Gel Derived Mesoporous Bioglass Nanoparticles

Authors: Ahmed El-Fiqi, Hae-Won Kim

Abstract:

Incorporation of therapeutic elements such as Sr, Cu, and Co into the bioglass structure and their release as ions is considered one of the most promising approaches to enhance cellular responses, e.g., osteogenesis and angiogenesis. Here, cobalt, an angiogenesis promoter, has been incorporated (at 0, 1, and 4 mol%) into sol-gel derived calcium silicate mesoporous bioglass nanoparticles. The composition and structure of cobalt-free (CFN) and cobalt-doped (CDN) mesoporous bioglass nanoparticles have been analyzed by X-ray fluorescence (XRF), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and Fourier-transform infrared spectroscopy (FT-IR). The physicochemical properties of CFN and CDN have been investigated using high-resolution transmission electron microscopy (HR-TEM), selected area electron diffraction (SAED), and energy-dispersive X-ray spectroscopy (EDX). Furthermore, the textural properties, including specific surface area, pore volume, and pore size, have been determined from N₂ sorption analyses. Surface charges of CFN and CDN were also determined from surface zeta potential measurements. The release of ions, including Co²⁺, Ca²⁺, and SiO₄⁴⁻, has been analyzed using inductively coupled plasma atomic emission spectrometry (ICP-AES). Loading and release of diclofenac as an anti-inflammatory drug model were explored in vitro using ultraviolet-visible spectroscopy (UV-Vis). XRD results confirmed the amorphous state of CFN and CDN, whereas XRF further confirmed that their chemical compositions are very close to the designed compositions. HR-TEM analyses revealed nanoparticles with spherical morphologies, highly mesoporous textures, and sizes in the range of 90 - 100 nm. Moreover, N₂ sorption analyses revealed that the nanoparticles have pores with sizes of 3.2 - 2.6 nm, pore volumes of 0.41 - 0.35 cc/g, and high surface areas in the range of 716 - 830 m²/g. High-resolution XPS analysis of the Co 2p core level provided structural information about the Co atomic environment and confirmed the electronic state of Co in the glass matrix. ICP-AES analysis showed the release of therapeutic doses of Co²⁺ ions from 4% CDN, up to 100 ppm within 14 days. Finally, diclofenac loading and release confirmed the drug/ion co-delivery capability of 4% CDN.

Keywords: mesoporous bioactive glass, nanoparticles, cobalt ions, release

Procedia PDF Downloads 101
18368 Comparison of Applicability of Time Series Forecasting Models VAR, ARCH and ARMA in Management Science: Study Based on Empirical Analysis of Time Series Techniques

Authors: Muhammad Tariq, Hammad Tahir, Fawwad Mahmood Butt

Abstract:

Purpose: This study attempts to identify the best forecasting methodologies among time series models; the VAR, ARCH, and ARMA models are considered for the analysis. Methodology: Benchmarks, or parameters, such as adjusted R-squared, F-statistics, the Durbin-Watson statistic, and the location of the characteristic roots have been critically and empirically analyzed. The empirical analysis uses time series data on the Consumer Price Index and closing stock prices. Findings: The results show that the VAR model performed better than the other models; both its reliability and its significance are highly appreciable. In contrast, the ARCH model showed very poor forecasting results. The results of the ARMA model were ambiguous: the AR roots indicated that the model is stationary, while the MA roots indicated that it is invertible; forecasts based on the ARMA model would therefore remain doubtful. It is concluded that the VAR model provides the best forecasting results. Practical Implications: This paper provides empirical evidence for the application of time series forecasting models and therefore a basis for selecting the best time series forecasting model.
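
A minimal sketch of how such a comparison could be set up in Python is given below, using statsmodels for the VAR and ARMA fits and the arch package for the ARCH fit. The two series are simulated placeholders for the CPI and closing-price data, and the adequacy checks shown (Durbin-Watson statistic, AR/MA root moduli) mirror the benchmarks named in the abstract rather than reproducing the authors' exact procedure.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.stattools import durbin_watson
from arch import arch_model

# Simulated stand-ins for the CPI and closing stock price series.
rng = np.random.default_rng(42)
n = 300
cpi = 100 + np.cumsum(rng.normal(0.2, 0.5, n))
close = 50 + np.cumsum(rng.normal(0.1, 1.0, n))
data = pd.DataFrame({"cpi": cpi, "close": close}).diff().dropna()

# VAR: fit, check residual autocorrelation with Durbin-Watson, forecast 5 steps.
var_res = VAR(data).fit(maxlags=2)
print("VAR Durbin-Watson:", durbin_watson(var_res.resid.values))
print(var_res.forecast(data.values[-var_res.k_ar:], steps=5))

# ARMA (via ARIMA with d=0): check AR/MA root moduli (>1 => stationary/invertible).
arma_res = ARIMA(data["close"], order=(1, 0, 1)).fit()
print("AR roots:", np.abs(arma_res.arroots), "MA roots:", np.abs(arma_res.maroots))

# ARCH(1) on the differenced closing price.
arch_res = arch_model(data["close"], vol="ARCH", p=1).fit(disp="off")
print(arch_res.summary())
```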

Keywords: forecasting, time series, auto regression, ARCH, ARMA

Procedia PDF Downloads 338
18367 Thick Disc Molecular Gas Fraction in NGC 6946

Authors: Narendra Nath Patra

Abstract:

Several recent studies reinforce the existence of a thick molecular disc in galaxies alongside the dynamically cold thin disc. Assuming a two-component molecular disc, we model the disc of NGC 6946 as a four-component system consisting of stars, HI, thin-disc molecular gas, and thick-disc molecular gas in vertical hydrostatic equilibrium. We then set up the joint Poisson-Boltzmann equation of hydrostatic equilibrium and solve it numerically to obtain the three-dimensional density distribution of the different baryonic components. Using the density solutions and the observed rotation curve, we further build a three-dimensional dynamical model of the molecular disc and from it produce a simulated CO spectral cube and spectral-width profile. We find that the simulated spectral-width profiles differ distinguishably for different assumed thick-disc molecular gas fractions. Several CO spectral-width profiles are then produced for different assumed thick-disc molecular gas fractions and compared with the observed profile to obtain the best-fit thick-disc molecular gas fraction profile. We find that the thick-disc molecular gas fraction in NGC 6946 remains largely constant across its molecular disc, with a mean value of 0.70 +/- 0.09. We also estimate the amount of extra-planar molecular gas in NGC 6946: 60% of the total molecular gas is extra-planar in the central region, whereas this fraction reduces to ~ 35% at the edge of the molecular disc. With our method, for the first time, we estimate the thick-disc molecular gas fraction as a function of radius in an external galaxy with sub-kpc resolution.

Keywords: galaxies: kinematics and dynamics, galaxies: spiral, galaxies: structure, ISM: molecules, molecular data

Procedia PDF Downloads 133
18366 The Study on Corpse Floating Time in Shanghai Region of China

Authors: Hang Meng, Wen-Bin Liu, Bi Xiao, Kai-Jun Ma, Jian-Hui Xie, Geng Fei, Tian-Ye Zhang, Lu-Yi Xu, Dong-Chuan Zhang

Abstract:

Victims in water are often found in coastal regions, along rivers, or in regions with lakes. In China, the examination of bodies recovered from water is conducted by forensic doctors working in the public security bureau. Because the time of entry into the water is unclear for most victims, and surveillance images and other information are often lacking, determining when a corpse entered the water is very difficult. After a corpse enters the water it first sinks; putrefaction gases are then produced, which make the density of the corpse less than that of water, so it rises again. The factor that determines the corpse floating time is therefore temperature. On the basis of temperature data obtained for the Shanghai region of China (Shanghai has a northern subtropical marine monsoon climate, with an average annual temperature of about 17.1℃; the hottest month is July, with an average monthly temperature of 28.6℃, and the coldest month is January, with an average monthly temperature of 4.8℃), this study selected about 100 cases with definite times of water entry and corpse floating, analyzed them, and obtained empirical rules for the corpse floating time. For example, in the Shanghai region, for entry on June 15th or October 15th the corpse floating time is about 1.5 days; bodies that enter the water in early December surface around January 1st of the following year, and bodies that enter the water in late December float in March of the next year. The results of this study can be used to roughly estimate the time of water entry of victims in Shanghai, and forensic doctors elsewhere can also draw on these results to infer when the corpses of victims in the water will surface.

Keywords: corpse enter water time, corpse floating time, drowning, forensic pathology, victims in the water

Procedia PDF Downloads 191
18365 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements

Authors: Shagufta Tabassum

Abstract:

The study of the dielectric properties of binary liquid mixtures is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperative and molecular dynamics of H-bonded systems. In this paper, we discuss the basic calibration and normalization procedures for time-domain reflectometry measurements. Our approach is to explain the different types of errors that occur during TDR measurements and how these errors can be eliminated or minimized.

Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique

Procedia PDF Downloads 198
18364 Excessive Screen Time of High School Students in Their Free Time Promotes Our Young People's Risk of Obesity

Authors: Susana Aldaba Yaben, Marga Echauri Ozcoidi, Rosario Osinaga Cenoz

Abstract:

A diagnostic survey was carried out with students of Berriozar High School aged between 12 and 15 years (both inclusive) on their lifestyles in relation to eating habits, BMI (Body Mass Index), physical activity, drugs, interpersonal relationships, and screen time. The aim of this survey is to identify the needs of this population and, depending on the results, to plan socio-educational activities. This action is part of Berriozar's Community Health Promotion Programme on healthy lifestyles in childhood and youth. Eating habits, a lack of physical activity, and excessive screen time contribute to the 26.75% of young people who are obese or overweight. First, many of them have a diet rich in saturated fats and sugars; second, most of them do not exercise daily; and finally, their screen time exceeds the recommendation (up to 2 hours a day).

Keywords: lifestyle, diet, BMI, physical activity, screen time, education, youth

Procedia PDF Downloads 569
18363 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of video content available has also significantly increased. The number of smartphone users is currently at its peak, and many people increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics such as VMAF and SSIM are regarded as some of the best-performing metrics, but these metrics are predominantly evaluated on professionally produced video content using professional tools, lighting conditions, etc. No study has specifically examined the performance of the metrics on content taken by users on the most commonly available devices. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even when they include content taken in poor lighting conditions with lower-end devices. These devices are subject to many distortions arising from various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective VQA metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. For this research, we created a custom dataset containing a total of 90 videos taken with the three most commonly used device types: an Android smartphone, an iOS smartphone, and a DSLR. To the videos taken on each of these devices, the six most common types of distortions that users face were applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six distortions has three levels of degradation. The five most popular full-reference VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on the distortions were recorded. We found that blur is the artifact on which most of the metrics did not perform well. To understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was carried out using the HEVC codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as resolution increases, the metrics tend to become more accurate; the best-performing metric is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264, whereas when HEVC compression is applied, SSIM and VMAF perform significantly better.
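
As a small illustration of the kind of full-reference comparison used in the study, the sketch below computes one of the evaluated metrics, SSIM (plus PSNR), between a reference frame and blurred versions of it with scikit-image. The synthetic frame stands in for actual decoded video content, and metrics such as VMAF or VQM would require dedicated tools that are not shown here.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
from scipy.ndimage import gaussian_filter

# Synthetic grayscale "reference frame" standing in for a decoded video frame.
rng = np.random.default_rng(7)
reference = rng.uniform(0, 1, (480, 640))

# Apply a blur distortion at three levels of degradation, as in the dataset design.
for sigma in (1.0, 2.0, 4.0):
    distorted = gaussian_filter(reference, sigma=sigma)
    ssim = structural_similarity(reference, distorted, data_range=1.0)
    psnr = peak_signal_noise_ratio(reference, distorted, data_range=1.0)
    print(f"blur sigma={sigma}: SSIM={ssim:.3f}, PSNR={psnr:.2f} dB")
```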

Keywords: distortion, metrics, performance, resolution, video quality assessment

Procedia PDF Downloads 198
18362 Numerical Investigation of Phase Change Materials (PCM) Solidification in a Finned Rectangular Heat Exchanger

Authors: Mounir Baccar, Imen Jmal

Abstract:

Because of the rise in energy costs, thermal storage systems designed for the heating and cooling of buildings are becoming increasingly important. Energy storage not only reduces the time or rate mismatch between energy supply and demand but also plays an important role in energy conservation. One of the preferred storage techniques is Latent Heat Thermal Energy Storage (LHTES) using Phase Change Materials (PCM), owing to its high energy storage density and isothermal storage process. This paper presents a numerical study of the solidification of a PCM (paraffin RT27) in a rectangular thermal storage exchanger for air conditioning systems, taking into account the presence of natural convection. The continuity, momentum, and thermal energy equations are solved by the finite volume method. The main objective of this numerical approach is to study the effect of natural convection on the PCM solidification time and the impact of the number of fins on heat transfer enhancement. It also investigates the temporal evolution of PCM solidification, as well as the longitudinal profiles of the HTF circulating in the duct. The present research considers two cases: the first treats the solidification of PCM in a PCM-air heat exchanger without fins, while the second focuses on the solidification of PCM in a heat exchanger of the same type with fins added (3 fins, 5 fins, and 9 fins). Without fins, stratification of the PCM from colder to hotter during the heat transfer process was noted. This behavior prevents the formation of thermo-convective cells in the PCM region and thus makes heat transfer almost purely conductive. In the presence of fins, energy extraction from the PCM to the airflow occurs at a faster rate, which contributes to reducing the discharging time and increasing the outlet air temperature (HTF). However, for a large number of fins (9 fins), the enhancement of the solidification process is not significant, because the confinement of the liquid PCM spaces restricts the development of thermo-convective flow. Hence, it can be concluded that the effect of natural convection is not very significant for a high number of fins. In the optimum case, using 3 fins, the temperature increase of the HTF exceeds approximately 10°C during the first 30 minutes. As solidification progresses from the surfaces of the PCM container and propagates towards the central liquid phase, an insulating layer is created in the vicinity of the container surfaces and the fins, causing a low heat exchange rate between the PCM and the air. As the solid PCM layer gets thicker, a progressive regression of the velocity field is induced in the liquid phase, leading to inhibition of the heat extraction process. After about 2 hours, 68% of the PCM had become solid, and heat transfer was dominated almost entirely by conduction.
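
To make the latent-heat storage physics more concrete, the sketch below solves a deliberately simplified 1D conduction-only solidification problem with an explicit enthalpy method on a finite-volume grid; it ignores natural convection and fins entirely, and the material values are illustrative rather than the RT27 properties used in the paper. It nonetheless reproduces the qualitative behaviour described above: a solid layer grows from the cooled wall and slows further heat extraction.

```python
import numpy as np

# Minimal 1D enthalpy-method sketch of PCM solidification by pure conduction.
L = 0.05          # slab thickness (m)
N = 100           # control volumes
dx = L / N
rho = 800.0       # density (kg/m^3)
cp = 2000.0       # specific heat (J/kg.K)
k = 0.2           # conductivity (W/m.K)
Lf = 180e3        # latent heat (J/kg)
Tm = 27.0         # melting temperature (deg C)
Tw = 5.0          # cold wall temperature (deg C)
T0 = 35.0         # initial liquid temperature (deg C)

alpha = k / (rho * cp)
dt = 0.25 * dx**2 / alpha                  # explicit stability limit (with margin)
H = rho * (cp * T0 + Lf) * np.ones(N)      # volumetric enthalpy, fully liquid

def temperature(H):
    """Invert enthalpy to temperature for an isothermal phase change."""
    T = np.empty_like(H)
    Hs = rho * cp * Tm                     # enthalpy of solid at Tm
    Hl = Hs + rho * Lf                     # enthalpy of liquid at Tm
    solid, liquid = H < Hs, H > Hl
    mushy = ~solid & ~liquid
    T[solid] = H[solid] / (rho * cp)
    T[mushy] = Tm
    T[liquid] = (H[liquid] - rho * Lf) / (rho * cp)
    return T

t, t_end = 0.0, 2 * 3600.0
while t < t_end:
    T = temperature(H)
    # Ghost cells: cold wall on the left, adiabatic on the right.
    Tb = np.concatenate(([2 * Tw - T[0]], T, [T[-1]]))
    flux = -k * np.diff(Tb) / dx           # face heat fluxes
    H -= dt * np.diff(flux) / dx           # energy balance per control volume
    t += dt

solid_fraction = np.mean(temperature(H) < Tm - 1e-9)
print(f"fully solid fraction after {t_end/3600:.1f} h: {solid_fraction:.2f}")
```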

Keywords: heat transfer enhancement, front solidification, PCM, natural convection

Procedia PDF Downloads 183
18361 A Reinforcement Learning Approach for Evaluation of Real-Time Disaster Relief Demand and Network Condition

Authors: Ali Nadi, Ali Edrissi

Abstract:

Relief demand and the availability of transportation links are essential information for any natural disaster operation, yet this information is not at hand once a disaster strikes. In related work, relief demand and network condition have been estimated with prediction methods; nevertheless, predictions may over- or underestimate the true values because of uncertainties and may lead to a failed operation. Therefore, in this paper a stochastic programming model is proposed to evaluate real-time relief demand and network condition at the onset of a natural disaster. To address the time sensitivity of emergency response, the proposed model uses reinforcement learning to optimize the total relief assessment time. The proposed model is tested on a real-size network problem. The simulation results indicate that the model performs well in collecting real-time information.
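
To make the reinforcement-learning component more concrete, the sketch below shows a tabular Q-learning loop of the kind that could be used to order assessment visits so that total assessment time is minimized. The tiny network, rewards, and transition rule are invented for illustration and are not the authors' actual stochastic programming formulation.

```python
import numpy as np

# Toy setting: an agent must visit 4 damaged zones; the "state" is the set of
# zones already assessed (bitmask), the action is the next zone to visit, and
# the reward is the negative travel time, so maximising return minimises
# total assessment time.
n_zones = 4
travel_time = np.array([
    [0, 3, 7, 4],
    [3, 0, 2, 6],
    [7, 2, 0, 5],
    [4, 6, 5, 0],
], dtype=float)

n_states = 1 << n_zones                       # all subsets of assessed zones
Q = np.zeros((n_states, n_zones, n_zones))    # Q[state, current_zone, next_zone]
alpha, gamma, eps = 0.1, 1.0, 0.2
rng = np.random.default_rng(3)

for episode in range(5000):
    current, state = 0, 1                     # start at zone 0, already assessed
    while state != n_states - 1:
        unvisited = [z for z in range(n_zones) if not state & (1 << z)]
        if rng.random() < eps:                # epsilon-greedy exploration
            action = rng.choice(unvisited)
        else:
            action = max(unvisited, key=lambda z: Q[state, current, z])
        reward = -travel_time[current, action]
        next_state = state | (1 << action)
        future = 0.0 if next_state == n_states - 1 else max(
            Q[next_state, action, z] for z in range(n_zones) if not next_state & (1 << z))
        Q[state, current, action] += alpha * (reward + gamma * future - Q[state, current, action])
        current, state = action, next_state

# Greedy rollout of the learned policy.
current, state, route = 0, 1, [0]
while state != n_states - 1:
    unvisited = [z for z in range(n_zones) if not state & (1 << z)]
    current = max(unvisited, key=lambda z: Q[state, current, z])
    state |= 1 << current
    route.append(current)
print("assessment order:", route)
```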

Keywords: disaster management, real-time demand, reinforcement learning, relief demand

Procedia PDF Downloads 310
18358 Designing Electronic Kanban in the Tailboom Assembly Line at XYZ Corp to Reduce Lead Time

Authors: Nadhifah A. Nugraha, Dida D. Damayanti, Widia Juliani

Abstract:

Aircraft manufacturing is growing along with increasing demand from consumers. The helicopter tail, called the Tailboom, is a product of the helicopter division at XYZ Corp, where the Tailboom assembly line operates as a pull system. Based on observations of the existing conditions at XYZ Corp, production is still unable to meet consumer demand; the lead time is greater than the plan agreed upon with the consumers. In the assembly process, each work station experiences a lack of the parts and components needed for assembly. This happens because of delays in getting the required part information and because there is no warning about the availability of the needed parts, leaving some parts unavailable in the assembly warehouse. The lack of parts and components from the previous work station causes the assembly process to stop, and the assembly line also stops at the next station. As a result, production finishes late and behind schedule. Resolving these problems requires a controlling process that manages the assembly line so that all components and sub-assemblies arrive in the right amount and at the right time. This study applies one of the Just-in-Time tools, Kanban, and adds automation so that the communication line becomes an efficient and effective electronic Kanban. The problem can be solved by reducing non-value-added time, such as waiting time and idle time. The proposed control of the Tailboom assembly line results in a smooth assembly line without waiting, reduced lead time, and production completed according to the schedule agreed with the consumers.

Keywords: kanban, e-Kanban, lead time, pull system

Procedia PDF Downloads 110
18359 Project Time Prediction Model: A Case Study of Construction Projects in Sindh, Pakistan

Authors: Tauha Hussain Ali, Shabir Hussain Khahro, Nafees Ahmed Memon

Abstract:

Accurate prediction of project time at the planning and bid preparation stage requires realistic dates. Constructors use their experience, based largely on intuition, to estimate the duration of new projects. Accurate prediction of project duration at the bid preparation stage has been a constant concern of both researchers and constructors. In Pakistan, studies of the time-cost relationship that would predict duration performance for construction projects have been lacking. This study explores the time-cost relationship and concludes with a mathematical model to predict the duration of drainage rehabilitation projects in the province of Sindh, Pakistan. The data were collected from National Engineering Services (NESPAK), Pakistan, and regression analysis was carried out. A significant relationship was found between the time and cost of construction projects in Sindh, and the resulting mathematical model can be used by constructors to predict project duration for upcoming projects of the same nature. The study also provides professionals with the requisite knowledge to make decisions regarding project duration, which is significantly important for winning projects at the bid stage.
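
The keywords point to a BTC (Bromilow time-cost) style model of the form T = K·C^B. A minimal sketch of how such a model can be fitted by ordinary least squares on log-transformed data is shown below; the cost and duration figures are invented purely for illustration and are not taken from the NESPAK dataset.

```python
import numpy as np

# Hypothetical (cost, duration) pairs for completed drainage rehabilitation
# projects: cost in million PKR, duration in days.
cost = np.array([12, 25, 40, 55, 80, 120, 150, 210, 300, 450], dtype=float)
duration = np.array([120, 160, 200, 230, 270, 320, 350, 400, 460, 540], dtype=float)

# Bromilow time-cost model: T = K * C^B  <=>  ln(T) = ln(K) + B * ln(C)
B, lnK = np.polyfit(np.log(cost), np.log(duration), 1)
K = np.exp(lnK)
print(f"T = {K:.1f} * C^{B:.3f}")

# Predicted duration for a new project at the bid stage (e.g. 100 million PKR).
new_cost = 100.0
print(f"predicted duration: {K * new_cost**B:.0f} days")

# Goodness of fit (R^2) of the log-log regression.
pred = lnK + B * np.log(cost)
ss_res = np.sum((np.log(duration) - pred) ** 2)
ss_tot = np.sum((np.log(duration) - np.log(duration).mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")
```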

Keywords: BTC Model, project time, relationship of time cost, regression

Procedia PDF Downloads 377
18358 Experimental Parameters’ Effects on the Electrical Discharge Machining Performances

Authors: Asmae Tafraouti, Yasmina Layouni, Pascal Kleimann

Abstract:

The growing market for Microsystems (MST) and Micro-Electromechanical Systems (MEMS) is driving the search for manufacturing techniques that are alternatives to microelectronics-based technologies, which are generally expensive and time-consuming. Hot embossing and micro-injection molding of thermoplastics appear to be industrially viable processes. However, both require the use of master models, usually made of hard materials such as steel. These master models cannot be fabricated using standard microelectronics processes, so other micromachining processes are used, such as laser machining or micro-electrical discharge machining (µEDM). In this work, µEDM has been used. The principle of µEDM is based on the use of a thin cylindrical micro-tool that erodes the workpiece surface. The two electrodes are immersed in a dielectric with a distance of a few micrometers between them (the gap). When an electrical voltage is applied between the two electrodes, electrical discharges are generated, which machine the material. In order to produce master models with high resolution and smooth surfaces, the discharge mechanism must be well controlled. However, several problems are encountered, such as the randomness of the electrical discharge process, fluctuation of the discharge energy, inversion of the electrodes' polarity, and wear of the micro-tool. The effect of different parameters, such as the applied voltage, the working capacitor, the micro-tool diameter, and the initial gap, has been studied. This analysis helps to improve the machining performance, such as the workpiece surface condition and the lateral gap of the craters.

Keywords: craters, electrical discharges, micro-electrical discharge machining, microsystems

Procedia PDF Downloads 69
18357 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout

Authors: Diamant Irene, Dar Tamar

Abstract:

Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat to time resources or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output, and the management of different duties encompassing work-home conflicts, many individuals experience 'time pressure'. Time pressure is characterized as the perception of a lack of available time relative to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute of coping with life. As such, time pressure is associated in the literature with the general experience of stress and can therefore be a direct, contributory burnout factor. The present study examines the relation of chronic time pressure (a feeling of time shortage and of being rushed) to another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition capturing the extent to which people subjectively remember the past, live in the present, and/or anticipate the future. Based on Hobfoll's Conservation of Resources Theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo's typology of five dimensions (Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future), would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with 'Past Negative' or 'Present Fatalistic' time perspectives would experience more burnout, with chronic time pressure acting as a moderator variable. Conversely, individuals with a 'Present Hedonistic' perspective, with little concern for the future consequences of actions, were expected to experience less chronic time pressure and less burnout. Another angle of temporal experience examined in this study is the gap between the actual distribution of time (in a typical day) and the desired distribution of time (how time would optimally have been distributed during a day). It was hypothesized that there would be a positive correlation between this gap and both chronic time pressure and burnout. Data were collected through an online self-report survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to developing practical methods of burnout prevention.

Keywords: conservation of resources, burnout, time pressure, time perspective

Procedia PDF Downloads 170
18356 Keyloggers Prevention with Time-Sensitive Obfuscation

Authors: Chien-Wei Hung, Fu-Hau Hsu, Chuan-Sheng Wang, Chia-Hao Lee

Abstract:

Nowadays, the abuse of keyloggers is one of the most widespread approaches to stealing sensitive information. In this paper, we propose an On-Screen Prompts Approach to Keyloggers (OSPAK), intended for installation on public computers, together with its analysis. OSPAK uses a canvas to cue users when their keystrokes are going to be logged or ignored by OSPAK. This approach can protect computers against the recording of sensitive inputs by obfuscating keyloggers with letters inserted among users' keystrokes. It adds a canvas below each password field in a webpage and consists of three parts: two background areas, a hit area, and a moving foreground object. Letters typed in different valid time intervals are combined according to the order of their time intervals, and valid time intervals are interleaved with invalid time intervals. OSPAK uses animation to visualize valid and invalid time intervals and can be integrated into a webpage as a browser extension. We have tested it against a series of known keyloggers and also performed a study with 95 users to evaluate how easily the tool is used. Experimental results from the volunteers show that OSPAK is a simple approach.
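
The sketch below illustrates the core idea in a simplified form: keystrokes are timestamped, decoy letters fall in invalid intervals, and only keystrokes typed during valid intervals are kept to reconstruct the real input. The interval schedule and data are invented for the example; the actual OSPAK implementation runs as a browser extension with an animated canvas, which is not reproduced here.

```python
# Simplified model of OSPAK-style time-sensitive obfuscation: a keylogger sees
# every keystroke, but only those typed inside "valid" intervals belong to the
# real password; decoys typed (or injected) in invalid intervals are discarded.

# Alternating schedule of (start, end, is_valid) intervals in seconds.
schedule = [(0.0, 0.5, True), (0.5, 1.0, False),
            (1.0, 1.5, True), (1.5, 2.0, False),
            (2.0, 2.5, True)]

# (timestamp, character) pairs as a keylogger would record them.
keystrokes = [(0.10, "s"), (0.30, "e"), (0.70, "x"),   # "x" is a decoy
              (1.10, "c"), (1.40, "r"), (1.80, "q"),   # "q" is a decoy
              (2.10, "e"), (2.40, "t")]

def reconstruct(keystrokes, schedule):
    """Keep only keystrokes that fall inside valid intervals (what the
    legitimate page would accept); everything else obfuscates the log."""
    valid = [(s, e) for s, e, ok in schedule if ok]
    return "".join(ch for t, ch in keystrokes
                   if any(s <= t < e for s, e in valid))

print("keylogger sees:", "".join(ch for _, ch in keystrokes))  # 'sexcrqet'
print("real input:   ", reconstruct(keystrokes, schedule))     # 'secret'
```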

Keywords: authentication, computer security, keylogger, privacy, information leakage

Procedia PDF Downloads 116
18355 The Evolution of Domestic Terrorism: Global Contemporary Models

Authors: Bret Brooks

Abstract:

As the international community has focused its attention in recent times on international and transnational terrorism, many nations have ignored their own domestic terrorist groups. Domestic terrorism has evolved significantly over the last 15 years, and as such, nation states must adequately understand their own individual issues as well as the broader worldwide perspective. Contemporary models show that obtaining peace with domestic groups is not only the end goal but is also very attainable. By evaluating modern examples and incorporating successful strategies, countries around the world have the ability to bring about a diplomatic resolution to domestic extremism and domestic terrorism.

Keywords: domestic, evolution, peace, terrorism

Procedia PDF Downloads 511
18354 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English

Authors: Duong Thuy Nguyen, Giulia Bencini

Abstract:

The present study examined the effects of structural and prosodic factors on the computation of antecedent-reflexive relationships and on sentence comprehension in native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on a computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), and then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch), and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that, overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers' comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups differed in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and the phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with L1 English and L2 English-Spanish speakers. The results provide further support for a good-enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlight similarities and differences between L1 and L2 sentence processing and comprehension.

Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing

Procedia PDF Downloads 146
18353 Traffic Prediction with Raw Data Utilization and Context Building

Authors: Zhou Yang, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao

Abstract:

Traffic prediction is essential in a multitude of ways in modern urban life. Earlier work in this domain has chiefly pursued two major focuses: (1) the accurate forecasting of future values in multiple time series and (2) knowledge extraction from spatial-temporal correlations. However, two key considerations for traffic prediction are often missed: the completeness of the raw data and the full context of the prediction timestamp. Concentrating on these two drawbacks of earlier work, we devise an approach that addresses them in a two-phase framework. First, we utilize the raw trajectories to a greater extent by building a VLA table and applying data compression. We obtain intra-trajectory features with graph-based encoding and inter-trajectory features with a grid-based model and the technique of back projection, which restores their surrounding high-resolution spatial-temporal environment. To the best of our knowledge, we are the first to study direct feature extraction from raw trajectories for traffic prediction and to attempt the use of raw data with the least degree of reduction. In the prediction phase, we provide a broader context for the prediction timestamp by taking into account the information around it in the training dataset. Extensive experiments on several well-known datasets have verified the effectiveness of our solution, which combines the strengths of raw trajectory data and prediction context. In terms of performance, our approach surpasses several state-of-the-art methods for traffic prediction.

Keywords: traffic prediction, raw data utilization, context building, data reduction

Procedia PDF Downloads 121
18352 The Use of Electrical Resistivity Measurement, Cracking Test and Ansys Simulation to Predict Concrete Hydration Behavior and Crack Tendency

Authors: Samaila Bawa Muazu

Abstract:

The hydration process, crack potential, and setting time of concrete grades C30, C40, and C50 were separately monitored using a non-contact electrical resistivity apparatus, a novel plastic ring mould, and the penetration resistance method, respectively. The results show that C30 had the highest resistivity at the beginning, until the acceleration point was reached, when C50 accelerated and overtook the others; this period corresponds to its final setting time range. From the resistivity derivative curve, the hydration process can be divided into dissolution, induction, acceleration, and deceleration periods. The restrained shrinkage crack and setting time tests demonstrated that C50 cracked and set earliest; this method therefore conveniently and rapidly determines the concrete's crack potential. The highest inflection time (ti) and the final setting time (tf) were obtained and used together with the cracking time to develop mathematical models for predicting the cracking age of concrete within the range considered. Finally, ANSYS numerical simulations support the experimental findings in terms of the earliest cracking age of C50 and the crack location: the highest stress concentration is always beneath the artificially introduced expansion joint of C50.

Keywords: concrete hydration, electrical resistivity, restrained shrinkage crack, setting time, simulation

Procedia PDF Downloads 206
18351 Balancing Biodiversity and Agriculture: A Broad-Scale Analysis of the Land Sparing/Land Sharing Trade-Off for South African Birds

Authors: Chevonne Reynolds, Res Altwegg, Andrew Balmford, Claire N. Spottiswoode

Abstract:

Modern agriculture has revolutionised the planet's capacity to support humans, yet it has had a greater negative impact on biodiversity than any other human activity. Balancing the demand for food with the conservation of biodiversity is one of the most pressing issues of our time. Biodiversity-friendly farming ('land sharing'), or alternatively the separation of conservation and production activities ('land sparing'), have been proposed as two strategies for mediating the trade-off between agriculture and biodiversity. However, there is much debate regarding the efficacy of each strategy, as this trade-off has typically been addressed by short-term studies at fine spatial scales. These studies ignore processes that are relevant to biodiversity at larger scales, such as meta-population dynamics and landscape connectivity. Therefore, to better understand species' responses to agricultural land-use and provide evidence to underpin the planning of better production landscapes, we need to determine the merits of each strategy at larger scales. In South Africa, a remarkable citizen science project, the South African Bird Atlas Project 2 (SABAP2), collates an extensive dataset describing the occurrence of birds at a 5-min by 5-min grid cell resolution. We use these data, along with fine-resolution data on agricultural land-use, to determine which strategy optimises the agriculture-biodiversity trade-off in a southern African context, at a spatial scale never considered before. To empirically test this trade-off, we model bird species population density, derived for each 5-min grid cell by Royle-Nichols single-species occupancy modelling, against both the amount and the configuration of different types of agricultural production in the same 5-min grid cell. By using both production amount and configuration, we can show not only how species population densities react to changes in yield but also describe the production landscape patterns most conducive to conservation. Furthermore, the extent of both the SABAP2 and land-cover datasets allows us to test this trade-off across multiple regions, to determine whether bird populations respond in a consistent way and whether the results can be extrapolated to other landscapes. We tested the land sparing/sharing trade-off for 281 bird species across three different biomes in South Africa. Overall, a higher proportion of species are classified as losers and would benefit from land sparing. However, this proportion of loser-sparers is not consistent; it varies across biomes and the different types of agricultural production, most likely because of differences in the intensity of agricultural land-use and the interactions between the differing types of natural vegetation and agriculture. Interestingly, we observe a higher number of species benefiting from agriculture than anticipated, suggesting that agriculture is a legitimate resource for certain bird species. Our results support those seen at smaller scales and in vastly different agricultural systems: land sparing benefits the most species. However, our analysis suggests that land sparing needs to be implemented at spatial scales much larger than previously considered. Species persistence in agricultural landscapes will require the conservation of large tracts of land, an important consideration in developing countries, which are undergoing rapid agricultural development.

Keywords: agriculture, birds, land sharing, land sparing

Procedia PDF Downloads 204
18350 The Impact of the Application of Blockchain Technology in Accounting and Auditing

Authors: Yusuf Adebayo Oduwole

Abstract:

The main objective of this essay is to evaluate the potential effects of blockchain technology on the fields of accounting and auditing. It also adds to the existing body of work by examining how these practices alter technological concerns, including cryptocurrency accounting, regulation, governance, accounting practices, and technical challenges. The growth of the blockchain concept and its application in accounting are examples of this advancement. Blockchain is considered one of the digital revolutions that could disrupt the world and civilization, as it can transfer large volumes of virtual currencies, such as cryptocurrencies, without the need for a trusted third party. The basis for this research is a systematic review of articles, using VOSviewer to display and reflect on the bibliometric information of the articles available in the Scopus database. Also, as the practice of using blockchain technology in accounting and auditing is still in its infancy, it may be useful to carry out a more thorough analysis of its implications for accounting and auditing with regard to aspects of governance, regulation, and cryptocurrency that have not yet been discussed or addressed to any significant extent. The main findings on the relationship between blockchain and accounting show that the application of smart contracts, such as triple-entry accounting, has increased the quality of accounting records as well as reliance on the available information, resulting in fewer cyclical assignments, no need for resolution, and real-time accounting, among other benefits. Consequently, to integrate blockchain into a computer system, one must continuously learn when using blockchain-integrated accounting software, including learning about how cryptocurrencies are accounted for and regulated. This study makes three original contributions. First, to offer a transparent view of the state of previous relevant studies and research works in accounting and auditing that focus on blockchain, it uses bibliographic visibility analysis and a Scopus narrative analysis. Second, it highlights legislative, governance, and ethical concerns, such as education, in the use of blockchain in accounting and auditing. Lastly, it examines the impact of blockchain technologies on the accounting recognition of cryptocurrencies. Users of the technology should therefore take their time to learn how it works and keep abreast of new developments. In addition, the accounting industry must integrate blockchain certification and practice, most likely offline or as part of university education for those intending to become auditors or accountants.

Keywords: blockchain, crypto assets, governance, regulation & smart contracts

Procedia PDF Downloads 18