Search results for: time estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19507

18247 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model

Authors: Aminah Muchdar, Nuraeni, Eddy

Abstract:

The aims of this research were (1) to verify the CropSyst crop model against field experimental data for soybean and (2) to predict the planting time and potential yield of soybean using the CropSyst model. The research was divided into two stages: (1) a calibration stage, conducted in the field from June until September 2015, and (2) a model-application stage, in which the data obtained from the field calibration were entered into the CropSyst model. The model requires climate data, soil data, and crop genetic data. The agreement between the field observations and the CropSyst simulation is indicated by an Efficiency Index (EF) of 0.939, showing that the CropSyst model performs well. The calculated RRMSE of 1.92% shows that the prediction error of the simulation relative to the field results is 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate planting time for soybean, mainly on rain-fed land, is at the end of the rainy season; in this study the first planting time (June 2, 2015) gave the highest production, because at that time there was still some rain. The Tanggamus variety was more resistant to delayed planting, as its percentage yield decrease per decade was lower than the average of all varieties.
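The two agreement statistics quoted above can be sketched as follows. This is a minimal illustration assuming the standard definitions of the Nash-Sutcliffe efficiency index (EF) and the relative root mean square error (RRMSE); the yield data are hypothetical, since the abstract gives neither the formulas nor the raw observations.

```python
# Sketch of the model-agreement statistics quoted in the abstract.
# Assumes EF is the Nash-Sutcliffe efficiency; data below are hypothetical.
def ef(observed, simulated):
    """Nash-Sutcliffe efficiency index: 1 means a perfect model fit."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

def rrmse(observed, simulated):
    """Root mean square error expressed as a percentage of the observed mean."""
    n = len(observed)
    mean_obs = sum(observed) / n
    rmse = (sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n) ** 0.5
    return 100 * rmse / mean_obs

# Hypothetical soybean yields (t/ha): field observations vs. simulation
obs = [2.10, 1.85, 1.60, 1.40]
sim = [2.08, 1.88, 1.57, 1.44]
print(round(ef(obs, sim), 3), round(rrmse(obs, sim), 2))
```

An EF close to 1 and a small RRMSE together indicate the kind of agreement the abstract reports.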

Keywords: soybean, CropSyst, calibration, efficiency index, RRMSE

Procedia PDF Downloads 180
18246 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurement around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms).
This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). This single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all just occurs in the time available, because other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 126
18245 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture

Authors: Alp Arda

Abstract:

Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.

Keywords: black-hole, timeline, urbanism, space and time, speculative architecture

Procedia PDF Downloads 73
18244 Changes in Kidney Tissue at Postmortem Magnetic Resonance Imaging Depending on the Time of Fetal Death

Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh

Abstract:

All cases of stillbirth are undoubtedly subject to postmortem examination, since it is necessary to find out the cause of the stillbirth, as well as the prognosis for future pregnancies and their outcomes. Determination of the time of death, meaning the period from the moment of death until the birth of the fetus, is an important issue addressed during the examination of the body of a stillborn. Determination of the time of fetal death is based on the assessment of the severity of the processes of maceration. The aim was to study the possibilities of postmortem magnetic resonance imaging (MRI) for determining the time of intrauterine fetal death based on the evaluation of maceration in the kidney. We conducted MRI-morphological comparisons of 7 dead fetuses (18-21 gestational weeks), 26 stillbirths (22-39 gestational weeks), and 15 bodies of newborns who died at the age of 2 hours to 36 days. Postmortem 3T MRI was performed before the autopsy. The signal intensity of the kidney tissue (SIK), pleural fluid (SIF), and external air (SIA) was determined on T1-WI and T2-WI. Macroscopic and histological signs of maceration severity and time of death were evaluated at autopsy. Based on the results of the morphological study, the degree of maceration varied from 0 to 4. In 13 cases, the time of intrauterine death was up to 6 hours; in 2 cases, 6-12 hours; in 4 cases, 12-24 hours; in 9 cases, 2-3 days; in 3 cases, 1 week; and in 2 cases, 1.5-2 weeks. In the 15 deceased newborns, signs of maceration were naturally absent. Based on the SIK, SIF, and SIA data from the MR tomograms, we calculated the coefficient of MR maceration (M). The time of intrauterine death (MR-t, in hours) was calculated by our formula: MR-t = 16.87 + 95.38×M² − 75.32×M. A direct positive correlation between MR-t and autopsy data was obtained for those who died at gestational ages of 22-40 weeks with a time of death of not more than 1 week.
Maceration after antenatal fetal death is characterized by changes in the T1-WI and T2-WI signals at postmortem MRI. The calculation of MR-t allows the time of intrauterine death to be defined accurately, within one week, in stillbirths who died at 22-40 gestational weeks. Thus, our study convincingly demonstrates that radiological methods can be used for the postmortem study of bodies, in particular the bodies of stillborns, to determine the time of intrauterine death. Postmortem MRI allows an objective and sufficiently accurate analysis of pathological processes, with the possibility of their documentation, storage, and analysis after the burial of the body.
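The reported regression can be applied directly. A one-line sketch follows; the coefficients are those given in the abstract, while the example input M = 0.25 is hypothetical:

```python
# Sketch of the abstract's regression: MR-t = 16.87 + 95.38*M**2 - 75.32*M,
# mapping the MR maceration coefficient M to an estimated time of
# intrauterine death in hours. The example M value is hypothetical.
def mr_t(m):
    """Estimated time of intrauterine death (hours) for maceration coefficient m."""
    return 16.87 + 95.38 * m**2 - 75.32 * m

print(round(mr_t(0.25), 2))
```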

Keywords: intrauterine death, maceration, postmortem MRI, stillborn

Procedia PDF Downloads 125
18243 Support for Planning of Mobile Personnel Tasks by Solving Time-Dependent Routing Problems

Authors: Wlodzimierz Ogryczak, Tomasz Sliwinski, Jaroslaw Hurkala, Mariusz Kaleta, Bartosz Kozlowski, Piotr Palka

Abstract:

Implementation concepts of a decision support system for the planning and management of mobile personnel tasks (sales representatives and others) are discussed. Large-scale periodic time-dependent vehicle routing and scheduling problems with complex constraints are solved for this purpose. Complex nonuniform constraints with respect to frequency, time windows, working time, etc., are taken into account, with additional fast adaptive procedures for the operational rescheduling of plans in the presence of various disturbances. Five individual solution quality indicators with respect to a single staff member are considered. This paper deals with the modeling issues corresponding to the problem and with general solution concepts. The research was supported by the European Union through the European Regional Development Fund under the Operational Programme ‘Innovative Economy’ for the years 2007-2013, Priority 1 ‘Research and development of modern technologies’, under the project POIG.01.03.01-14-076/12: 'Decision Support System for Large-Scale Periodic Vehicle Routing and Scheduling Problems with Complex Constraints.'

Keywords: mobile personnel management, multiple criteria, time dependent, time windows, vehicle routing and scheduling

Procedia PDF Downloads 323
18242 A Comparative Analysis of Asymmetric Encryption Schemes on Android Messaging Service

Authors: Mabrouka Algherinai, Fatma Karkouri

Abstract:

Today, the Short Message Service (SMS) is an important means of communication. SMS is used not only in informal environments for communication and transactions, but also in formal environments such as institutions, organizations, companies, and the business world. Therefore, there is a need to secure the information being transmitted through this medium, both in transit and at rest. Encryption has been identified as a means to provide such security to SMS messages. Several past studies have proposed and developed encryption algorithms for SMS and information security. This research aims at comparing the performance of common asymmetric encryption algorithms for SMS security. It employs three algorithms, namely RSA, McEliece, and RABIN. Several experiments were performed on SMS messages of various sizes on an Android mobile device. The experimental results show that each of the three techniques has different key generation, encryption, and decryption times. The efficiency of an algorithm is determined by the time it takes for encryption, decryption, and key generation. The best algorithm can be chosen based on the least time required for encryption. The results show the least encryption time when McEliece with size 4096 is used, while RABIN with size 4096 takes the most time and is therefore the least effective algorithm for encryption. The results also show that McEliece with size 2048 has the least key generation time and is thus the best algorithm with respect to key generation. Finally, RSA with size 1024 is the most preferable algorithm in terms of decryption, as it gives the least decryption time.
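As a hedged illustration of the kind of timing comparison described above (not the authors' Android implementation, and using textbook, unpadded operations with toy parameters far too small for real security), the core encryption steps of RSA (c = mᵉ mod n) and RABIN (c = m² mod n) can be timed in plain Python:

```python
# Illustrative timing sketch for textbook RSA vs. Rabin encryption.
# Toy primes only; real systems must use vetted libraries and padding.
import time

def timed(f, *args):
    t0 = time.perf_counter()
    out = f(*args)
    return out, time.perf_counter() - t0

# Small demo primes (both = 3 mod 4, as Rabin decryption requires);
# far too small for any real security.
p, q = 10007, 10039
n = p * q
e = 65537

m = int.from_bytes(b"Hi", "big")            # toy SMS payload as an integer < n

rsa_ct, rsa_time = timed(pow, m, e, n)      # RSA encryption: c = m^e mod n
rabin_ct, rabin_time = timed(pow, m, 2, n)  # Rabin encryption: c = m^2 mod n

print(f"RSA:   {rsa_time * 1e6:.1f} us")
print(f"Rabin: {rabin_time * 1e6:.1f} us")
```

Rabin encryption is a single squaring, so on most machines it is faster than an RSA exponentiation; the paper's ranking also depends on key generation and decryption costs, which this sketch does not measure.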

Keywords: SMS, RSA, McEliece, RABIN

Procedia PDF Downloads 163
18241 Relationship between Smartphone Addiction and Academic Performance among University Students

Authors: Arooba Azam Khan

Abstract:

The present study focuses on the relationship between smartphone addiction and the academic performance of students, with social networking sites, smartphone overuse, GPA, and time management skills as sub-variables. In this world of technology, the smartphone has become a vital part of everyone’s life. Smartphone addiction has both negative and positive impacts on young people (students). Students keep themselves busy with smartphones without noticing that smartphone addiction is creating a negative impact on their social, academic, and personal lives. A quantitative approach was used to collect data through a questionnaire from 360 students of two private universities in Pakistan in the summer of 2017. The target age group was 19-24 years, studying in bachelor’s programmes. Data were analyzed using SPSS (version 20); linear correlation and regression tests were applied. Results reveal that there is a negative relationship between smartphone addiction and academic performance. Moreover, students with good time management skills achieve higher grades/GPAs than those with poor time management skills. From the findings, the researcher suggests that students should spend their time wisely and use their smartphones for educational purposes. However, students need training and close monitoring to get benefits out of smartphone use.

Keywords: smartphone addiction, academic performance, time management skills, quantitative research

Procedia PDF Downloads 162
18240 Mixed Model Sequencing in Painting Production Line

Authors: Unchalee Inkampa, Tuanjai Somboonwiwat

Abstract:

The painting of automobiles and automobile parts is a continuous process based on electrodeposition paint (EDP). Through EDP, all workpieces are continuously sent to the painting process. The work process can be divided into two groups based on running time, Painting Room 1 and Painting Room 2, which leads to continuous operation. The problem that arises is workpieces waiting to enter a painting room; the sequencing of workpieces from EDP to the painting rooms is the major problem. Therefore, this paper aims to develop a production sequencing method for the EDP painting process. It applies fixed-rate launching for the painting rooms, earliest due date (EDD) sequencing for the EDP process, and pairwise interchange (swap) to minimize machine waiting time. The results show that the developed method reduced waiting time, improved on-time delivery, met customer requirements, and improved the productivity of the painting unit.
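Two of the ingredients named above, EDD sequencing and pairwise interchange, can be sketched as follows. The job data are hypothetical, and total tardiness is used as the illustrative objective (the paper's actual objective is waiting time):

```python
# Sketch of EDD sequencing followed by improving adjacent pairwise swaps.
# Jobs and objective are illustrative, not the paper's data or exact objective.
def edd(jobs):
    """jobs: list of (name, processing_time, due_date) -> EDD order."""
    return sorted(jobs, key=lambda j: j[2])

def total_tardiness(seq):
    t, tard = 0, 0
    for _, proc, due in seq:
        t += proc
        tard += max(0, t - due)
    return tard

def improve_by_swaps(seq):
    """Adjacent pairwise interchange, keeping only improving swaps."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if total_tardiness(cand) < total_tardiness(seq):
                seq, improved = cand, True
    return seq

jobs = [("A", 4, 9), ("B", 2, 4), ("C", 3, 6)]
order = improve_by_swaps(edd(jobs))
print([name for name, _, _ in order])
```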

Keywords: sequencing, mixed model lines, painting process, electrodeposition paint

Procedia PDF Downloads 420
18239 A Bayesian Approach for Health Workforce Planning in Portugal

Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro

Abstract:

Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, the planning process for the health workforce is particularly important, as it ensures a proper balance between the supply and demand of these professionals, and it plays a central role in the Health 2020 policy. In the past 40 years, the planning of the health workforce in Portugal has been conducted in a reactive way, lacking a prospective vision based on an integrated, comprehensive, and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients. This is even more critical given the expected shortage of the health workforce in the future. Furthermore, Portugal is facing the aging of some professional classes (physicians and nurses): in 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60 years old. This phenomenon, together with the increasing emigration of young health professionals and changes in citizens’ illness profiles and expectations, must be considered when planning resources in healthcare. The prospect of sudden retirement of large groups of professionals within a short time is also a major problem to address. Another challenge is health workforce imbalance: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively).
Within the scope of the HEALTH 2040 project, which aims to estimate the ‘future needs of human health resources in Portugal till 2040’, the present study takes a comprehensive, dynamic approach to the problem by (i) estimating the needs for physicians and nurses in Portugal, by specialty and by quinquennium, till 2040; (ii) identifying the training needs of physicians and nurses, in the medium and long term, till 2040; and (iii) estimating the number of students that must be admitted into the medicine and nursing training systems each year, considering the different categories of specialties. The development of such an approach is all the more critical in a context of limited budget resources and changing healthcare needs. This study presents the drivers of the evolution of healthcare needs (such as demographic and technological evolution and the future expectations of the users of health systems), and it proposes a Bayesian methodology, combining the best available data with expert opinion, to model this evolution. Preliminary results considering different plausible scenarios are presented. The proposed methodology will be integrated into a user-friendly decision support system so it can be used by policymakers, with the potential to measure the impact of health policies at both the regional and the national level.
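The Bayesian idea of combining the best available data with expert opinion can be sketched with a toy conjugate model. This is my illustration only; the project's actual model is richer. Here, annual physician retirements in one specialty are treated as Poisson counts with a Gamma prior encoding the experts' estimate:

```python
# Toy Gamma-Poisson illustration of "prior (expert opinion) + data".
# All numbers are hypothetical, not the HEALTH 2040 project's inputs.
a, b = 40.0, 2.0              # Gamma(a, b) prior: expert mean a/b = 20/year

observed = [28, 31, 25, 30]   # hypothetical retirement counts for four years

# Conjugate update: posterior is Gamma(a + sum(x), b + n)
a_post = a + sum(observed)
b_post = b + len(observed)

posterior_mean = a_post / b_post
print(f"Posterior mean retirements/year: {posterior_mean:.1f}")
```

The posterior mean sits between the expert prior and the data average, with the data dominating as more years are observed; this is the basic mechanism a full Bayesian workforce model scales up.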

Keywords: bayesian estimation, health economics, health workforce planning, human health resources planning

Procedia PDF Downloads 252
18238 Numerical Response of Coaxial HPGe Detector for Skull and Knee Measurement

Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Radiation workers in reprocessing plants have a potential for internal exposure to actinides and fission products. Radionuclides like americium, lead, polonium, and europium are bone seekers and accumulate in the skeleton. As the major skeletal content is in the skull (13%) and knees (22%), measurements of old intakes have to be carried out on the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin-HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, one of the prerequisites for the quantification of radionuclides, requires anthropomorphic phantoms, which are very limited. Hence, in this study, efficiency curves for the twin-HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. For skull measurement, the detector is placed over the forehead; for knee measurement, one detector is placed over each knee. The efficiency values for radionuclides present in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and from 5.22E-04 to 7.07E-04 CPS/photon, respectively, over the energy range 17 to 3000 keV. The established efficiency curves show that the efficiency initially increases up to 100 keV and then starts decreasing. The skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV; the reason is the closeness of the detector to the skull compared to the knee. At 17.74 keV, however, the efficiency of the knee is higher than that of the skull, due to the higher attenuation in the skull bones because of their greater thickness. The Minimum Detectable Activity (MDA) for 241Am present in the skull and knee is 9 Bq. 239Pu has an MDA of 950 Bq and 1270 Bq for the knee and skull, respectively, for a counting time of 1800 s.
This paper discusses the simulation method and the results obtained in the study.
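The abstract quotes MDA values but not the formula behind them. A common choice for such values is the Currie formula; the sketch below assumes that formula (the paper does not state which one it used), and the input numbers are illustrative only:

```python
# Assumed Currie-style MDA sketch: MDA = (2.71 + 4.65*sqrt(B)) / (eps * yield * t).
# Formula choice and all input numbers are assumptions, not taken from the paper.
import math

def mda_bq(background_counts, efficiency_cps_per_photon, gamma_yield, t_seconds):
    """Minimum detectable activity in Bq for a given background and efficiency."""
    return (2.71 + 4.65 * math.sqrt(background_counts)) / (
        efficiency_cps_per_photon * gamma_yield * t_seconds
    )

# Illustrative numbers: 400 background counts, 5e-4 CPS/photon, 36% gamma
# emission probability, 1800 s counting time.
print(round(mda_bq(400, 5e-4, 0.36, 1800), 1))
```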

Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull

Procedia PDF Downloads 51
18237 First Time Voters Representation of Leadership as Exemplified by 2016 Presidentiables

Authors: Fevy Kae Mateo, Kimberly Javier, Alyzza Marie Palles

Abstract:

Leadership is a relational process involving interaction with other people. Leaders emphasize authority, which executes and implements regulations, maintains the rules, and leads to a better future. First-time voters are very significant because they are the stakeholders of the type of leader to be elected; they also have the capacity to engage the government and can be agents of change. The objectives of the study are to identify the strengths and weaknesses of a leader, to identify the qualities of a leader, and to determine first-time voters’ representation of a leader. Focus group discussions were carried out with two groups of first-time voters aged 18 to 21 years. Verbatim transcripts of the discussions were analyzed using thematic analysis. Overall, the results showed superordinate themes for the weaknesses of a leader: lack of transparency in the government, poor communication strategy, valuing experience over potential, and other contributory factors; for the strengths of a leader: analytical skill, emotional intelligence in political work, analytical ability, and the influence of economic status on political participation; and finally, for the representation of a leader: positive and negative representations of a leader.

Keywords: first time voters, focus group discussion, leadership, qualitative research design

Procedia PDF Downloads 251
18236 Risks of Investment in the Development of Its Personnel

Authors: Oksana Domkina

Abstract:

According to modern economic theory, human capital has become one of the main production factors and the most promising direction of investment, as such investment provides the opportunity to obtain high and long-term economic and social effects. The information technology (IT) sector is the representative of this new economy that is most dependent on human capital as the main competitive factor. So the question for this sector is not whether investment in the development of personnel should be made, but what the most effective ways of executing it are and who has to pay for the education: the worker, the company, or the government. In this paper, we examine the IT sector, describe the labor market of IT workers and its development, and analyze the risks that IT companies may face if they invest in the development of their workers, along with the factors that influence these risks. The main difficulty in quantitatively estimating and forecasting the risk of a company's investment in human capital is the human factor: human behavior is often unpredictable and complex, so it requires specific approaches and methods of assessment. To build a comprehensive method for estimating this risk that takes the human factor into consideration, we decided to use the analytic hierarchy process (AHP). We separated three main groups of factors: risks related to the worker, risks related to the company, and external factors. To obtain data for our research, we conducted a survey among the HR departments of Ukrainian IT companies, using them as experts for the AHP method. The results show that IT companies mostly invest in the development of their workers, although several hire only already-qualified personnel. According to the results, the most significant risks are the risk of ineffective training and the risk of non-investment, both of which are related to the firm.
The analysis of risk factors related to the employee showed that the factors of personal reasons, motivation, and work performance have almost the same weights of importance. Regarding the internal factors of the company, the factor of compensation and benefits plays a large role, as do the factors of interesting projects, team, and career opportunities. As for the external environment, one of the most dangerous risk factors is competitor activity, while the political and economic situation factor also has a relatively high weight, which is easy to explain by the influence of the severe crisis in Ukraine during 2014-2015. The presented method takes into consideration all the main factors that affect the risk of investment in the human capital of a company. This gives a basis for further research in this field and allows the creation of a practical framework for HR departments to make decisions regarding the personnel development strategy and specific employees' development plans.
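The AHP step can be sketched as follows. The pairwise comparison matrix below is hypothetical, not the survey's aggregated judgments: priorities are taken as the normalized principal eigenvector, and the consistency ratio (CR) checks that the judgments are acceptably coherent.

```python
# Minimal AHP sketch with a hypothetical 3x3 pairwise comparison matrix
# (worker-related vs. company-related vs. external risk groups, Saaty 1-9 scale).
import numpy as np

A = np.array([
    [1.0, 1 / 2, 3.0],
    [2.0, 1.0, 4.0],
    [1 / 3, 1 / 4, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # priority weights, summing to 1

lam_max = eigvals[k].real
ci = (lam_max - 3) / (3 - 1)      # consistency index for n = 3
cr = ci / 0.58                    # random index RI(3) = 0.58; want CR < 0.1

print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```

With these judgments, company-related factors receive the largest weight, followed by worker-related and then external factors, and the CR is well under the usual 0.1 threshold.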

Keywords: risks, personnel development, investment in development, factors of risk, risk of investment in development, IT, analytic hierarchy process, AHP

Procedia PDF Downloads 300
18235 Optimal Scheduling of Load and Operational Strategy of a Load Aggregator to Maximize Profit with PEVs

Authors: Md. Shafiullah, Ali T. Al-Awami

Abstract:

This project proposes optimal scheduling of the imported power of a load aggregator, with the utilization of EVs, to maximize its profit. With the increase of renewable energy resources, the electricity price in a competitive market becomes more uncertain; on the other hand, with the penetration of renewable distributed generators in the distribution network, the predicted load of a load aggregator also becomes uncertain in real time. Though there are uncertainties in both load and price, the use of EV storage capacity can make the operation of the load aggregator flexible. The load aggregator (LA) submits its offer to the day-ahead market based on predicted loads and the optimized use of its EVs to maximize its profit; likewise, in real-time operation, it uses its energy storage capacity in such a way that profit is maximized. In this project, the load aggregator's profit maximization algorithm is formulated, and the optimization problem is solved with the help of CVX. As the forecasted loads differ from the actual load in real-time operation, the mismatches are settled in the real-time balancing market. Simulation results compare the profit of a load aggregator with a hypothetical group of 1000 EVs and without EVs.
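A heavily simplified sketch of why EV storage adds flexibility follows. The paper formulates and solves a full optimization in CVX; this toy version ignores intertemporal ordering and efficiency losses and simply pairs the cheapest charging hours with the priciest discharging hours, with hypothetical prices:

```python
# Toy price-arbitrage sketch for EV fleet storage (not the paper's CVX model).
prices = [30, 25, 60, 80, 40, 90]   # hypothetical $/MWh over six hours
cap_mwh = 2.0                        # usable EV fleet storage
rate_mw = 1.0                        # max charge/discharge per hour

hours = sorted(range(len(prices)), key=lambda h: prices[h])
n = int(cap_mwh / rate_mw)
charge_hours = set(hours[:n])        # cheapest hours -> charge
discharge_hours = set(hours[-n:])    # priciest hours -> discharge

profit = sum(prices[h] * rate_mw for h in discharge_hours) - \
         sum(prices[h] * rate_mw for h in charge_hours)
print(f"Arbitrage profit: ${profit:.2f}")
```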

Keywords: CVX, electricity market, load aggregator, load and price uncertainties, profit maximization, real time balancing operation

Procedia PDF Downloads 416
18234 A Real Time Monitoring System of the Supply Chain Conditions, Products and Means of Transport

Authors: Dimitris E. Kontaxis, George Litainas, Dimitris P. Ptochos

Abstract:

Real-time monitoring of the supply chain conditions and procedures is a critical element for the optimal coordination and safety of the deliveries, as well as for the minimization of the delivery time and cost. Real-time monitoring requires IoT data streams, which are related to the conditions of the products and the means of transport (e.g., location, temperature/humidity conditions, kinematic state, ambient light conditions, etc.). These streams are generated by battery-based IoT tracking devices, equipped with appropriate sensors, and are transmitted to a cloud-based back-end system. Proper handling and processing of the IoT data streams, using predictive and artificial intelligence algorithms, can provide significant and useful results, which can be exploited by the supply chain stakeholders in order to enhance their financial benefits, as well as the efficiency, security, transparency, coordination, and sustainability of the supply chain procedures. The technology, the features, and the characteristics of a complete, proprietary system, including hardware, firmware, and software tools -developed in the context of a co-funded R&D programme- are addressed and presented in this paper.
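The IoT data streams described above might carry records like the following; the schema and field names are hypothetical, as the paper does not publish its payload format:

```python
# Hypothetical telemetry record for a supply-chain tracking device:
# location, temperature/humidity, a kinematic summary, and ambient light,
# serialized as JSON for transmission to a cloud back-end.
import json
from dataclasses import dataclass, asdict

@dataclass
class TelemetryRecord:
    device_id: str
    timestamp_utc: str       # ISO 8601
    lat: float
    lon: float
    temperature_c: float
    humidity_pct: float
    acceleration_g: float    # kinematic-state summary
    ambient_light_lux: float

rec = TelemetryRecord("tracker-042", "2021-06-01T12:00:00Z",
                      37.98, 23.73, 4.2, 61.0, 0.03, 120.0)
payload = json.dumps(asdict(rec))
print(payload)
```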

Keywords: IoT embedded electronics, real-time monitoring, tracking device, sensor platform

Procedia PDF Downloads 177
18233 Public Debt and Fiscal Stability in Nigeria

Authors: Abdulkarim Yusuf

Abstract:

Motivation: The Nigerian economy has seen significant macroeconomic instability, fuelled mostly by an overreliance on fluctuating oil revenues. The rising disparity between tax receipts and government spending in Nigeria necessitates government borrowing to fund the anticipated pace of economic growth. Rising public debt and fiscal sustainability concerns are limiting the government's ability to invest in key infrastructure that promotes private investment and growth in Nigeria. Objective: This paper fills an empirical research vacuum by examining the impact of public debt on fiscal sustainability in Nigeria, given the significance of fiscal stability in reducing poverty and the constraints that an unsustainable debt burden imposes on it. Data and method: Annual time series data covering the period 1980 to 2022, subjected to conventional and structural-break stationarity tests, and the Autoregressive Distributed Lag (ARDL) estimation approach were adopted for this study. Results: The results reveal that domestic debt stock, debt service payments, foreign reserve stock, the exchange rate, and private investment all had a major adverse effect on fiscal stability in the long and short run, corroborating the debt overhang and crowding-out hypotheses. External debt stock, the prime lending rate, and the degree of trade openness, which boosted fiscal stability in the long run, had a major detrimental effect on fiscal stability in the short run, whereas foreign direct investment inflows had an important beneficial impact on fiscal stability in both the long and short run. Implications: The results indicate that fiscal measures that encourage domestic resource mobilization, sustainable debt management techniques, and dependence on external debt for deficit financing will improve fiscal stability and drive growth.

Keywords: ARDL co-integration, debt overhang, debt servicing, fiscal stability, public debt

Procedia PDF Downloads 57
18232 Analysing the Benefit of Real-Time Digital Translation for ESL Learners in a Post-secondary Canadian Classroom

Authors: Jordan Shuler

Abstract:

The goal of this study is to determine whether real-time language translation benefits ESL learners by contributing to overall equity in the classroom. Equity will be measured quantitatively through assessment performance and qualitatively through student surveys. Two separate sections of students studying the same course will receive identical curriculum: one group, the control, will be taught in English, and the other in English with real-time translation into the students' first languages. The professor will use Microsoft Translator during lectures, in-class discussions, and Q&A time. The college is committed to finding new ways of teaching and learning, as outlined in its Strategy 2022. If this research finds a positive relationship between language translation and student academic success, the technology will likely be encouraged for adoption by all George Brown College faculty. With greater acceptance, this technology could influence equity and pedagogy in the larger educational community.

Keywords: ESL learners, equity, innovative teaching strategies, language translation

Procedia PDF Downloads 120
18231 Correlation between Sprint Performance and Vertical Jump Height in Elite Female Football Players

Authors: Svetlana Missina, Anatoliy Shipilov, Alexandr Vavaev

Abstract:

The purpose of the present study was to investigate the relationship between sprint and vertical jump performance in elite female football players. Twenty-four professional female football players (age 18.6±3.1 years; height 168.3±6.3 cm; body mass 61.6±7.4 kg; mean±SD) were tested for 30-m sprint time, 10-m sprint time, and countermovement jump (CMJ) and squat jump (SJ) height. Participants performed three countermovement jumps and three squat jumps for maximal height on a force platform. Mean values of the three trials were used in the statistical analysis. The displacement of the center of mass (COM) during the flight phase (i.e., jump height) was calculated from the vertical velocity of the COM at the moment of take-off. 30-m and 10-m sprint times were measured using the OptoGait optical system. The best of the three trials was used for analysis. A significant negative correlation was found between 30-m sprint time and CMJ and SJ height (r = -0.85 and r = -0.79, respectively), and between 10-m sprint time and CMJ and SJ height (r = -0.73 and r = -0.8, respectively); step frequency was also significantly related to CMJ peak power (r = -0.57). Our study indicates that there is a strong correlation between sprint and jump performance in elite female football players; thus, the vertical jump test can be considered a good predictor of sprint and agility performance in female football.
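As an illustrative sketch of the quantities involved (not the authors' actual pipeline), the snippet below computes jump height from the take-off velocity of the COM, h = v^2 / (2g), and a Pearson correlation between sprint times and jump heights; the athlete data are hypothetical.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_takeoff_velocity(v_takeoff):
    """COM flight height from vertical take-off velocity: h = v^2 / (2g)."""
    return v_takeoff ** 2 / (2 * G)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: faster 30-m sprints tend to pair with higher CMJ heights,
# giving a strongly negative r, as reported in the abstract.
sprint_30m = [4.45, 4.52, 4.60, 4.71, 4.80]  # s
cmj_height = [0.34, 0.33, 0.30, 0.28, 0.26]  # m

print(jump_height_from_takeoff_velocity(2.6))  # flight height for a 2.6 m/s take-off (~0.34 m)
print(pearson_r(sprint_30m, cmj_height))       # strongly negative
```

The h = v^2/(2g) relation is the standard force-platform jump-height estimate mentioned in the abstract; the correlation routine is generic.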

Keywords: agility, female football players, sprint performance, vertical jump height

Procedia PDF Downloads 469
18230 A Hybrid Genetic Algorithm for Assembly Line Balancing in the Automotive Sector

Authors: Qazi Salman Khalid, Muhammad Khalid, Shahid Maqsood

Abstract:

This paper presents a solution for optimizing the cycle time in an assembly line with human-robot collaboration and diverse operators. A genetic algorithm with tailored parameters is used to address the assembly line balancing problem in the automobile sector, and a mathematical model depicting the problem is developed. Currently, the firm runs on the largest-candidate rule; however, this causes a lag in orders, which ultimately incurs penalties. The results of the study show that the proposed GA is effective in providing efficient solutions and that cycle time has a significant impact on productivity.
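The abstract does not give the GA details, so the following is only a hedged toy sketch of the general idea: evolving task-to-station assignments to minimise cycle time (the busiest station's load). The task times, station count, and GA parameters are hypothetical, and precedence constraints, human-robot collaboration, and operator diversity are omitted.

```python
import random

random.seed(7)

TASK_TIMES = [4, 6, 2, 5, 3, 7, 4, 2]  # hypothetical task durations (min)
N_STATIONS = 3

def cycle_time(assign):
    """Cycle time = load of the busiest station (the quantity to minimise)."""
    loads = [0] * N_STATIONS
    for task, station in enumerate(assign):
        loads[station] += TASK_TIMES[task]
    return max(loads)

def crossover(a, b):
    """One-point crossover of two assignment chromosomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(assign, rate=0.1):
    """Reassign each task to a random station with probability `rate`."""
    return [random.randrange(N_STATIONS) if random.random() < rate else s
            for s in assign]

def evolve(pop_size=30, generations=60):
    pop = [[random.randrange(N_STATIONS) for _ in TASK_TIMES]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cycle_time)
        parents = pop[:pop_size // 2]  # elitist truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=cycle_time)

best = evolve()
print(best, cycle_time(best))
```

With total work content 33 min across 3 stations, no assignment can beat a cycle time of 11 min, which gives a quick sanity check on the GA's output.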

Keywords: line balancing, cycle time, genetic algorithm, productivity

Procedia PDF Downloads 137
18229 Evaluation of Solid-Gas Separation Efficiency in Natural Gas Cyclones

Authors: W. I. Mazyan, A. Ahmadi, M. Hoorfar

Abstract:

Objectives/Scope: This paper proposes a mathematical model for calculating the solid-gas separation efficiency in cyclones. This model provides better agreement with experimental results than existing mathematical models. Methods: The separation ratio efficiency, ϵsp, is evaluated by calculating the outlet-to-inlet particle count ratio. Similar to mathematical derivations in the literature, the inlet and outlet particle counts were evaluated based on an Eulerian approach. The model also includes the external forces acting on the particle (i.e., centrifugal and drag forces). In addition, the proposed model evaluates the exact length that the particle travels inside the cyclone for the evaluation of the number of turns inside the cyclone. The separation efficiency derivation, based on Stokes' law, considers the effect of the inlet tangential velocity on the separation performance. In cyclones, the inlet velocity is a very important factor in determining the separation performance. Therefore, the proposed model provides an accurate estimation of the actual cyclone separation efficiency. Results/Observations/Conclusion: The separation ratio efficiency, ϵsp, is studied to evaluate the performance of the cyclone for particles ranging from 1 micron to 10 microns. The proposed model is compared with results in the literature and shows an error of 7% between its efficiency and the efficiency obtained from experimental results for 1-micron particles. At the same time, the proposed model gives the user the flexibility to analyze the separation efficiency at different inlet velocities. Additive Information: The proposed model determines the separation efficiency accurately and could also be used to optimize the separation efficiency of cyclones at low cost: through trial-and-error testing, through dimensional changes that enhance separation, and through increasing the particle centrifugal forces. Ultimately, the proposed model provides a powerful tool to optimize and enhance existing cyclones at low cost.
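The particle-diameter scaling underlying Stokes-law separation models can be sketched as follows. This is not the paper's full model; it only balances the centrifugal force against Stokes drag for a particle at radius r, with hypothetical operating values, to show why the inlet tangential velocity u_t (entering squared) and the particle diameter (entering squared) dominate the radial drift toward the cyclone wall.

```python
def stokes_radial_velocity(d_p, rho_p, rho_g, mu, u_t, r):
    """Radial drift velocity (m/s) of a particle in a cyclone, balancing
    the centrifugal force against Stokes drag:
        v_r = d_p^2 * (rho_p - rho_g) * u_t^2 / (18 * mu * r)
    """
    return d_p ** 2 * (rho_p - rho_g) * u_t ** 2 / (18 * mu * r)

# Hypothetical operating point: sand-like solids in natural gas.
common = dict(rho_p=2650.0, rho_g=0.8, mu=1.1e-5, u_t=15.0, r=0.1)

v_1um = stokes_radial_velocity(1e-6, **common)
v_10um = stokes_radial_velocity(10e-6, **common)
print(v_1um, v_10um, v_10um / v_1um)  # drift scales with d_p^2, so the ratio is 100
```

The d_p^2 dependence is why 1-micron particles (the abstract's hardest case) separate two orders of magnitude more slowly than 10-micron ones at the same inlet velocity.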

Keywords: cyclone efficiency, solid-gas separation, mathematical model, models error comparison

Procedia PDF Downloads 392
18228 Identification Strategies for Unknown Victims from Mass Disasters and Unknown Perpetrators from Violent Crime or Terrorist Attacks

Authors: Michael Josef Schwerer

Abstract:

Background: The identification of unknown victims from mass disasters, violent crimes, or terrorist attacks is frequently facilitated through information from missing persons lists, portrait photos, old or recent pictures showing unique characteristics of a person such as scars or tattoos, or simply reference samples from blood relatives for DNA analysis. In contrast, the identification or at least the characterization of an unknown perpetrator of criminal or terrorist actions remains challenging, particularly in the absence of material or data for comparison, such as fingerprints previously stored in criminal records. In scenarios that result in high levels of destruction of the perpetrator's corpse, for instance, blast or fire events, the chance of a positive identification using standard techniques is further impaired. Objectives: This study presents the forensic genetic procedures used in the Legal Medicine Service of the German Air Force for the identification of unknown individuals, including cases in which reference samples are not available. Scenarios requiring such efforts predominantly involve aircraft crash investigations, which are routinely carried out by the German Air Force Centre of Aerospace Medicine as one of the institution's essential missions. Further, casework by the military police or military intelligence is supported on the basis of administrative cooperation. In the talk, data from study projects, as well as examples from real casework, will be demonstrated and discussed with the audience. Methods: Forensic genetic identification in our laboratories involves the analysis of Short Tandem Repeats and Single Nucleotide Polymorphisms in nuclear DNA along with mitochondrial DNA haplotyping. Extended DNA analysis involves phenotypic markers for skin, hair, and eye color together with the investigation of a person's biogeographic ancestry. Assessment of the biological age of an individual employs CpG-island methylation analysis using bisulfite-converted DNA. Forensic Investigative Genealogy allows the detection of an unknown person's blood relatives in reference databases. Technically, end-point PCR, real-time PCR, capillary electrophoresis, and pyrosequencing, as well as next-generation sequencing using flow-cell-based and chip-based systems, are used. Results and Discussion: Optimization of DNA extraction from various sources, including difficult matrices like formalin-fixed, paraffin-embedded tissues and degraded specimens from decomposed bodies or from decedents exposed to blast or fire events, lays the groundwork for successful PCR amplification and subsequent genetic profiling. For cases with extremely low yields of extracted DNA, whole-genome preamplification protocols are successfully used, particularly for genetic phenotyping. Improved primer design for CpG-methylation analysis, together with validated sampling strategies for the analyzed substrates from, e.g., lymphocyte-rich organs, allows successful biological age estimation even in bodies with highly degraded tissue material. Conclusions: Successful identification of unknown individuals, or at least their phenotypic characterization using pigmentation markers together with age-informative methylation profiles, possibly supplemented by family-tree searches employing Forensic Investigative Genealogy, can be provided in specialized laboratories. However, standard laboratory procedures must be adapted to work with difficult and highly degraded sample materials.

Keywords: identification, forensic genetics, phenotypic markers, CpG methylation, biological age estimation, forensic investigative genealogy

Procedia PDF Downloads 51
18227 Evaluation of Quick Covering Machine for Grain Drying Pavement

Authors: Fatima S. Rodriguez, Victorino T. Taylan, Manolito C. Bulaong, Helen F. Gavino, Vitaliana U. Malamug

Abstract:

In sun drying, grain quality is greatly reduced when paddy grains are caught by the rain unsacked and unstored, resulting in reduced profit. The objectives of this study were to design and fabricate a quick covering machine for a grain drying pavement; to test and evaluate the operating characteristics of the machine in terms of deployment speed, recovery speed, deployment time, recovery time, power consumption, and aesthetics of the laminated sack; and to conduct partial budget and cost curve analyses. The machine was able to cover the grains in a 12.8 m x 22.5 m grain drying pavement in an average time of 17.13 s. It consumed 0.53 W-hr for the deployment and recovery of the cover. The machine entailed an investment cost of $1,344.40 and an annual cost charge of $647.32. Moreover, the savings per year using the quick covering machine were $101.83.

Keywords: quick covering machine, grain drying pavement, laminated polypropylene, recovery time

Procedia PDF Downloads 323
18226 Floor Response Spectra of RC Frames: Influence of the Infills on the Seismic Demand on Non-Structural Components

Authors: Gianni Blasi, Daniele Perrone, Maria Antonietta Aiello

Abstract:

The seismic vulnerability of non-structural components is nowadays recognized to be a key issue in performance-based earthquake engineering. Recent loss estimation studies, as well as the damage observed during past earthquakes, have shown that non-structural damage represents the highest share of economic loss in a building and can in many cases be crucial from a life-safety perspective during the post-earthquake emergency. The procedures developed to evaluate the seismic demand on non-structural components have been constantly improved, and recent studies have demonstrated that the existing formulations provided by the main standards generally ignore features that have a significant influence on the seismic accelerations/displacements acting on non-structural components. Since the influence of the infills on the dynamic behaviour of RC structures has already been evidenced by many authors, the evaluation of the seismic demand on non-structural components should consider the presence of the infills as well as their mechanical properties. This study focuses on the evaluation of time-history floor accelerations in RC buildings, which are a useful means of performing seismic vulnerability analyses of non-structural components through the well-known cascade method. Dynamic analyses are performed on an 8-storey RC frame, taking into account the presence of the infills; the influence of the elastic modulus of the panel on the results is investigated, as is the presence of openings. Floor accelerations obtained from the analyses are used to evaluate the floor response spectra, in order to define the demand on non-structural components depending on the properties of the infills. Finally, the results are compared with formulations provided by the main international standards, in order to assess their accuracy and eventually define the improvements required according to the results of the present research work.
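The floor-spectrum step of the cascade method can be sketched as follows: a damped single-degree-of-freedom oscillator is driven by a floor acceleration history, and its peak pseudo-acceleration is recorded period by period. This is a generic textbook implementation (central-difference integration), not the authors' model, and the input motion is a hypothetical 2 Hz sine burst rather than an analysis result.

```python
import math

def sdof_spectral_accel(accel, dt, period, zeta=0.05):
    """Peak pseudo-spectral acceleration of a damped SDOF oscillator
    (natural period `period`, damping ratio `zeta`) driven at its base by
    the floor acceleration history `accel`, via central differences."""
    w = 2 * math.pi / period
    u_prev, u, u_max = 0.0, 0.0, 0.0
    a0 = 1 / dt ** 2 + zeta * w / dt
    a1 = 2 / dt ** 2 - w ** 2
    a2 = 1 / dt ** 2 - zeta * w / dt
    for a_floor in accel:
        u_next = (-a_floor + a1 * u - a2 * u_prev) / a0
        u_prev, u = u, u_next
        u_max = max(u_max, abs(u))
    return w ** 2 * u_max

# Hypothetical floor motion: a 2 Hz sine burst (m/s^2), 5 s sampled at 200 Hz.
dt = 0.005
floor_acc = [math.sin(2 * math.pi * 2.0 * i * dt) for i in range(1000)]

periods = [0.1, 0.3, 0.5, 0.7, 1.0]
spectrum = {T: sdof_spectral_accel(floor_acc, dt, T) for T in periods}
print(spectrum)  # peaks near T = 0.5 s, the period of the forcing motion
```

Repeating this over a grid of periods for each storey's acceleration history yields the floor response spectra discussed in the abstract; the spectral peak at the forcing period illustrates the resonance that infill properties shift.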

Keywords: floor spectra, infilled RC frames, non-structural components, seismic demand

Procedia PDF Downloads 326
18225 Life Cycle Carbon Dioxide Emissions from the Construction Phase of Highway Sector in China

Authors: Yuanyuan Liu, Yuanqing Wang, Di Li

Abstract:

Carbon dioxide (CO2) emissions mitigation from road construction activities is one of the potential pathways to deal with climate change, owing to the sector's high use of materials, machinery energy consumption, and large quantities of vehicle and equipment fuel for transportation and on-site construction activities. Aiming to assess the environmental impact of road infrastructure construction activities and to identify hotspots of emissions sources, this study developed a life-cycle CO2 emissions assessment framework covering the three stages of material production, to-site transportation, and on-site construction, under the guidance of the LCA principles of ISO 14040. A streamlined inventory analysis of the sub-processes of each stage was then conducted based on the budget files of highway projects in China. The calculation results were normalized to a functional unit expressed in tons per km per lane. A comparison between the emissions of each stage and sub-process was then made to identify the major contributors over the whole highway life cycle. In addition, the results were compared with results from other countries to situate the CO2 emissions associated with Chinese road infrastructure internationally. The results showed that the materials production stage produces most of the CO2 emissions (more than 80%), with the production of cement and steel accounting for large shares of carbon emissions. Life-cycle CO2 emissions of the fuel and electric energy associated with to-site and on-site transportation vehicles and equipment are a minor component of the total life-cycle CO2 emissions from highway construction activities. Bridges and tunnels are the dominant carbon contributors compared to road segments. The life-cycle CO2 emissions of road segments in Chinese highway projects, about 1,500 tons per km per lane, are slightly higher than the estimates for highways in European countries and the USA. In particular, the life-cycle CO2 emissions of road pavements in the majority of cities worldwide are about 500 tons per km per lane; however, there are obvious differences between cities when the estimation includes bridges and tunnels. The findings of the study offer decision-makers a more comprehensive reference for understanding the contribution of road infrastructure to climate change, especially the contribution from road infrastructure construction activities in China. In addition, the identified hotspots of emissions sources provide insights into how to reduce road carbon emissions for the development of sustainable transportation.
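The normalisation to the functional unit can be sketched as follows; the inventory numbers below are hypothetical and only illustrate the bookkeeping, in which material production dominates the construction-phase total.

```python
# Hypothetical inventory for a 10 km, 4-lane highway segment (tonnes CO2).
material_production = {"cement": 30000.0, "steel": 18000.0, "asphalt": 6000.0}
to_site_transport = 2500.0
on_site_construction = 3500.0

def lifecycle_co2_per_km_lane(materials, to_site, on_site, km, lanes):
    """Normalise total construction-phase CO2 to the functional unit
    tonnes per km per lane, as used for the cross-country comparison."""
    total = sum(materials.values()) + to_site + on_site
    return total / (km * lanes)

unit = lifecycle_co2_per_km_lane(material_production, to_site_transport,
                                 on_site_construction, km=10, lanes=4)
stage_total = (sum(material_production.values())
               + to_site_transport + on_site_construction)
share_materials = sum(material_production.values()) / stage_total
print(unit, share_materials)  # materials account for well over 80% of the total
```

With these illustrative figures the functional-unit value comes out at 1,500 t/km/lane, the same order as the abstract's estimate for Chinese road segments; any real study would of course use project-specific emission factors.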

Keywords: carbon dioxide emissions, construction activities, highway, life cycle assessment

Procedia PDF Downloads 269
18224 Continuous Catalytic Hydrogenation and Purification for Synthesis Non-Phthalate

Authors: Chia-Ling Li

Abstract:

The scope of this article includes the production of 10,000 metric tons of non-phthalate per annum. The production process includes hydrogenation, separation, purification, and recycling of unprocessed feedstock. Based on experimental data, conversion and selectivity were chosen as reaction model parameters. The synthesis and separation processes of non-phthalate and phthalate were established using Aspen Plus software. The article is divided into six parts: estimation of physical properties, integration of production processes, a purification case study, utility consumption, an economic feasibility study, and identification of bottlenecks. The purity of the products was higher than 99.9 wt.%. The process parameters provide important guidance for the commercialization of phthalate hydrogenation.

Keywords: economic analysis, hydrogenation, non-phthalate, process simulation

Procedia PDF Downloads 277
18223 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. By taking this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily coherent pixels, which are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, and the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images from a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporally averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and the selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring across a wide range of scientific and practical applications.
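The unit-by-unit bookkeeping of the continuous chain and the temporal averaging of the discontinuous chain can be sketched as follows. This is only the data-handling skeleton, with toy "images" represented as flat pixel lists; the actual interferometric processing (coherence estimation, filtering, pixel selection) is not reproduced.

```python
from itertools import islice

def units(images, window):
    """Split a continuous GBSAR acquisition stream into fixed-size units so
    that only `window` images need to be held in memory at a time."""
    it = iter(images)
    while True:
        unit = list(islice(it, window))
        if not unit:
            return
        yield unit

def temporal_average(stack):
    """Pixel-wise temporal mean of a campaign stack, used to raise the
    signal-to-noise ratio of discontinuous GBSAR data."""
    n = len(stack)
    return [sum(px) / n for px in zip(*stack)]

# Hypothetical stream: 10 tiny 'images' of 4 pixels each, processed in units of 4.
stream = [[float(i + p) for p in range(4)] for i in range(10)]
for unit in units(stream, window=4):
    avg = temporal_average(unit)
    print(len(unit), avg)
```

Because `units` is a generator, it never materialises more than one window of images, which is the memory-bounding idea behind processing "unit by unit"; each unit (or campaign stack) would then feed the interferometry step.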

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 161
18222 Testing for Endogeneity of Foreign Direct Investment: Implications for Economic Policy

Authors: Liwiusz Wojciechowski

Abstract:

Research background: Current knowledge does not give a clear answer to the question of the impact of FDI on productivity. The results of empirical studies are still inconclusive, no matter how extensive and diverse they are in terms of research approaches or the groups of countries analyzed. One should also take into account the possibility that FDI and productivity are linked and that there is a bidirectional relationship between them. This issue is particularly important because, on the one hand, FDI can contribute to changes in productivity in the host country, but on the other hand, the level and dynamics of productivity may determine whether FDI should be undertaken in a given country. A two-way relationship between the presence of foreign capital and productivity in the host country should therefore be assumed, taking into consideration the endogenous nature of FDI. Purpose of the article: The overall objective of this study is to determine the causality between foreign direct investment and total factor productivity in the host country, in terms of the differing relative absorptive capacity across countries. In the classic sense, causality among variables is not always obvious and requires testing, which facilitates the proper specification of FDI models. The aim of this article is to study the endogeneity of selected macroeconomic variables commonly used in FDI models in the case of the Visegrad countries, the main recipients of FDI in CEE. The findings may be helpful in determining the structure of the actual relationship between variables, in the estimation of appropriate models, and in forecasting as well as economic policymaking. Methodology/methods: Panel and time-series data techniques, including the GMM estimator, VEC models, and causality tests, were utilized in this study. Findings & Value added: The obtained results confirm the hypothesis of bidirectional causality between FDI and total factor productivity. Although results differ among countries and levels of data aggregation, the implications may be useful for policymakers when designing policies to attract foreign capital.
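A minimal version of the causality testing the study relies on can be sketched as a Granger-type F test: does lagged x reduce the residual sum of squares when predicting y, beyond y's own lag? This stdlib-only sketch with one lag and simulated series is far simpler than the panel GMM/VEC setup in the study; the series below are hypothetical, constructed so that x leads y.

```python
import random

def ols_ssr(X, y):
    """Sum of squared residuals of an OLS fit, via the normal equations
    (Gaussian elimination, adequate for a couple of regressors)."""
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(y))) for b in range(k)]
         for a in range(k)]
    c = [sum(X[i][a] * y[i] for i in range(len(y))) for a in range(k)]
    for p in range(k):                       # forward elimination
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            A[r] = [ar - f * ap for ar, ap in zip(A[r], A[p])]
            c[r] -= f * c[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):           # back substitution
        beta[p] = (c[p] - sum(A[p][q] * beta[q]
                              for q in range(p + 1, k))) / A[p][p]
    return sum((y[i] - sum(b * x for b, x in zip(beta, X[i]))) ** 2
               for i in range(len(y)))

def granger_f(x, y):
    """Granger-style F statistic with one lag and one restriction: compare
    y_t ~ y_{t-1} against y_t ~ y_{t-1} + x_{t-1}."""
    ys = y[1:]
    restricted = [[1.0, y[i]] for i in range(len(y) - 1)]
    unrestricted = [[1.0, y[i], x[i]] for i in range(len(y) - 1)]
    ssr_r, ssr_u = ols_ssr(restricted, ys), ols_ssr(unrestricted, ys)
    df = len(ys) - 3
    return (ssr_r - ssr_u) / (ssr_u / df)

# Hypothetical series in which x clearly leads y by one period.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.3 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 0.1))

print(granger_f(x, y))  # large F: lagged x helps predict y
print(granger_f(y, x))  # small F: lagged y does not help predict x
```

Running the test in both directions, as above, is the asymmetry that motivates treating FDI as endogenous when both statistics are significant.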

Keywords: endogeneity, foreign direct investment, multi-equation models, total factor productivity

Procedia PDF Downloads 197
18221 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. One approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging; the quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model previously applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model has been examined using both real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; this model may reveal further insight into quantum chaos.
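The Kalman-filter estimation step mentioned above can be sketched in its classical scalar form, assuming (hypothetically) an AR(1) latent state observed in Gaussian noise; the paper's quantum matrix formalization is not reproduced here.

```python
import random

random.seed(0)

# Simulate an AR(1) latent series observed with measurement noise:
#   state:       s_t = phi * s_{t-1} + w_t,  w_t ~ N(0, q)
#   observation: z_t = s_t + v_t,            v_t ~ N(0, r)
phi, q, r = 0.9, 0.04, 1.0
states, obs = [0.0], [random.gauss(0.0, r ** 0.5)]
for _ in range(500):
    states.append(phi * states[-1] + random.gauss(0.0, q ** 0.5))
    obs.append(states[-1] + random.gauss(0.0, r ** 0.5))

def kalman_filter(zs, phi, q, r):
    """Scalar Kalman filter: filtered state estimates for an AR(1) state
    observed in additive Gaussian noise."""
    x, p = 0.0, 1.0                      # initial state estimate and variance
    estimates = []
    for z in zs:
        x_pred = phi * x                 # predict
        p_pred = phi * phi * p + q
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (z - x_pred)    # update with the new observation
        p = (1 - k) * p_pred
        estimates.append(x)
    return estimates

est = kalman_filter(obs, phi, q, r)
raw_err = sum((z - s) ** 2 for z, s in zip(obs, states)) / len(obs)
kf_err = sum((e - s) ** 2 for e, s in zip(est, states)) / len(est)
print(raw_err, kf_err)  # filtering reduces mean squared error vs raw observations
```

In the classical setting this recursion estimates the hidden autoregressive state; the paper's contribution is to replace the scalar state with the matrix objects of quantum theory while keeping this estimation role for the filter.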

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 469
18220 H∞ Sampled-Data Control for Linear Systems with Time-Varying Delays: Application to Power System

Authors: Chang-Ho Lee, Seung-Hoon Lee, Myeong-Jin Park, Oh-Min Kwon

Abstract:

This paper investigates improved stability criteria for the sampled-data control of linear systems with disturbances and time-varying delays. Based on Lyapunov-Krasovskii stability theory, delay-dependent conditions sufficient to ensure H∞ stability for the system are derived in the form of linear matrix inequalities (LMIs). The effectiveness of the proposed method is shown in numerical examples.

Keywords: sampled-data control system, Lyapunov-Krasovskii functional, time delay-dependent, LMI, H∞ control

Procedia PDF Downloads 320
18219 Antibacterial Property of ZnO Nanoparticles: Effect of Intrinsic Defects

Authors: Suresh Kumar Verma, Jugal Kishore Das, Ealisha Jha, Mrutyunjay Suar, SKS Parashar

Abstract:

In recent years, nanoforms of inorganic metallic oxides have attracted a lot of interest due to their small size and significantly improved physical, chemical, and biological properties compared to their molecular precursors. Some of these inorganic materials, such as TiO2, ZnO, MgO, CaO, and Al2O3, have been extensively used in biological applications. Zinc oxide is a wurtzite-type semiconductor and piezoelectric material exhibiting excellent electrical, optical, and chemical properties, with a band gap of 3.1-3.4 eV. Nanoforms of zinc oxide (ZnO) are increasingly recognised for their utility in biological applications. Significant physical parameters such as surface area, particle size, surface charge, and zeta potential make ZnO nanoparticles suitable for uptake, persistence, and biological and chemical activity inside living cells. The present study shows the effect of the intrinsic defects of ZnO nanocrystals synthesized by the high-energy ball milling (HEBM) technique on their antibacterial activity. Bulk zinc oxide purchased from the market was ball milled for 7 h, 10 h, and 15 h to produce nanosized zinc oxide. The structural and optical modifications of the synthesized particles were determined by X-ray diffraction (XRD), scanning electron microscopy, and electron paramagnetic resonance (EPR). The antibacterial property of the synthesized zinc oxide nanoparticles was tested using well diffusion, minimum inhibitory concentration, minimum bactericidal concentration, reactive oxygen species (ROS) estimation, and membrane potential determination methods. In this study, we observed that the antibacterial activity of ZnO nanoparticles arises from intrinsic defects that vary with particle size and milling time.

Keywords: high energy ball milling, ZnO nanoparticles, EPR, antibacterial properties

Procedia PDF Downloads 428
18218 Relationship Between Pain Intensity at the Time of the Hamstring Muscle Injury and Hamstring Muscle Lesion Volume Measured by Magnetic Resonance Imaging

Authors: Grange Sylvain, Plancher Ronan, Reurink Guustav, Croisille Pierre, Edouard Pascal

Abstract:

The primary objective of this study was to analyze the potential correlation between the pain experienced at the time of a hamstring muscle injury and the volume of the lesion measured on MRI. The secondary objectives were to analyze the correlations between this pain and the lesion grade, as well as the affected hamstring muscle. We performed a retrospective analysis of the data collected in a prospective, multicenter, non-interventional cohort study (HAMMER). Patients with a suspected hamstring muscle injury had an MRI after the injury and were at the same time evaluated for the pain intensity experienced at the time of the injury on a Numerical Pain Rating Scale (NPRS) from 0 to 10. A total of 61 patients were included in the present analysis. MRIs were performed on average less than 8 days after injury. There was a significant correlation between pain and injury volume (r=0.287; p=0.025). There was no significant correlation between pain and the lesion grade (p>0.05), nor between pain and the affected hamstring muscle (p>0.05). Pain at the time of injury thus appeared to be correlated with the volume of muscle affected. These results confirm the value of a clinical approach in the initial evaluation of hamstring injuries to better select patients eligible for further imaging.
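As an illustration of correlating an ordinal score such as the 0-10 NPRS with a continuous measure such as lesion volume, the sketch below computes Spearman's rho (the Pearson correlation of average ranks, which handles the tied scores an ordinal scale produces). The abstract does not specify the statistic used, and the patient data below are hypothetical, so this is only a hedged sketch, not the study's analysis.

```python
def ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank -- suited
    to ordinal scores such as the 0-10 NPRS."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    rk = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over the run of tied values
        mean_rank = (i + j) / 2 + 1
        for t in range(i, j + 1):
            rk[order[t]] = mean_rank
        i = j + 1
    return rk

def spearman_rho(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical NPRS scores and MRI lesion volumes (cm^3) for 8 patients.
nprs = [3, 7, 5, 8, 2, 6, 4, 7]
volume = [1.2, 9.5, 4.1, 11.0, 0.8, 5.5, 6.0, 7.2]
print(spearman_rho(nprs, volume))  # strongly positive for this toy sample
```

A rank-based statistic only assumes a monotone pain-volume relationship, which is a natural hedge when one of the variables is a bounded ordinal scale.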

Keywords: hamstring muscle injury, MRI, lesion volume, pain

Procedia PDF Downloads 98