Search results for: fixed time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18622

18232 Subjective Time as a Marker of the Present Consciousness

Authors: Anastasiya Paltarzhitskaya

Abstract:

Subjective time plays an important role in consciousness processes and in self-awareness of the present moment. The concept of intrinsic neural timescales (INT) explains differences in the perception of various time intervals. The capacity to experience the present builds on the fundamental properties of temporal cognition. The challenge that both philosophy and neuroscience try to answer is how the brain differentiates the present from the past and the future. In our work, we analyze papers that describe mechanisms involved in the perception of the ‘present’ and the ‘non-present’, i.e., future and past moments. Taking into account that we perceive time intervals even during rest or relaxation, we suppose that default-mode network activity can encode time features, including the present moment. We compare results of time-perception studies in which brain activity was recorded in states with different flows of time, including resting states and “mental time travel”. According to the concept of mental time travel, we employ a range of scenarios that demand episodic memory. However, some papers show that the hippocampal region does not activate during mental time travel. This controversial result is further complicated by the phenomenological aspect, which includes a holistic set of information about the individual’s past and future.

Keywords: temporal consciousness, time perception, memory, present

Procedia PDF Downloads 43
18231 Analysis of the Production Time in a Pharmaceutical Company

Authors: Hanen Khanchel, Karim Ben Kahla

Abstract:

Pharmaceutical companies are facing competition. Indeed, the price differences between competing products can be such that it becomes difficult to compensate for them through differences in added value. The conditions of competition are no longer homogeneous for the players involved. The price of a product is a given that puts a company and its customer face to face. However, price setting obliges the company to consider internal factors relating to production costs and external factors such as customer attitudes, the existence of regulations, and the structure of the market in which the firm operates. In setting the selling price, the company must first take into account internal factors relating to its costs: production costs fall into two categories, fixed costs and variable costs that depend on the quantities produced. The company cannot sell the product for less than it costs to produce. It therefore calculates the unit cost of production, to which it adds the unit cost of distribution, giving it the total unit cost of the product. The company adds its margin and thus determines its selling price. The margin is used to remunerate the capital providers and to finance the activity of the company and its investments. Production costs are related to the quantities produced: large-scale production generally reduces the unit cost of production, which is an asset for companies serving mass markets. This shows that small and medium-sized companies with limited market segments need to make greater efforts to secure their profit margins. As a result, faced with fluctuating market prices for raw materials and increasing staff costs, the company must seek to optimize its production time in order to reduce costs and eliminate waste. Then, the customer pays only for added value.
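The cost-plus pricing logic described above can be sketched in a few lines; all figures here are hypothetical and only illustrate the arithmetic, not the company's actual costs:

```python
# Toy illustration of the pricing logic described above (all figures
# hypothetical): unit production cost + unit distribution cost gives the
# total unit cost, to which the company adds its margin.
quantity = 10_000            # units produced
fixed_costs = 50_000.0       # costs independent of quantity
variable_cost_per = 2.0      # production cost per unit
distribution_per = 0.5       # distribution cost per unit

# Fixed costs are spread over the quantity produced, so larger-scale
# production lowers the unit cost of production.
unit_production_cost = fixed_costs / quantity + variable_cost_per
full_unit_cost = unit_production_cost + distribution_per
selling_price = full_unit_cost * 1.20    # 20% margin, for illustration

print(unit_production_cost, full_unit_cost, round(selling_price, 2))
```

Doubling the quantity in this sketch lowers the unit production cost, which is the economy-of-scale effect the abstract attributes to mass-production markets.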
Thus, based on this principle, we decided to create a project that addresses the problem of waste in our company, with the objectives of reducing production costs and improving performance indicators. This paper presents the implementation of a Value Stream Mapping (VSM) project in a pharmaceutical company. It is structured as follows: 1) determination of the product family, 2) drawing of the current state, 3) drawing of the future state, 4) action plan and implementation.

Keywords: VSM, waste, production time, kaizen, cartography, improvement

Procedia PDF Downloads 128
18230 Optimization of Temperature for Crystal Violet Dye Adsorption Using Castor Leaf Powder by Response Surface Methodology

Authors: Vipan Kumar Sohpal

Abstract:

The effect of temperature on the adsorption of crystal violet dye (CVD) was investigated using castor leaf powder (CLP) that was prepared from the mature leaves of castor trees through a chemical reaction. The values of pH (8), adsorbent dose (10 g/L), initial dye concentration (10 g/L), time (2 h), and stirrer speed (120 rpm) were fixed to investigate the influence of temperature on adsorption capacity, percentage of dye removal, and free energy. A central composite design (CCD) was successfully employed for experimental design and analysis of the results. The combined effect of temperature, absorbance, and concentration on dye adsorption was studied and optimized using response surface methodology. The optimum values of adsorption capacity, percentage of dye removal, and free energy were found to be 0.965 mg/g, 93.38%, and -8202.7 J/mol, respectively, at a temperature of 55.97 °C, with desirability > 90% for the removal of crystal violet dye. The experimental values were in good agreement with the predicted values.
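As a hedged illustration of how a fitted response surface yields an optimum temperature, the sketch below fits a one-factor quadratic model by least squares and locates its stationary point. The data points are synthetic placeholders, not the study's measurements, and the actual CCD model also includes absorbance and concentration terms:

```python
import numpy as np

# Hypothetical one-factor response surface y = b0 + b1*T + b2*T^2,
# fitted by ordinary least squares to synthetic (illustrative) data.
T = np.array([25.0, 35.0, 45.0, 55.0, 65.0])   # temperature, deg C
y = np.array([0.70, 0.82, 0.91, 0.96, 0.93])   # adsorption capacity, mg/g

X = np.column_stack([np.ones_like(T), T, T**2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted parabola: dy/dT = 0  =>  T* = -b1 / (2*b2)
T_opt = -b1 / (2.0 * b2)
print(round(T_opt, 1))   # optimum near 57 degC for this synthetic data
```

With two or three factors, the same idea generalizes to solving the gradient of the fitted quadratic surface for zero, which is what RSM software does behind the desirability analysis.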

Keywords: crystal violet dye, CVD, castor leaf powder, CLP, response surface methodology, temperature, optimization

Procedia PDF Downloads 103
18229 Issues of Time's Urgency and Ritual in Children's Picture Books: A Closer Look at the Contributions of Grandparents

Authors: Karen Armstrong

Abstract:

Although invisible and fleeting, time is an essential variable in perception. Ritual is proposed as an antithesis to the passage of time, a way of linking our narratives with the past, present, and future. This qualitative exploration examines a variety of award-winning twentieth-century children’s picture books, specifically regarding the issues of time’s urgency and ritual with respect to children and grandparents. The paper will begin with a consideration of issues of time from the field of psychology with regard to age, specifically contrasting later age and childhood. Next, the value of ritual as represented by the presence of grandparents in children’s books will be considered. Specific instances of the contributions of grandparents or older adults with regard to this balancing function between time’s urgency and ritual will be discussed. Recommendations for future research include a consideration of depictions of grandparents or older characters in books for older children.

Keywords: children's picture books, grandparents, ritual, time

Procedia PDF Downloads 280
18228 Evaluating the Success of an Intervention Course in a South African Engineering Programme

Authors: Alessandra Chiara Maraschin, Estelle Trengove

Abstract:

In South Africa, only 23% of engineering students attain their degrees in the minimum time of four years. This begs the question: why is the four-year throughput rate so low? Improving the throughput rate is crucial in guiding students along the shortest possible path to completion. The Electrical Engineering programme has a fixed curriculum, and students must pass all courses in order to graduate. In South Africa, as in several other countries, many students rely on external funding such as bursaries from companies in industry. If students fail a course, they often lose their bursaries, and most might not be able to fund their 'repeating year' fees. It is thus important to improve the throughput rate, since for many students, graduating from university is a way out of poverty for an entire family. In Electrical Engineering, the Software Development I course (an introduction to C++ programming) has been found to be a significant hurdle course with a low pass rate. It is well documented that students struggle with this type of course, as it introduces a number of new threshold concepts that can be challenging to grasp in a short time frame. In an attempt to mitigate this situation, a part-time night school for Software Development I was introduced in 2015 as an intervention measure. The night-school course includes all the course material from the Software Development I module and gives students who failed the course in the first semester a second chance by repeating it. The purpose of this study is to determine whether the introduction of this intervention course can be considered a success. The success of the intervention is assessed in two ways. The study first looks at whether the night-school course contributed to improving the pass rate of the Software Development I course.
Secondly, the study examines whether the intervention contributed to improving the overall throughput from the 2nd year to the 3rd year of study at a South African university. Second-year academic results for a sample of 1216 students were collected for 2010-2017. Preliminary results show that the lowest pass rate for Software Development I, 34.9%, was found in 2017. Since the intervention course's inception, the pass rate for Software Development I increased each year from 2015 to 2017, by 13.75%, 25.53%, and 25.81%, respectively. To conclude, the preliminary results show that the intervention course is a success in improving the pass rate of Software Development I.

Keywords: academic performance, electrical engineering, engineering education, intervention course, low pass rate, software development course, throughput

Procedia PDF Downloads 142
18227 The Functional Rehabilitation of Peri-Implant Tissue Defects: A Case Report

Authors: Özgür Öztürk, Cumhur Sipahi, Hande Yeşil

Abstract:

Implant-retained restorations commonly consist of a metal framework veneered with ceramic or composite facings. The increasing and expanding use of indirect resin composites in dentistry is a result of innovations in materials and processing techniques. Of special interest to the implant restorative field is the possibility that composites transmit significantly lower peak vertical and transverse forces at the peri-implant level compared to metal-ceramic suprastructures in implant-supported restorations. A 43-year-old male patient was referred to the department of prosthodontics for an implant-retained fixed prosthesis. Clinical and radiographic examination of the patient demonstrated the presence of an implant in the right mandibular first molar region. A considerable amount of marginal bone loss around the implant was detected in radiographic examinations, combined with a remarkable peri-implant soft tissue deficiency. To minimize the chewing loads transmitted to the implant-bone interface, it was decided to fabricate a single metal crown veneered with indirect composite resin over a screw-retained abutment. At the end of the treatment, the functional and aesthetic deficiencies were fully compensated. After a 6-month clinical and radiographic follow-up period, no additional pathologic involvement was detected at the implant-bone interface, and the implant-retained restoration did not reveal any significant complication.

Keywords: dental implant, fixed partial dentures, indirect composite resin, peri-implant defects

Procedia PDF Downloads 239
18226 Real-Time Radiological Monitoring of the Atmosphere Using an Autonomous Aerosol Sampler

Authors: Miroslav Hyza, Petr Rulik, Vojtech Bednar, Jan Sury

Abstract:

Early and reliable detection of an increased radioactivity level in the atmosphere is one of the key aspects of atmospheric radiological monitoring. Although standard laboratory procedures provide detection limits as low as a few µBq/m³, their major drawback is delayed reporting of results: typically a few days. This issue is the main objective of the HAMRAD project, which gave rise to a prototype of an autonomous monitoring device. It is based on the idea of sequential aerosol sampling using a carousel sample changer combined with a gamma-ray spectrometer. In our hardware configuration, the air is drawn through a filter positioned on the carousel so that it can be rotated into the measuring position after a preset sampling interval. Filter analysis is performed via an HPGe detector of 50% relative efficiency inside 8.5 cm lead shielding. The spectrometer output signal is then analyzed using DSP electronics and Gamwin software with preset nuclide libraries and other analysis parameters. After the counting, the filter is placed into a storage bin with a capacity of 250 filters, so that the device can run autonomously for several months, depending on the preset sampling frequency. The device is connected to a central server via GPRS/GSM, where the user can view monitoring data, including raw spectra and technological data describing the state of the device. All operating parameters can be remotely adjusted through a simple GUI. The flow rate is continuously adjustable up to 10 m³/h. The main challenge in spectrum analysis is natural background subtraction. As detection limits are heavily influenced by the deposited activity of radon decay products and the measurement time is fixed, there must exist an optimal sample decay time (delayed spectrum acquisition). To solve this problem, we adopted a simple procedure based on sequential spectrum acquisition and an optimal partial spectral sum with respect to the detection limits for a particular radionuclide.
The prototype device proved able to detect atmospheric contamination at the level of mBq/m³ per 8 h of sampling.
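The "optimal partial spectral sum" idea can be sketched as follows: sequential spectra are summed starting from spectrum k, and k is chosen to minimize a Currie-style detection limit as the short-lived radon-progeny background decays away. All numbers below (background level, decay constant, counting scheme) are illustrative assumptions, not the HAMRAD device's parameters:

```python
import math

# Illustrative model: eight sequential 1-hour spectra; the background has a
# constant component plus a radon-progeny component decaying with an
# effective 0.75 h half-life (both values made up for illustration).
T_half_bkg = 0.75
lam = math.log(2) / T_half_bkg
B_const, B0 = 50.0, 5000.0

n_spectra = 8
bkg = [B_const + B0 * math.exp(-lam * t) for t in range(n_spectra)]

def detection_limit(k):
    """Relative Currie-style limit for the partial sum of spectra k..n-1:
    background counts drive the limit up, counting time drives it down."""
    B = sum(bkg[k:])       # background counts in the partial sum
    t = n_spectra - k      # hours of data actually used
    return (2.71 + 4.65 * math.sqrt(B)) / t

# Starting too early includes heavy radon background; starting too late
# wastes counting time. An interior optimum balances the two.
best_k = min(range(n_spectra - 1), key=detection_limit)
print(best_k, round(detection_limit(best_k), 1))
```

The same scan over partial sums, done per radionuclide with its real efficiency and branching ratios, is what selecting the "optimal sample decay time" amounts to.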

Keywords: aerosols, atmosphere, atmospheric radioactivity monitoring, autonomous sampler

Procedia PDF Downloads 124
18225 Productive Safety Net Program and Rural Livelihood in Ethiopia

Authors: Desta Brhanu Gebrehiwot

Abstract:

The purpose of this review was to analyze the overall, or combined, effect of scholarly studies on the impacts of Food for Work (FFW) and the Productive Safety Net Program (PSNP) on farm households' livelihoods in Ethiopia (agricultural investment in fertilizer adoption, food security, livestock holding, nutrition, and the program's disincentive effect). In addition to making a critical assessment of the internal and external validity of the existing studies, the review also indicates the possibility of redesigning the program. Eligible studies were selected for review using the PICOS (Participants, Intervention, Comparison, Outcomes, and Settings) framework. The method of analysis was the fixed-effects model under meta-analysis. The findings of this systematic review confirm the combined positive and significant impact of PSNP on fertilizer adoption (combined point estimate = 0.015, standard error = 0.005, variance = 0.000, lower limit = 0.004, upper limit = 0.026, z-value = 2.726, and p-value = 0.006). The program also had a significant positive impact on child nutrition in rural households and had no significant disincentive effect. However, the program had no significant impact on livestock holdings. Thus, PSNP is important for households whose livelihoods depend on rain-fed agriculture and who are exposed to rainfall shocks, and it would be better to integrate the program into the national agricultural policy. In addition, most of the studies suggested that PSNP needs more attention to design and targeting issues in order to be effective and efficient in social protection.
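The fixed-effects pooling behind a combined point estimate like the one reported above is inverse-variance weighting. The sketch below shows the computation on made-up study estimates, not the actual PSNP studies in the review:

```python
import math

# Inverse-variance fixed-effect meta-analysis (illustrative numbers only):
# each study i contributes weight w_i = 1 / SE_i^2.
estimates = [0.012, 0.020, 0.010]   # per-study effect sizes (hypothetical)
std_errs  = [0.008, 0.009, 0.007]   # per-study standard errors (hypothetical)

weights = [1.0 / se**2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))   # SE of the combined estimate

z = pooled / se_pooled                      # z-value for the pooled effect
ci_low = pooled - 1.96 * se_pooled          # 95% confidence interval
ci_high = pooled + 1.96 * se_pooled
print(round(pooled, 4), round(z, 2))
```

The fixed-effects model assumes all studies estimate one common effect; if between-study heterogeneity were substantial, a random-effects model would add a between-study variance term to each weight.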

Keywords: meta-analysis, fixed effect model, PSNP, rural-livelihood, Ethiopia

Procedia PDF Downloads 45
18224 A New Study on Mathematical Modelling of COVID-19 with Caputo Fractional Derivative

Authors: Sadia Arshad

Abstract:

The new coronavirus disease, COVID-19, still poses an alarming situation around the world. Modeling based on derivatives of fractional order is particularly useful for capturing real-world problems and analyzing the realistic behavior of the proposed model. We proposed a mathematical model for the investigation of COVID-19 dynamics in a generalized fractional framework. The new model is formulated in the Caputo sense and employs a nonlinear time-varying transmission rate. The existence and uniqueness of solutions of the fractional-order model have been studied using fixed-point theory. The associated dynamical behaviors are discussed in terms of equilibrium, stability, and the basic reproduction number. For the purpose of numerical implementation, an efficient approximation scheme is employed to solve the fractional COVID-19 model. Numerical simulations are reported for various fractional orders, and the simulation results are compared with a real case of the COVID-19 pandemic. According to the comparative results with real data, we find the best value of the fractional order and justify the use of the fractional concept in mathematical modelling, as the new fractional model simulates reality more accurately than other classical frameworks.
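For reference, the Caputo fractional derivative used in such formulations is conventionally defined as follows (a standard textbook definition, not a formula quoted from this paper):

```latex
% Caputo fractional derivative of order 0 < \alpha < 1
{}^{C}D_{t}^{\alpha} f(t)
  = \frac{1}{\Gamma(1-\alpha)}
    \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}} \, d\tau ,
\qquad 0 < \alpha < 1 .
```

As α → 1, this operator reduces to the ordinary first derivative f'(t), which is why a fractional model generalizes the classical integer-order frameworks and can be tuned, via α, to fit the data more closely.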

Keywords: fractional calculus, modeling, stability, numerical solution

Procedia PDF Downloads 83
18223 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to its over-the-counter nature, the corporate bond market has relatively little publicly available data and is thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory networks for the prediction of corporate bond prices is discussed. Long short-term memory (LSTM) networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, thanks to a memory mechanism that traditional neural networks lack.
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM model are discussed with respect to varying input sequences (three days, seven days, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which has resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with a traditional time-series model (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored more within the asset management industry.
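The gating mechanism that gives LSTMs their memory can be shown in a minimal NumPy forward pass over a 7-day input window, matching one of the sequence lengths above. The weights here are random placeholders, not a trained bond-price model:

```python
import numpy as np

# Minimal single-cell LSTM forward pass. The forget gate f decides how much
# of the memory cell c to keep, which is what lets gradients (and hence
# dependencies) persist over long sequences.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid = 3, 4   # e.g. 3 features per day, 4 hidden units (illustrative)
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input/forget/output gates
    g = np.tanh(g)                                 # candidate cell update
    c_new = f * c + i * g                          # memory cell carries state
    h_new = o * np.tanh(c_new)                     # exposed hidden state
    return h_new, c_new

# Run a 7-day window of random "features" through the cell.
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(7):
    h, c = lstm_step(rng.standard_normal(n_in), h, c)
print(h.shape)
```

A stacked LSTM feeds each step's h into a second cell, and a masked LSTM additionally skips padded time steps; both are straightforward extensions of this single-cell step.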

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 109
18222 Optimizing Nature Protection and Tourism in Urban Parks

Authors: Milena Lakicevic

Abstract:

The paper deals with the problem of optimizing management options for urban parks under different scenarios of the importance of nature protection and tourism. The procedure is demonstrated on a case study of urban parks in Novi Sad (Serbia). Six management strategies for the selected area were processed with the decision support method PROMETHEE. Two criteria were used for the evaluation, nature protection and tourism, and each was divided into a set of indicators: for nature protection, biodiversity and preservation of the original landscape; for tourism, recreation potential, aesthetic values, accessibility, and cultural features. It was assumed that each indicator in a set is equally important to the corresponding criterion. This way, the research was focused on a sensitivity analysis of the criteria weights. In other words, the weights of the indicators were fixed while the weights of the criteria were varied along the entire scale (from 0 to 1), and the assessment was performed in a two-dimensional setting. As a result, one can conclude which management strategy is the most appropriate as the importance of the criteria changes. The final ranking of management alternatives was followed by investigating the mean PROMETHEE Φ values for all options considered while altering the importance of nature protection versus tourism. This type of analysis enabled detecting an alternative with solid performance along the entire scale, i.e., regardless of criteria importance. That management strategy can be seen as a compromise solution when the weights of the criteria are not defined. In conclusion, it can be said that, in some cases, instead of keeping criteria importance fixed, it is important to test the outputs under different schemes of criteria weighting.
The research demonstrates the final decision both when the decision maker can estimate the importance of the criteria and when the importance of the criteria is not established or known.
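The weight-sweep idea above can be sketched with a PROMETHEE II net-flow (Φ) computation using the usual (strict-preference) function, varying the nature-protection weight w from 0 to 1 with the tourism weight set to 1 − w. The strategy scores below are invented for illustration, not the Novi Sad study's data:

```python
import numpy as np

# Illustrative scores of three management strategies on the two criteria.
scores = np.array([
    # [nature protection, tourism]
    [0.9, 0.3],   # strategy A: protection-oriented
    [0.5, 0.6],   # strategy B: balanced
    [0.2, 0.9],   # strategy C: tourism-oriented
])
n = len(scores)

def net_flows(weights):
    """PROMETHEE II net flow Phi with the 'usual' preference function:
    preference is 1 on a criterion where an alternative is strictly better."""
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = np.where(scores[a] > scores[b], 1.0, 0.0) @ weights
            pref_ba = np.where(scores[b] > scores[a], 1.0, 0.0) @ weights
            phi[a] += (pref_ab - pref_ba) / (n - 1)
    return phi

# Sweep the criteria weights along the entire scale, as in the paper.
for w in (0.0, 0.5, 1.0):
    print(w, np.round(net_flows(np.array([w, 1.0 - w])), 2))
```

Plotting Φ for each strategy against w makes the "compromise" alternative visible as the curve that stays near the top across the whole weight range.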

Keywords: criteria weights, PROMETHEE, sensitivity analysis, urban parks

Procedia PDF Downloads 164
18221 The Quantity and Quality of Teacher Talking Time in EFL Classroom

Authors: Hanan Abufares Elkhimry

Abstract:

Looking for more effective teaching and learning approaches, teaching instructors have been telling trainee teachers to decrease their talking time, but the problem is how best to do this. Doing classroom research, specifically in the area of teacher talking time (TTT), is worthwhile, as it could improve the quality of language teaching: the learners are the ones who should be practicing and using the language. This work hopes to ascertain whether teachers consider this need in a way that provides students with opportunities to increase their production of language. This is a question worth answering. Many researchers have found that TTT should be decreased to 30% of classroom talking time and that student talking time (STT) should be increased up to 70%. Other researchers agree, but add that this should be done with awareness of the quality of teacher talking time. Therefore, this study investigates the balance between the quantity and quality of teacher talking time in the EFL classroom. For this piece of research, the amount of talking time was measured in four classrooms, and a checklist was used to assess the quality of the talking time. In conclusion, the results showed that, provided the quality of TTT is improved, teachers may use more or less than 30% of the classroom talking time and still produce a successful classroom learning experience. In addition, an important factor that can affect TTT is the English level of the students. This was clear in the classroom observations, where the highest TTT recorded was with the lowest English level group.

Keywords: teacher talking time TTT, learning experience, classroom research, effective teaching

Procedia PDF Downloads 390
18220 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete

Authors: Farzad Danaei, Yilmaz Akkaya

Abstract:

In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variation of these properties under site conditions. Therefore, the specific heat capacity and the thermal conductivity coefficient are two variables that are treated as constant values in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take fixed values for these two thermal properties. The study covers seven different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that the maximum temperature does not change as a result of a constant conductivity coefficient, but variable specific heat capacity must be taken into account; regarding the time at which the concrete's central node reaches its maximum temperature, variable specific heat capacity can also have a considerable effect on the final result. Also, the use of GGBFS has more influence than fly ash.
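A minimal sketch of why a hydration-dependent specific heat matters: below, the adiabatic temperature rise of a hardening mix is integrated once with a constant specific heat and once with a linearly decreasing c(α) = c₀ − kα, as the abstract proposes. All material values are illustrative placeholders, not the study's mix designs:

```python
# Adiabatic temperature rise with constant vs hydration-dependent specific
# heat. Each increment of hydration d_alpha releases heat dQ = Q_tot*d_alpha,
# raising temperature by dT = dQ / (rho * c). All numbers are illustrative.
rho = 2400.0            # density, kg/m^3
Q_tot = 8.0e7           # total hydration heat per unit volume, J/m^3
c0, k = 1000.0, 150.0   # c(alpha) = c0 - k*alpha, J/(kg*K) (assumed linear)

def temp_rise(variable_c, steps=1000):
    T, d_alpha = 20.0, 1.0 / steps
    for i in range(steps):
        alpha = (i + 0.5) * d_alpha              # midpoint degree of hydration
        c = c0 - k * alpha if variable_c else c0
        T += Q_tot * d_alpha / (rho * c)
    return T

T_const = temp_rise(False)   # fixed specific heat
T_var = temp_rise(True)      # linearly decreasing specific heat
print(round(T_const, 1), round(T_var, 1))
```

Because c decreases as hydration proceeds, the same released heat produces a larger temperature rise late in the process, so the variable-c prediction runs hotter than the fixed-value one, consistent with the conclusion that specific heat variation cannot be neglected.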

Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient

Procedia PDF Downloads 51
18219 Catalytic Soot Gasification in Single and Mixed Atmospheres of CO2 and H2O in the Presence of CO and H2

Authors: Yeidy Sorani Montenegro Camacho, Samir Bensaid, Nunzio Russo, Debora Fino

Abstract:

LiFeO2 nano-powders were prepared via the solution combustion synthesis (SCS) method and used as a carbon gasification catalyst in a reducing atmosphere. The gasification of soot with CO2 and H2O in the presence of CO and H2 (syngas atmosphere) was investigated under atmospheric conditions using a fixed-bed micro-reactor placed in an electric, PID-regulated oven. The catalytic bed was composed of 150 mg of inert silica, 45 mg of carbon (Printex-U), and 5 mg of catalyst. The bed was prepared by ball milling the mixture at 240 rpm for 15 min to obtain intimate contact between the catalyst and the soot. A gas hourly space velocity (GHSV) of 38,000 h⁻¹ was used for the test campaign. The furnace was heated up to the desired temperature, a flow of 120 mL/min was sent into the system, and at the same time, the concentrations of CO, CO2, and H2 were recorded at the reactor outlet using an EMERSON X-STREAM XEGP analyzer. Catalytic and non-catalytic soot gasification reactions were studied in a temperature range of 120-850 °C with a heating rate of 5 °C/min (non-isothermal case) and at 650 °C for 40 minutes (isothermal case). The experimental results show that the gasification of soot with H2O and CO2 is inhibited by H2 and CO, respectively. The soot conversion at 650 °C decreases from 70.2% to 31.6% when CO is present in the feed. Likewise, the soot conversion was 73.1% and 48.6% for the H2O-soot and H2O-H2-soot gasification reactions, respectively. It was also observed that in a mixed atmosphere, i.e., when simultaneous carbon gasification with CO2 and steam takes place with H2 and CO as co-reagents, the gasification reaction is strongly inhibited by CO and H2, as was observed in the single atmospheres for both the isothermal and non-isothermal reactions.
Further, it was observed that when CO2 and H2O react with carbon at the same time, there is a passive cooperation of steam and carbon dioxide in the gasification reaction; this means that the two gases operate on separate active sites without influencing each other. Finally, despite the extremely reduced operating conditions, it was demonstrated that 32.9% of the initial carbon was gasified using the LiFeO2 catalyst, while in the non-catalytic case, only 8% of the soot was gasified at 650 °C.

Keywords: soot gasification, nanostructured catalyst, reducing environment, syngas

Procedia PDF Downloads 234
18218 Modeling and Simulation Methods Using MATLAB/Simulink

Authors: Jamuna Konda, Umamaheswara Reddy Karumuri, Sriramya Muthugi, Varun Pishati, Ravi Shakya

Abstract:

This paper investigates the challenges involved in the mathematical modeling of plant simulation models, ensuring that the performance of the plant models is as close as possible to that of the real-time physical model. The paper includes the analysis performed and an investigation of different methods of modeling, design, and development for the plant model. Issues which impact the design time, the model's accuracy as a real-time model, and tool dependence are analyzed. The real-time hardware plant would be a combination of multiple physical models, and it is challenging to test the complete system with all possible test scenarios. There are also possibilities of failure or damage to the system due to unwanted test execution in real time.

Keywords: model based design (MBD), MATLAB, Simulink, stateflow, plant model, real time model, real-time workshop (RTW), target language compiler (TLC)

Procedia PDF Downloads 326
18217 A Minimally Invasive Approach Using Bio-Miniatures Implant System for Full Arch Rehabilitation

Authors: Omid Allan

Abstract:

The advent of ultra-narrow-diameter implants initially offered an alternative to wider conventional implants. However, their design limitations have restricted their applicability primarily to overdentures and cement-retained fixed prostheses, often with unpredictable long-term outcomes. The introduction of the new miniature implants has revolutionized the field of implant dentistry, leading to a more streamlined approach. The utilization of miniature implants has emerged as a promising alternative to the traditional approach, which entails traumatic sequential bone drilling procedures and the use of conventional implants for full- and partial-arch restorations. The innovative BioMiniatures Implant System serves as a groundbreaking bridge connecting mini implants with standard implant systems. This system allows practitioners to harness the advantages of ultra-small implants, enabling minimally invasive insertion and facilitating the application of fixed screw-retained prostheses that were previously available only to conventional wider implant systems. This approach streamlines full- and partial-arch rehabilitation with minimal or even no bone drilling, significantly reducing surgical risks and complications for clinicians while minimizing patient morbidity. The ultra-narrow diameter and self-advancing features of these implants eliminate the need for invasive and technically complex procedures such as bone augmentation and guided bone regeneration (GBR), particularly in cases involving thin alveolar ridges. Furthermore, the absence of a micro-gap between the implant and the abutment eliminates the potential for micro-leakage and micro-pumping effects, effectively mitigating the risk of marginal bone loss and future peri-implantitis. The cumulative experience of restoring over 50 full- and partial-arch edentulous cases with this system has yielded an outstanding success rate exceeding 97%.
The long-term success with a stable marginal bone level in this study firmly establishes these implants as a dependable alternative to conventional implants, especially for full-arch rehabilitation cases. Full-arch rehabilitation with these implants holds the promise of a simplified solution for edentulous patients, who typically present with atrophic narrow alveolar ridges, eliminating the need for extensive GBR and bone augmentation to restore their dentition with fixed prostheses.

Keywords: mini-implant, biominiatures, miniature implants, minimally invasive dentistry, full arch rehabilitation

Procedia PDF Downloads 47
18216 Information Technology (IT) Outsourcing and the Challenges of Implementation in Financial Industries: A Case Study of Guarantee Trust Assurance PLC

Authors: Salim Ahmad, Ahamed Sani Kazaure, Haruna Musa

Abstract:

Outsourcing is a contractual relationship in which responsibility for a function or task is handed over to an outside firm for a fixed period of time. It differs from contracting, where a specific one-off task is allocated to an external business. In information technology, a specialist area such as the maintenance of web servers may therefore be controlled by an outside firm, or, if the department is not a critical factor, the whole IT section may be outsourced. Contracts are frequently a major factor in a successful outsourcing relationship: they specify the rights, liabilities, and expectations of the vendor, are mostly of high value, and last a long time. In this research, one particular project outsourced by a financial industry firm (Guarantee Trust Assurance PLC) is discussed, along with the approach used and the various problems encountered. Outsourcing is not necessarily a perfect and easy way out for a business; it is extremely critical for a company to look at all aspects of outsourcing before deciding to use it as an instrument for development. Moreover, a critical analysis of the management issues encountered while implementing the outsourcing project is fully discussed in the paper.

Keywords: outsourcing, techniques used in outsourcing, challenges of outsourcing implementation, management issues during implementation of outsourcing project

Procedia PDF Downloads 354
18215 Effect of Gas Boundary Layer on the Stability of a Radially Expanding Liquid Sheet

Authors: Soumya Kedia, Puja Agarwala, Mahesh Tirumkudulu

Abstract:

Linear stability analysis is performed for a radially expanding liquid sheet in the presence of a gas medium. A liquid sheet can break up because of the aerodynamic effect as well as its thinning. However, these two effects are usually studied separately, as the combined formulation becomes complicated and difficult to solve. The present work combines the aerodynamic and thinning effects, ignoring non-linearity in the system. This is done by taking into account the formation of the gas boundary layer while neglecting viscosity in the liquid phase. Axisymmetric flow is assumed for simplicity. The base-state analysis results in a Blasius-type system which can be solved numerically. Perturbation theory is then applied to study the stability of the liquid sheet, where the gas-liquid interface is subjected to small deformations. The linear model derived here can be applied to investigate the instability of sinuous as well as varicose modes, where the former represents displacement of the centerline of the sheet and the latter represents modulation of the sheet thickness. Temporal instability analysis is performed for sinuous modes, which are significantly more unstable than varicose modes, at a fixed radial distance, implying a local stability analysis. The growth rates at fixed wavenumbers predicted by the present model are significantly lower than those obtained from the inviscid Kelvin-Helmholtz instability and compare better with experimental results. Thus, the present theory gives better insight into the stability of a thin liquid sheet.
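The base-state analysis above is said to reduce to a Blasius-type system solvable numerically. As a hedged illustration (not the authors' code), the classic flat-plate Blasius equation f''' + 0.5·f·f'' = 0, with f(0) = f'(0) = 0 and f'(∞) = 1, can be solved by a shooting method on the unknown wall value f''(0):

```python
# Minimal shooting-method sketch for a Blasius-type boundary-layer system
# (illustrative only; the paper's system differs in detail).

def rk4_blasius(s, eta_max=10.0, n=2000):
    """Integrate [f, f', f''] from 0 to eta_max with f''(0) = s; return f'(eta_max)."""
    h = eta_max / n
    y = [0.0, 0.0, s]
    def deriv(y):
        return [y[1], y[2], -0.5 * y[0] * y[2]]
    for _ in range(n):
        k1 = deriv(y)
        k2 = deriv([y[i] + 0.5 * h * k1[i] for i in range(3)])
        k3 = deriv([y[i] + 0.5 * h * k2[i] for i in range(3)])
        k4 = deriv([y[i] + h * k3[i] for i in range(3)])
        y = [y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3)]
    return y[1]  # f' at the far edge; should approach 1 for the right s

def shoot(lo=0.1, hi=1.0, tol=1e-10):
    """Bisect on the wall shear f''(0) until the far-field condition f'(inf) = 1 holds."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rk4_blasius(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(shoot(), 5))  # classic value f''(0) ≈ 0.33206
```

The bisection works because f'(∞) increases monotonically with the guessed wall shear.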

Keywords: boundary layer, gas-liquid interface, linear stability, thin liquid sheet

Procedia PDF Downloads 204
18214 Structured Access Control Mechanism for Mesh-based P2P Live Streaming Systems

Authors: Chuan-Ching Sue, Kai-Chun Chuang

Abstract:

Peer-to-Peer (P2P) live streaming systems still face a challenge when thousands of new peers want to join the system in a short time, a situation called a flash crowd, in which most new peers suffer a long start-up delay. Recent studies have proposed a slot-based user access control mechanism, which periodically admits a certain number of new peers into the system, and a user batch join mechanism, which divides new peers into several tree structures of fixed size. However, with the slot-based mechanism it is difficult to accurately determine the optimal time slot length, and with the batch join mechanism it is hard to determine the optimal tree size. In this paper, we propose a structured access control (SAC) mechanism, which organizes new peers into a multi-layer mesh structure. The SAC mechanism constructs new peer connections layer by layer, replacing periodic access control, and determines the number of peers in each layer according to the system's remaining upload bandwidth and the average video rate. Furthermore, we propose an analytical model to represent the growth of the system when it utilizes the upload bandwidth efficiently. The analytical results show a trend in system growth similar to that of the SAC mechanism. Additionally, extensive simulations are conducted to show that the SAC mechanism outperforms two previously proposed methods in terms of system growth and start-up delay.
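The layer-sizing idea described above can be sketched as follows. This is a hypothetical model for illustration only, not the paper's actual SAC algorithm: each new layer admits as many peers as the remaining upload bandwidth can serve at the average video rate, and admitted peers then contribute their own upload capacity to serve the next layer.

```python
# Hypothetical sketch of layer-by-layer admission in a multi-layer mesh.

def sac_layer_sizes(remaining_upload, video_rate, peer_upload, new_peers):
    """Return the number of peers admitted in each layer (illustrative model)."""
    layers = []
    while new_peers > 0 and remaining_upload >= video_rate:
        admitted = min(new_peers, int(remaining_upload // video_rate))
        layers.append(admitted)
        new_peers -= admitted
        # admitted peers now serve the next layer with their own upload capacity
        remaining_upload = admitted * peer_upload
    return layers

# e.g., 10 Mbps spare capacity, 1 Mbps video, each peer uploads 2 Mbps
print(sac_layer_sizes(remaining_upload=10, video_rate=1, peer_upload=2, new_peers=100))
# → [10, 20, 40, 30]
```

The exponential growth of the early layers mirrors why a mesh admission scheme can absorb a flash crowd faster than fixed time slots.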

Keywords: peer-to-peer, live video streaming system, flash crowd, start-up delay, access control

Procedia PDF Downloads 296
18213 Multiple Positive Solutions for Boundary Value Problem of Nonlinear Fractional Differential Equation

Authors: A. Guezane-Lakoud, S. Bensebaa

Abstract:

In this paper, we study a boundary value problem for a nonlinear fractional differential equation. Existence and positivity results for solutions are obtained.
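The Banach contraction principle named in the keywords guarantees a unique fixed point for a contraction mapping and that successive iterates converge to it. A minimal numerical illustration (not from the paper): T(x) = cos(x) is a contraction on [0, 1], with fixed point x* ≈ 0.739085 (the Dottie number).

```python
import math

# Fixed-point iteration illustrating the Banach contraction principle.

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x -> T(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(round(fixed_point(math.cos, 0.5), 6))  # → 0.739085
```

Convergence holds because |cos'(x)| = |sin(x)| ≤ sin(1) < 1 on [0, 1].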

Keywords: positive solution, fractional caputo derivative, Banach contraction principle, Avery and Peterson fixed point theorem

Procedia PDF Downloads 386
18212 A Review of the Drawbacks of Current Fixed Connection Façade Systems, Non-Structural Standards, and Ways of Integrating Movable Façade Technology into Buildings

Authors: P. Abtahi, B. Samali

Abstract:

Façade panels of various shapes, weights, and connections usually act as a barrier between the indoor and outdoor environments. They also play a major role in enhancing the aesthetics of building structures. They are attached by different types of connections to the primary structure, or to the inner panels in double-skin façades. Structural buildings designed to withstand seismic shocks have been undergoing a critical appraisal in recent years, with the emphasis changing from ‘strength’ to ‘performance’. Performance-based design and analysis have found their way into the research, development, and practice of earthquake engineering, particularly after the 1994 Northridge and 1995 Kobe earthquakes. The performance design of façades as non-structural elements has so far focused mainly on evaluating the damage sustained by façade frames with fixed connections, not movable ones. This paper reviews current design standards for structural buildings, including the performance of structural and non-structural components during earthquake excitations, in order to overview and evaluate the damage assessment and behaviour of various façade systems in building structures during seismic activity. The proposed solutions for each façade system are discussed case by case to evaluate their potential for incorporation with newly designed connections. Finally, double-skin façade systems can potentially be combined with movable façade technology, although other glazing systems would require minor to major changes in their design before being integrated into such a system.

Keywords: building performance, earthquake engineering, glazing system, movable façade technology

Procedia PDF Downloads 522
18211 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression

Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin

Abstract:

This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. Opponents, on the other hand, argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data of 2013 to 2016 used in this study are collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The result also shows that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. This indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. 
This study tries to deal with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies indicates that the hedonic price of certain environmental assets varies spatially when GWR is applied. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias their results. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, as the effect of flood prevention might vary dramatically by location.
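The core GWR idea can be sketched in a few lines. This is illustrative only; the study's actual model adds spatial fixed effects and many covariates. At each target location a weighted least-squares fit is computed in which nearer observations receive larger Gaussian kernel weights, so the regression coefficient is allowed to vary over space:

```python
import math

# Minimal single-predictor GWR sketch (toy data, hypothetical bandwidth).

def gwr_coefficient(target, coords, x, y, bandwidth):
    """Local slope of y on x at `target`, using a Gaussian distance kernel."""
    w = [math.exp(-0.5 * (math.dist(target, c) / bandwidth) ** 2) for c in coords]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

# toy data: the x -> y slope is +2 in the "west" cluster and -1 in the "east"
coords = [(0, 0), (0, 1), (1, 0), (9, 0), (9, 1), (10, 0)]
x = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0, -1.0, -2.0, -3.0]
print(round(gwr_coefficient((0, 0), coords, x, y, bandwidth=2.0), 2))   # near +2
print(round(gwr_coefficient((10, 0), coords, x, y, bandwidth=2.0), 2))  # near -1
```

A global OLS fit on the same data would return a single slope, hiding exactly the kind of spatial variation the study reports.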

Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression

Procedia PDF Downloads 268
18210 The Potential of On-Demand Shuttle Services to Reduce Private Car Use

Authors: B. Mack, K. Tampe-Mai, E. Diesch

Abstract:

Findings of an ongoing discrete choice study of future transport mode choice will be presented. Many urban centers face the triple challenge of coping with ever-increasing traffic congestion, environmental pollution, and greenhouse gas emissions brought about by private car use. In principle, private car use may be diminished by extending public transport systems such as bus lines, trams, tubes, and trains. However, there are limits to increasing the (perceived) spatial and temporal flexibility and reducing the peak-time crowding of classical public transport systems. An emerging type of system, publicly or privately operated on-demand shuttle bus services, seems suitable to ameliorate the situation. A fleet of on-demand shuttle buses operates without fixed stops and schedules. It may be deployed efficiently in that each bus picks up passengers whose itineraries can be combined into an optimized route. Crowding may be minimized by limiting the number of seats and the inter-seat distance in each bus. The study is conducted as a discrete choice experiment. The choice between private car, public transport, and shuttle service is registered as a function of several push and pull factors (financial costs, travel time, walking distances, mobility tax/congestion charge, and waiting time/parking space search time). After completing the discrete choice items, each participant is asked to rate the three modes of transport with regard to the pull factors of comfort, safety, privacy, and the opportunity to engage in activities such as reading or surfing the internet. These ratings are entered as additional predictors into the discrete choice regression model. The study is conducted in the region of Stuttgart in southern Germany. N=1000 participants are being recruited. Participants are between 18 and 69 years of age, hold a driver’s license, and live in the city of Stuttgart or the surrounding region. 
In the discrete choice experiment, participants are asked to assume they lived within the Stuttgart region, but outside of the city, and were planning the journey from their apartment to their place of work, training, or education during the peak traffic time in the morning. Then, for each item of the discrete choice experiment, they are asked to choose between the transport modes of private car, public transport, and on-demand shuttle in the light of particular values of the push and pull factors studied. The study will provide valuable information on the potential of switching from private car use to the use of on-demand shuttles, but also on the less desirable potential of switching from public transport to on-demand shuttle services. Furthermore, information will be provided on the modulation of these switching potentials by pull and push factors.
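Discrete choice data of this kind are commonly analyzed with a multinomial logit model, in which each mode's systematic utility is a linear function of the push/pull attributes and choice probabilities follow a softmax. The sketch below is illustrative only: the coefficients and attribute values are made up, whereas the study estimates them from the participants' responses.

```python
import math

# Hedged multinomial-logit sketch for a three-mode choice set.

def mnl_probabilities(utilities):
    """Softmax over systematic utilities -> choice probabilities."""
    m = max(utilities.values())
    expu = {mode: math.exp(u - m) for mode, u in utilities.items()}
    total = sum(expu.values())
    return {mode: e / total for mode, e in expu.items()}

# utility = beta_cost * cost + beta_time * travel_time (hypothetical betas)
beta_cost, beta_time = -0.10, -0.05
attributes = {                      # (cost in EUR, door-to-door time in minutes)
    "private car": (8.0, 35),
    "public transport": (3.0, 50),
    "on-demand shuttle": (4.0, 40),
}
utilities = {m: beta_cost * c + beta_time * t for m, (c, t) in attributes.items()}
probs = mnl_probabilities(utilities)
print({m: round(p, 3) for m, p in probs.items()})
```

Push factors such as a congestion charge would enter the car's utility as an extra cost term, shifting probability mass toward the other modes.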

Keywords: determinants of travel mode choice, on-demand shuttle services, private car use, public transport

Procedia PDF Downloads 154
18209 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the grid, and their fluctuation and randomness are very likely to affect its stability and safety. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with this problem. Compared with traditional equipment such as generators, new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, offer more accurate control and respond faster, but they are too expensive to deploy widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to bring both kinds of advantages into play, improving both control capability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the grid. With generators, FSC (Fixed Series Compensation), and TCSC, the coordination between a two-layer regional power grid and its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. The analysis shows that the interface power flow can be controlled by the generators, while the power flow of a specific line between the two-layer regions can be adjusted by FSC and TCSC. 
The smaller the interface power flow adjusted by the generators, the bigger the control margin of the TCSC; on the other hand, the total consumption of the generators is much higher. Secondly, the coordination of different time scales is studied to balance the total generator consumption against the TCSC control margin so that the minimum control cost can be obtained. The coordination between two-layer ultra-short-term correction and AGC (Automatic Generation Control) is studied with generators, FSC, and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-space-time frame is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. The correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to the reduction of control cost and will provide a reference for future studies in this field.
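The paper states that a genetic algorithm is selected to solve the optimal control model. A minimal, hedged sketch of that solver class follows; the toy "control cost" c(x) = (x0 − 3)² + (x1 + 1)² is purely illustrative and stands in for the paper's actual generator-consumption / TCSC-margin trade-off.

```python
import random

# Minimal genetic algorithm (truncation selection, midpoint crossover,
# Gaussian mutation) minimizing a toy control-cost surface.

def cost(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def genetic_minimize(cost, dim=2, pop_size=40, generations=200,
                     bounds=(-10.0, 10.0), mutation=0.3, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(*bounds) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]            # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2.0 for ai, bi in zip(a, b)]      # crossover
            child = [g + rng.gauss(0.0, mutation) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = genetic_minimize(cost)
print([round(g, 2) for g in best])  # should approach [3.0, -1.0]
```

In the paper's setting the decision variables would instead be generator set-points and TCSC compensation levels, with the cost encoding consumption and margin terms.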

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 243
18208 Developmental Trends on Initial Letter Fluency in Typically Developing Children

Authors: Sunila John, B. Rajashekhar

Abstract:

Initial letter fluency tasks are one of the simplest behavioral measures for evaluating the complex nature of word retrieval ability. The task requires the participant to retrieve as many words as possible beginning with a particular letter in a fixed time frame. Though verbal fluency tasks are widely used with adult clinical populations, their role in children has received less attention, and an in-depth understanding of the processes underlying verbal fluency performance in typically developing children is lacking. The present study, therefore, aims to delineate the developmental trend on initial letter fluency tasks observed in typically developing Malayalam-speaking children. The participants were aged between 5 and 10 years and categorized into three groups: Group I (classes I and II, mean (SD) age in years: 6.44 (.78)), Group II (classes III and IV, mean (SD) age in years: 8.59 (.83)), and Group III (classes V and VI, mean (SD) age in years: 10.28 (.80)). The verbal fluency outcome measures were analyzed on two initial letter fluency tasks. The study findings revealed a distinct pattern of initial letter fluency development, which may enhance the task's usefulness in clinical and research settings.

Keywords: children, development, initial letter fluency, word retrieval

Procedia PDF Downloads 434
18207 Assessing the Effect of Waste-based Geopolymer on Asphalt Binders

Authors: Amani A. Saleh, Maram M. Saudy, Mohamed N. AbouZeid

Abstract:

Asphalt cement concrete is a very commonly used material in road construction. It has many advantages, such as ease of use and high user satisfaction in terms of comfort and safety on the road. However, asphalt cement concrete also has problems, such as its high carbon footprint, which makes it environmentally unfriendly. In addition, pavements require frequent maintenance, which can be very costly. The aim of this research is to study the effect of mixing waste-based geopolymers with asphalt binders. Geopolymer mixes were prepared by combining alumino-silicate sources such as fly ash, silica fume, and metakaolin with alkali activators. The purpose of mixing geopolymers with the asphalt binder is to enhance the rheological and microstructural properties of the asphalt. This was done in two phases: the first phase was developing an optimum mix design of the geopolymer additive itself, and the following phase was testing the geopolymer-modified asphalt binder after the addition of the optimum geopolymer mix. The testing of the modified binder is performed according to the Superpave testing procedures, which include the dynamic shear rheometer, to measure parameters related to rutting and fatigue cracking, and the rotational viscometer, to measure workability. In addition, the microstructural properties of the modified binder are studied using environmental scanning electron microscopy (ESEM). In the testing phase, the aim is to observe whether the addition of different geopolymer percentages to the asphalt binder will enhance the properties of the binder and yield desirable results. Furthermore, the tests on the geopolymer-modified binder were carried out at fixed time intervals; curing time was therefore the main parameter investigated in this research. 
It was observed that the addition of geopolymers improves the performance of the asphalt binder over time. It is worth mentioning that carbon emissions are expected to be reduced, since geopolymers are environmentally friendly materials that minimize carbon emissions and lead to a more sustainable environment. Additionally, the use of industrial by-products such as fly ash and silica fume is beneficial in that they are recycled into geopolymer production instead of accumulating in landfills.
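The dynamic shear rheometer results mentioned above are conventionally summarized by the Superpave binder parameters: the rutting parameter G*/sin(δ) (at least 1.0 kPa for unaged binder under the standard specification) and the fatigue-cracking parameter G*·sin(δ), where G* is the complex shear modulus and δ the phase angle. The sketch below uses these standard formulas with hypothetical readings, not data from the study.

```python
import math

# Standard Superpave DSR parameters (illustrative inputs, not study data).

def rutting_parameter(g_star_kpa, delta_deg):
    """G*/sin(delta): higher values indicate better rutting resistance."""
    return g_star_kpa / math.sin(math.radians(delta_deg))

def fatigue_parameter(g_star_kpa, delta_deg):
    """G*·sin(delta): lower values indicate better fatigue resistance."""
    return g_star_kpa * math.sin(math.radians(delta_deg))

# hypothetical unaged-binder reading at the high test temperature
g_star, delta = 1.3, 86.0      # kPa, degrees
print(round(rutting_parameter(g_star, delta), 3))   # kPa
print(rutting_parameter(g_star, delta) >= 1.0)      # meets the unaged criterion
```

Tracking these two parameters across geopolymer dosages and curing times is one way to quantify the performance gain the abstract reports.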

Keywords: geopolymer, rutting, superpave, fatigue cracking, sustainability, waste

Procedia PDF Downloads 103
18206 Modeling of Cold Tube Drawing with a Fixed Plug by Finite Element Method and Determination of Optimum Drawing Parameters

Authors: E. Yarar, E. A. Guven, S. Karabay

Abstract:

In this study, a comprehensive simulation was made of cold tube drawing with a fixed plug. The cold tube drawing process is preferred for its high surface quality and the high mechanical properties it produces. In drawing processes applied to materials with low plastic deformability, cracks can occur on the surfaces and process efficiency decreases. The aim of the work is to investigate the effects of different drawing parameters on drawing forces and stresses. In the simulations, optimum conditions were investigated for four different materials: Ti6Al4V, AA5052, AISI 4140, and C365. One of the most important parameters in the cold drawing process is the die angle. Three dies were designed for the analysis, with semi-die angles of 5°, 10°, and 15°. Three different values were used for the friction coefficient between the die and the material. In the simulations, the reduction of area and the drawing speed are kept constant, and drawing is done in one pass. According to the simulation results, the highest drawing forces were obtained for Ti6Al4V. As the semi-die angle increases, the drawing forces decrease; the change in semi-die angle was most effective for Ti6Al4V. Increasing the friction coefficient is another effect that increases the drawing forces, and it likewise increases the drawing stresses. The increase in die angle also increased the drawing stress distribution for the three materials other than C365. The analysis shows that the designed drawing dies are suitable for drawing. The lowest drawing stress distribution and drawing forces were obtained for AA5052. The drawing die parameters have a direct effect on the results; in addition, the lubricants used for drawing have a significant effect on drawing forces.
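As a hedged back-of-the-envelope check (the classical Sachs friction formula for drawing, not the FEM model above), the drawing stress for a fixed area reduction can be written as σ_d = Y·(1 + B)/B·(1 − (A_f/A_0)^B) with B = μ/tan(α). This friction-only formula reproduces the two force trends reported: stress (and hence force) falls as the semi-die angle α grows and rises with the friction coefficient μ. The flow stress and reduction below are assumed values.

```python
import math

# Classical Sachs drawing-stress estimate (illustrative inputs).

def drawing_stress(yield_stress, area_ratio, mu, semi_die_angle_deg):
    """Sachs formula: sigma_d = Y*(1+B)/B*(1 - r**B), B = mu / tan(alpha)."""
    B = mu / math.tan(math.radians(semi_die_angle_deg))
    return yield_stress * (1.0 + B) / B * (1.0 - area_ratio ** B)

Y, r = 400.0, 0.8          # assumed flow stress (MPa) and A_f/A_0 (fixed reduction)
for angle in (5.0, 10.0, 15.0):
    print(angle, round(drawing_stress(Y, r, mu=0.1, semi_die_angle_deg=angle), 1))
```

Note this simple formula omits the redundant-work term, which in practice makes very large die angles unfavorable again; the FEM simulations capture such effects directly.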

Keywords: cold tube drawing, drawing force, drawing stress, semi die angle

Procedia PDF Downloads 144
18205 Fast and Scale-Adaptive Target Tracking via PCA-SIFT

Authors: Yawen Wang, Hongchang Chen, Shaomei Li, Chao Gao, Jiangpeng Zhang

Abstract:

As the main challenges for target tracking are accounting for target scale change and maintaining real-time performance, we combine the Mean-Shift and PCA-SIFT algorithms to solve the problem. We introduce a similarity comparison method to determine how the target scale changes, and adopt different strategies for different situations. Since a growing target scale causes location error, we employ backward tracking to reduce the error. The Mean-Shift algorithm performs poorly when tracking a scale-changing target due to the fixed bandwidth of its kernel function. To overcome this problem, we introduce PCA-SIFT matching: through keypoint matching between the target and the template, the scale of the tracking window can be adjusted adaptively. Because this step is sensitive to wrong matches, we introduce RANSAC to reduce mismatches as far as possible. Furthermore, target relocation is triggered when the number of matches is too small. In addition, we take comprehensive account of target deformation and error accumulation to put forward a new template update method. Experiments on five image sequences and comparison with six other algorithms demonstrate the favorable performance of the proposed tracking algorithm.
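The RANSAC step above can be illustrated with a minimal sketch. This is not the tracker's code: the tracker applies RANSAC to PCA-SIFT keypoint matches, whereas the toy below fits a line to 2-D points, but the principle is identical: repeatedly fit a model to a random minimal sample and keep the hypothesis with the most inliers, so a few gross mismatches cannot corrupt the estimate.

```python
import random

# Minimal RANSAC line fit illustrating outlier-robust model estimation.

def ransac_line(points, iters=200, threshold=0.5, seed=1):
    """Return ((slope, intercept), inlier_count) of the best-supported line."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample: 2 points
        if x1 == x2:
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = sum(1 for x, y in points if abs(y - (m * x + b)) < threshold)
        if inliers > best_inliers:
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# points on y = 2x + 1, plus two gross outliers (simulated mismatches)
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -25)]
(m, b), n_in = ransac_line(pts)
print(round(m, 2), round(b, 2), n_in)  # slope ~2, intercept ~1, 10 inliers
```

In the tracker, the fitted model would instead be the similarity transform between template and target keypoints, and surviving inliers drive the adaptive window scaling.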

Keywords: target tracking, PCA-SIFT, mean-shift, scale-adaptive

Procedia PDF Downloads 410
18204 Scenario Based Reaction Time Analysis for Seafarers

Authors: Umut Tac, Leyla Tavacioglu, Pelin Bolat

Abstract:

The human factor has been one of the elements that cause vulnerabilities which can result in accidents in maritime transportation. When the roots of human-factor-based accidents are analyzed, gaps in cognitive abilities (reaction time, attention, memory…) emerge as the main reasons for vulnerabilities in the complex environment of maritime systems. Cognitive processes in maritime systems have thus become an important subject that should be investigated comprehensively. At this point, neurocognitive tests such as reaction time tests are coherent tools that enable valid assessments of cognitive status. In this respect, the aim of this study is to evaluate the reaction time (response time or latency) of seafarers as a function of their occupational experience and age. For this study, reaction times for different maneuvers were recorded while the participants performed a sea voyage in a simulator running a fixed scenario. After collecting the reaction time data, a statistical analysis was performed to understand the relation between occupational experience and cognitive abilities.
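One common form of the statistical analysis described, relating occupational experience to reaction time, is a simple correlation. The sketch below uses made-up values purely for illustration; the study's data come from the simulator scenario.

```python
import math

# Pearson correlation between experience and reaction time (hypothetical data).

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

experience_years = [1, 3, 5, 8, 12, 15, 20, 25]
reaction_ms = [820, 790, 760, 750, 700, 690, 660, 655]   # hypothetical means
r = pearson_r(experience_years, reaction_ms)
print(round(r, 3))  # strongly negative: faster responses with more experience
```

A negative coefficient of this kind would support the hypothesis that experience shortens maneuvering reaction times, though the study would also control for age.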

Keywords: cognitive abilities, human factor, neurocognitive test battery, reaction time

Procedia PDF Downloads 280
18203 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data

Authors: Nicola Colaninno, Eugenio Morello

Abstract:

The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than its rural surroundings. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperatures from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, their spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such approaches may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI for extensive areas based on weather station observations alone is not practicable. Alternatively, the use of thermal remote sensing based on LST has been widely investigated; data from Landsat, ASTER, or MODIS have been used extensively. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation, accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps. 
First, a GWR model has been set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved from weather stations (dependent variable) and LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer spatial resolution, have been employed. Two time periods are considered according to the satellite revisit times, i.e., 10:30 am and 9:30 pm. Afterward, the results have been downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), have been validated by cross-validation relying on indicators such as R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed to test the accuracy of the predicted temperatures, giving an RMSE of 0.6 for daytime and 0.7 for night-time.
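The IDW baseline mentioned in the abstract can be sketched in a few lines (illustrative only; the paper ultimately prefers GWR precisely because a plain distance-weighted average ignores spatial non-stationarity): the value at an unknown point is the inverse-distance-weighted average of the station observations.

```python
import math

# Inverse Distance Weighting interpolation sketch (hypothetical stations).

def idw(target, stations, values, power=2.0):
    """Distance-weighted average of station values at `target`."""
    weights = []
    for s, v in zip(stations, values):
        d = math.dist(target, s)
        if d == 0.0:                  # exactly on a station: return its reading
            return v
        weights.append((1.0 / d ** power, v))
    total = sum(w for w, _ in weights)
    return sum(w * v for w, v in weights) / total

# three hypothetical weather stations with air temperatures in deg C
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
temps = [21.0, 24.0, 19.0]
print(round(idw((2.0, 2.0), stations, temps), 2))  # dominated by the nearest station
```

Because the estimate is a pure convex combination of observations, IDW can never extrapolate beyond the observed range, which is one limit the LST-based GWR approach is meant to overcome.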

Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing

Procedia PDF Downloads 171