Search results for: initial cost
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8778

7158 Experimental Studies on Stress-Strain Behavior of Expanded Polystyrene Beads-Sand Mixture

Authors: K. N. Ashna

Abstract:

Lightweight fills are a viable alternative where weak soils such as soft clay, peat, and loose silt are encountered. Materials such as Expanded Polystyrene (EPS) geo-foam, plastics, tire wastes, and rubber wastes have been used along with soil in order to obtain a lightweight fill. Of these, Expanded Polystyrene (EPS) geo-foam has gained wide popularity in civil engineering over the past years due to its wide variety of applications. It is extremely lightweight, durable, and available in various densities to meet strength requirements. It can be used as backfill behind retaining walls to reduce lateral load, and as a fill over soft clay or weak soils to prevent excessive settlements and to reduce seismic forces. Geo-foam is available in block form as well as in bead form. In this project, Expanded Polystyrene (EPS) beads of various diameters and densities were mixed with sand to study the lightweight as well as strength properties of the mixture. Four types of EPS beads were used: 1 mm, 2 mm, 3-7 mm, and a mix of 1-7 mm. The EPS beads were varied at 0.25%, 0.5%, 0.75%, and 1% by weight of sand. A water content of 10% by weight of sand was added to prevent segregation of the mixture. Unconsolidated Undrained (UU) triaxial tests were conducted at 100 kPa, 200 kPa, and 300 kPa, and the angle of internal friction and cohesion were obtained. The unit weight of the mix was obtained for a relative density of 65%. The results showed that by increasing the EPS content by weight, the maximum deviator stress, unit weight, angle of internal friction, and initial elastic modulus decreased. An optimum EPS bead content was arrived at by considering the strength as well as the unit weight. The stress-strain behaviour of the mix was found to be dependent on the type of bead, bead content, and density of the beads. Finally, regression equations were developed to predict the initial elastic modulus of the mix.
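
The abstract reports regression equations for the initial elastic modulus but does not give their form. The following is a minimal sketch, assuming a simple multiple linear regression on bead content, bead diameter, and bead density; all numbers are illustrative placeholders, not the study's data.

```python
import numpy as np

# Hypothetical observations (not the study's data): EPS bead content
# (% by weight of sand), nominal bead diameter (mm), bead density (kg/m^3),
# and the measured initial elastic modulus E_i (MPa) from triaxial tests.
X_raw = np.array([
    [0.25, 1.0, 22.0],
    [0.50, 1.0, 22.0],
    [0.75, 2.0, 18.0],
    [1.00, 2.0, 18.0],
    [0.50, 5.0, 14.0],
    [1.00, 5.0, 14.0],
])
E_i = np.array([14.2, 12.8, 10.9, 9.1, 11.5, 8.4])  # MPa, illustrative only

# Ordinary least squares: E_i = b0 + b1*content + b2*diameter + b3*density
X = np.column_stack([np.ones(len(X_raw)), X_raw])
coef, *_ = np.linalg.lstsq(X, E_i, rcond=None)
print("regression coefficients:", coef)
print("predicted E_i at 0.6% content, 2 mm, 18 kg/m^3:",
      float(coef @ np.array([1.0, 0.6, 2.0, 18.0])))
```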

Keywords: expanded polystyrene beads, geofoam, lightweight fills, stress-strain behavior, triaxial test

Procedia PDF Downloads 260
7157 Syntax and Words as Evolutionary Characters in Comparative Linguistics

Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss

Abstract:

In the last couple of decades, the digitalization of all kinds of data was probably one of the major advances in all fields of study. This paves the way for analysing these data even when they come from disciplines with no initial computational tradition. Linguistics, in particular, has a rather manual tradition. Still, when considering studies that involve the history of language families, it is hard to overlook the striking similarities to bioinformatics (phylogenetic) approaches. Alignments of words are a fairly well-studied example of an application of bioinformatics methods to historical linguistics. In this paper, we consider not only alignments of strings, i.e., words in this case, but also alignments of syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to have the same root. Initially, all words in two languages are pre-aligned with a basic scoring model that primarily selects consonants and adjusts them before fitting in the vowels. Mixture models are subsequently used to filter ‘good’ alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to perform tree alignments of the given syntax trees and consequently find sentences that correspond rather well to each other across languages. The syntax alignments are then filtered for meaningful scores: ‘good’ scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further iterations of alignment and training steps are performed until the scoring model saturates, i.e., barely changes anymore. A more detailed evaluation of the trained scoring model and of how it captures evolutionarily meaningful information will be given. An assessment of sentence alignment compared to possible phrase structure will also be provided. The method described here may have its flaws because of limited prior information. It may, however, offer a good starting point for studying languages where only little prior knowledge is available and a detailed, unbiased study is needed.
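
As a rough illustration of the pre-alignment step described above, the sketch below performs a Needleman-Wunsch-style global alignment with a toy scoring model that favours consonant matches over vowel matches. The scores, gap penalty, and word pair are assumptions, not the paper's trained model.

```python
# Minimal global (Needleman-Wunsch style) alignment score for two words.
# Consonant matches are rewarded more than vowel matches; values are toy
# illustrations, not trained parameters.
VOWELS = set("aeiou")

def score(a, b):
    if a == b:
        return 3 if a not in VOWELS else 1
    return -1

def align(w1, w2, gap=-2):
    n, m = len(w1), len(w2)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = max(D[i - 1][j - 1] + score(w1[i - 1], w2[j - 1]),
                          D[i - 1][j] + gap,
                          D[i][j - 1] + gap)
    return D[n][m]

# A cognate candidate pair scores higher than an unrelated pair.
print(align("nacht", "night"))   # German / English 'night'
print(align("nacht", "fish"))
```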

Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods

Procedia PDF Downloads 148
7156 Chipless RFID Capacity Enhancement Using the E-pulse Technique

Authors: Haythem H. Abdullah, Hesham Elkady

Abstract:

With the fast increase in radio frequency identification (RFID) applications such as medical recording, library management, etc., the limitation of conventional tags stems from their need for external batteries (in active tags) as well as passive or active chips. The chipless RFID tag reduces the cost to a large extent, but at the expense of utilizing the spectrum. The reduction in the cost of chipless RFID is due to the absence of the chip itself. The identification is done by utilizing the spectrum in such a way that the frequency response of the tag consists of a set of resonance frequencies that represent the bits. The system capacity is determined by the number of resonators within the pre-specified band. It is therefore important to find a solution that enhances spectrum utilization when using chipless RFID. Target identification is a process that results in a decision that a specific target is present or not. Several target identification schemes exist, but one of the most successful techniques in radar target identification in the oscillatory region is the extinction pulse technique (E-Pulse). The E-Pulse technique identifies targets via their characteristic (natural) modes. By introducing an innovative solution for chipless RFID reader and tag designs, spectrum utilization approaches the optimum case. In this paper, a novel capacity enhancement scheme based on the E-pulse technique is introduced to improve the performance of the chipless RFID system.

Keywords: chipless RFID, E-pulse, natural modes, resonators

Procedia PDF Downloads 67
7155 Academic Knowledge Transfer Units in the Western Balkans: Building Service Capacity and Shaping the Business Model

Authors: Andrea Bikfalvi, Josep Llach, Ferran Lazaro, Bojan Jovanovski

Abstract:

Due to the continuous need to foster university-business cooperation in both developed and developing countries, some higher education institutions face the challenge of designing, piloting, operating, and consolidating knowledge and technology transfer units. University-business cooperation is at different maturity stages worldwide, with some higher education institutions excelling in these practices, many others that could be qualified as intermediate, and some situated at the very beginning of their knowledge transfer adventure. The latter face the imminent necessity to formally create a technology transfer unit and to draw its roadmap. The complexity of this operation is due to the various aspects that need to be aligned and coordinated, including a major change in mission, vision, structure, priorities, and operations. Qualitative in approach, this study presents 5 case studies of higher education institutions located in the Western Balkans (2 in Albania, 2 in Bosnia and Herzegovina, 1 in Montenegro) that are fully immersed in the entrepreneurial journey of creating their knowledge and technology transfer unit. The empirical evidence is developed in a pan-European project, illustratively called KnowHub (reconnecting universities and enterprises to unleash regional innovation and entrepreneurial activity), which is being implemented in three countries and has resulted in at least 15 pilot cooperation agreements between academia and business. Based on a peer-mentoring approach involving the more experienced and more mature technology transfer models of European partners located in Spain, Finland, and Austria, a series of initial lessons learned are already available. The findings show that each unit developed its own tailor-made approach to engage with internal and external stakeholders and to offer value to academic staff, students, as well as business partners. The latest technology underpinning KnowHub services and institutional commitment are found to be key success factors. Although specific strategies and plans differ, they are based on a general strategy jointly developed using common tools and methods of strategic planning and business modelling. The main output consists of good practice for the design, piloting, and initial operation of units aiming to fully valorise the knowledge and expertise available in academia. Policymakers can also find valuable hints on key aspects considered vital for initial operations. The value of this contribution is its focus on the intersection of three perspectives (service orientation, organisational innovation, business model), since previous research has mostly relied on a single topic or dual approaches, most frequently in the business context and less frequently in higher education.

Keywords: business model, capacity building, entrepreneurial education, knowledge transfer

Procedia PDF Downloads 137
7154 Optimization of Lubricant Distribution with Alternative Coordinates and Number of Warehouses Considering Truck Capacity and Time Windows

Authors: Taufik Rizkiandi, Teuku Yuri M. Zagloel, Andri Dwi Setiawan

Abstract:

Distribution and growth in the transportation and warehousing business sector decreased by 15.04%. The sector's contribution to Gross Domestic Product (GDP) fell from 4.41% (rank 7) in 2019 to 3.81% (rank 8) in 2020. This decline in the sector's contribution to GDP has led oil and gas companies to implement an efficient supply chain strategy to ensure the availability of goods, especially lubricants. Fluctuating demand for lubricants and warehouse service time limits are essential factors in determining an efficient route. Adding depot points is one solution to ensure that demand for lubricants is fulfilled (no stock-outs). However, adding a depot increases operating costs and storage costs. Therefore, it is necessary to optimize the addition of depots using the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). This case study was conducted at an oil and gas company that produces lubricants, using data from 2019 to 2021. The study results give the optimal route and the addition of a depot with a minimum additional cost. The total cost remains efficient with the addition of a depot when compared to serving from the single depot in Jakarta.
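
To make the CVRPTW formulation concrete, the sketch below evaluates a single candidate route against the two constraints named above, truck capacity and customer time windows. All coordinates, demands, windows, and service times are hypothetical; a tabu search would repeatedly perturb such routes and re-evaluate them with exactly this kind of check.

```python
# Feasibility-and-cost check for one CVRPTW route; data are illustrative.
import math

coords   = {0: (0, 0), 1: (3, 4), 2: (6, 1), 3: (2, 7)}   # 0 = depot
demand   = {1: 4, 2: 6, 3: 3}                              # lubricant drums
windows  = {0: (0, 100), 1: (5, 30), 2: (10, 50), 3: (20, 60)}
service  = 2.0          # unloading time at each customer
capacity = 15           # truck capacity

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def evaluate(route):
    """Return (feasible, total_distance) for a route like [0, 1, 3, 2, 0]."""
    load = sum(demand[c] for c in route if c != 0)
    if load > capacity:
        return False, float("inf")
    t, total = 0.0, 0.0
    for a, b in zip(route, route[1:]):
        total += dist(a, b)
        t = max(t + dist(a, b), windows[b][0])   # wait if arriving early
        if t > windows[b][1]:                    # late arrival: infeasible
            return False, float("inf")
        if b != 0:
            t += service
    return True, total

# Compare two candidate routes; a tabu search would iterate over many such moves.
print(evaluate([0, 1, 3, 2, 0]))
print(evaluate([0, 2, 1, 3, 0]))
```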

Keywords: CVRPTW, optimal route, depot, tabu search algorithm

Procedia PDF Downloads 131
7153 Preliminary Composite Overwrapped Pressure Vessel Design for Hydrogen Storage Using Netting Analysis and American Society of Mechanical Engineers Section X

Authors: Natasha Botha, Gary Corderely, Helen M. Inglis

Abstract:

With the move to cleaner energy applications, the transport industry is working towards on-board hydrogen or compressed natural gas-fuelled vehicles. A popular method for storage is to use composite overwrapped pressure vessels (COPV) because of their high strength-to-weight ratios. The proper design of these COPVs is governed by international standards; this study aims to provide a preliminary design for a 350 bar Type IV COPV (i.e., a polymer liner with a composite overwrap). Netting analysis, a popular analytical approach, is used as a first step to generate an initial design concept for the composite winding. This design is further improved upon by following the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section X: Fibre-reinforced composite pressure vessels. A design program based on these two approaches is developed using Python. A numerical model of a burst test simulation is developed for each of the two approaches, and the results are compared. The results indicate that netting analysis provides a good preliminary design, while the ASME-based design is more robust and accurate, as it includes a better approximation of the material behaviour. Netting analysis is an easy method to follow when considering an initial concept design for the composite winding when not all the material characteristics are known. Once these characteristics have been fully defined with experimental testing, an ASME-based design should always be followed to ensure that all designs conform to international standards and practices. Future work entails more detailed numerical testing of the design for improvement, including the boss design. Once finalised, prototype manufacturing and experimental testing will be conducted, and the results used to improve the COPV design.
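
As an illustration of the netting-analysis step, the sketch below sizes the helical and hoop fibre thicknesses of the cylindrical section using the classic netting equations (fibres carry all the load). The radius, fibre strength, winding angle, and safety factor are assumed values, not the study's design inputs.

```python
# Netting-analysis sizing sketch for the cylinder of a Type IV COPV.
import math

p       = 35.0e6               # design pressure, Pa (350 bar)
r       = 0.175                # liner radius, m (assumed)
sigma_f = 2.0e9                # allowable fibre stress, Pa (assumed)
alpha   = math.radians(15.0)   # helical winding angle from the vessel axis
sf      = 2.25                 # burst safety factor (assumed)

p_burst = sf * p
# Classic netting-analysis results for a helical + hoop wound cylinder:
#   t_helical = p*r / (2*sigma_f*cos(alpha)^2)
#   t_hoop    = p*r*(2 - tan(alpha)^2) / (2*sigma_f)
t_helical = p_burst * r / (2 * sigma_f * math.cos(alpha) ** 2)
t_hoop    = p_burst * r * (2 - math.tan(alpha) ** 2) / (2 * sigma_f)

print(f"helical fibre thickness: {t_helical * 1e3:.2f} mm")
print(f"hoop fibre thickness:    {t_hoop * 1e3:.2f} mm")
```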

Keywords: composite overwrapped pressure vessel, netting analysis, design, American Society of Mechanical Engineers Section X, fiber-reinforced, hydrogen storage

Procedia PDF Downloads 243
7152 An Analysis of New Service Interchange Designs

Authors: Joseph E. Hummer

Abstract:

An efficient freeway system will be essential to the development of Africa, and interchanges are a key to that efficiency. Around the world, many interchanges between freeways and surface streets, called service interchanges, are of the diamond configuration, and interchanges using roundabouts or loop ramps are also popular. However, many diamond interchanges have serious operational problems, interchanges with roundabouts fail at high demand levels, and loops use lots of expensive land. Newer service interchange designs provide other options. The most popular new interchange design in the US at the moment is the double crossover diamond (DCD), also known as the diverging diamond. The DCD has enormous potential, but also has several significant limitations. The objectives of this paper are to review new service interchange options and to highlight some of the main features of those alternatives. The paper tests four conventional and seven unconventional designs using seven measures related to efficiency, cost, and safety. The results show that there is no superior design in all measures investigated. The DCD is better than most designs tested on most measures examined. However, the DCD was only superior to all other designs for bridge width. The DCD performed relatively poorly for capacity and for serving pedestrians. Based on the results, African freeway designers are encouraged to investigate the full range of alternatives that could work at the spot of interest. Diamonds and DCDs have their niches, but some of the other designs investigated could be optimum at some spots.

Keywords: interchange, diamond, diverging diamond, capacity, safety, cost

Procedia PDF Downloads 248
7151 Challenges and Implications for Choice of Caesarian Section and Natural Birth in Pregnant Women with Pre-Eclampsia in Western Nigeria

Authors: F. O. Adeosun, I. O. Orubuloye, O. O. Babalola

Abstract:

Although caesarean section has greatly improved obstetric care throughout the world, in developing countries there is a great aversion to it. This study was carried out to examine the rate at which pregnant women with pre-eclampsia choose caesarean section over natural birth. A cross-sectional study was conducted among 500 pre-eclampsia antenatal clients seen at the States University Teaching Hospitals in the last one year. The sample selection was purposive. Information on their educational background, beliefs, and attitudes was collected. Data analysis was presented using simple percentages. Of the 500 women studied, 38% favored caesarean section while 62% were against it. About 89% of them understood what a caesarean section is, yet 57.3% of those who understood it would still not choose it as an option. Over 85% of the women believed caesarean section is done for medical reasons. If caesarean section were given as an option for childbirth, 38% would go for it, 29% would try religious intervention, 5.5% would not choose it because of fear, while 27.5% would reject it because they believe it is culturally wrong. The majority of respondents (85%) who favored caesarean delivery were aware of the risk attached to choosing vaginal birth and would go the extra mile in sourcing funds for a caesarean section, while over 64% could not afford the cost of caesarean delivery. It is therefore pertinent to encourage research into prediction methods and prevention of occurrence, since this would assist patients in planning how to finance treatment.

Keywords: caesarean section, choice, cost, pre eclampsia, prediction methods

Procedia PDF Downloads 313
7150 A Folk’s Theory of the MomConnect (mHealth) Initiative in South Africa

Authors: Eveline Muika Kabongo, Peter Delobelle, Ferdinand Mukumbang, Edward Nicol

Abstract:

Introduction: Studies have been conducted to establish the effect of the MomConnect program in South Africa, but these studies did not focus on the stakeholders' and implementers' perspectives or on the underlying program theory of the MomConnect initiative. We strived to obtain stakeholders' perspectives and assumptions on the MomConnect program and to develop an initial program theory (IPT) of how the MomConnect initiative was expected to work. Methods: A realist-informed explanatory design was used. Interviews were performed with 10 key informants selected purposively among MomConnect key informants at the national level of the NDoH South Africa. The interviews were done via Zoom and lasted 30 to 60 minutes. Abduction inferencing approaches were applied, and both deductive and inductive approaches were used during the analysis. The ICAMO heuristic framework was used to analyse the data in order to capture the key informants' expectations of how the MomConnect would or would not work. Results: We developed three folk’s theories illustrating how the key informants expected the MomConnect to work. These theories showed that the MomConnect intended to provide users with health information and education that would empower and motivate them with knowledge, allowing improved health service delivery among HCPs, improved uptake of MCH services among pregnant women and mothers, and a decrease in the rate of maternal and child mortality in the country. The lack of an updated mechanism to link women to the outcome was an issue. Another problem highlighted was the introduction of the WhatsApp program instead of SMS messaging, which was free of charge to women. Conclusion: The folk’s theories developed from this study provide insight into how the MomConnect was expected to work and what did not work. The folk’s theories will be merged with information from candidate theories from the synthesis review and document review to develop our initial program theory of the MomConnect initiative.

Keywords: mHealth, MomConnect program, realist evaluation, maternal and child health, maternal and child health services, introduction, theory-driven

Procedia PDF Downloads 186
7149 Bayesian Variable Selection in Quantile Regression with Application to the Health and Retirement Study

Authors: Priya Kedia, Kiranmoy Das

Abstract:

There is a rich literature on variable selection in the regression setting. However, most of these methods assume normality of the response variable for implementing the methodology and establishing the statistical properties of the estimates. In many real applications, the distribution of the response variable may be non-Gaussian, and one might be interested in finding the best subset of covariates at some predetermined quantile level. We develop a dynamic Bayesian approach for variable selection in the quantile regression framework. We use a zero-inflated mixture prior for the regression coefficients and consider the asymmetric Laplace distribution for the response variable for modeling different quantiles of its distribution. An efficient Gibbs sampler is developed for our computation. Our proposed approach is assessed through extensive simulation studies, and a real application of the proposed approach is also illustrated. We consider the data from the health and retirement study conducted by the University of Michigan, and select the important predictors when the outcome of interest is out-of-pocket medical cost, which is considered an important measure of financial risk. Our analysis finds important predictors at different quantiles of the outcome, and thus enhances our understanding of the effects of different predictors on out-of-pocket medical cost.
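
The sketch below illustrates the check (pinball) loss that links quantile regression to the asymmetric Laplace likelihood mentioned above: maximising an asymmetric Laplace likelihood at quantile level tau is equivalent to minimising this loss. The data are simulated, and the crude grid search merely visualises the objective that the paper's Gibbs sampler explores under a zero-inflated mixture prior; it is not the authors' algorithm.

```python
import numpy as np

def check_loss(u, tau):
    """rho_tau(u) = u * (tau - I(u < 0)), the quantile-regression loss."""
    return u * (tau - (u < 0).astype(float))

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=200)   # heavy-tailed noise
tau = 0.9

# Crude grid search over (intercept, slope) just to show the objective.
grid = np.linspace(-1, 4, 101)
best = min(((b0, b1) for b0 in grid for b1 in grid),
           key=lambda b: check_loss(y - b[0] - b[1] * x, tau).sum())
print("tau=0.9 coefficients (grid search):", best)
```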

Keywords: variable selection, quantile regression, Gibbs sampler, asymmetric Laplace distribution

Procedia PDF Downloads 153
7148 A Systematic Review on Orphan Drugs Pricing, and Prices Challenges

Authors: Seyran Naghdi

Abstract:

Background: Orphan drug development is limited by the very high costs attributed to research and development and a small market size. How health policymakers address this challenge while considering both the supply and demand sides needs to be explored in order to direct policies and plans in the right way. The price is an important signal for pharmaceutical companies' profitability as well as for patients' accessibility. Objective: This study aims to find out the orphan drug price-setting patterns and approaches in health systems through a systematic review of the available evidence. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach was used. MEDLINE, Embase, and Web of Science were searched via appropriate search strategies. The appropriate Medical Subject Headings (MeSH) terms were 'cost and cost analysis' for pricing, and 'orphan drug production' and 'orphan drug' for orphan drugs. The critical appraisal was performed with the Joanna Briggs tool. A Cochrane data extraction form was used to obtain data on the studies' characteristics, results, and conclusions. Results: In total, 1,197 records were found: 640 hits from Embase, 327 from Web of Science, and 230 from MEDLINE. After removing the duplicates, 1,056 studies remained. Of these, 924 were removed in the primary screening phase, and 26 of the remaining studies were included for data extraction. The majority of the studies (>75%) are from developed countries; among them, approximately 80% are from European countries. Approximately 85% of the evidence has been produced in the recent decade. Conclusions: There is huge variation in price-setting among countries, and this is related to the specific pharmaceutical market structure and the thresholds at which governments want to intervene in the pricing process. On the other hand, there is some evidence that there is room to reduce the very high costs of orphan drug development through early agreements between pharmaceutical firms and governments. Further studies need to focus on how governments could incentivize companies to agree to provide the drugs at lower prices.

Keywords: orphan drugs, orphan drug production, pricing, costs, cost analysis

Procedia PDF Downloads 162
7147 Performants: Making the Organization of Concerts Easier

Authors: Ioannis Andrianakis, Panagiotis Panagiotopoulos, Kyriakos Chatzidimitriou, Dimitrios Tampakis, Manolis Falelakis

Abstract:

Live music, whether performed in organized venues, restaurants, hotels, or any other spots, creates value chains that support local economies and tourism development. In this paper, we describe PerformAnts, a platform that increases the mobility of musicians and their accessibility to remotely located venues by rationalizing the cost of live acts. By analyzing their event history and taking into account their potential availability, the platform provides bespoke recommendations to both bands and venues, while also facilitating the organization of tours and helping rationalize transportation expenses through an innovative mechanism called “chain booking”. Moreover, the platform provides an environment where complicated tasks such as technical and financial negotiations, concert promotion, or copyright handling are easily managed by users following best practices. The proposed solution provides important benefits to the whole spectrum of small and medium-size concert organizers, as the complexity and the cost of the production are rationalized. The environment is also very beneficial for local talent, musicians who are very mobile, venues located away from large urban areas or in touristic destinations, and managers, who will be in a position to coordinate a larger number of musicians without extra effort.

Keywords: machine learning, music industry, creative industries, web applications

Procedia PDF Downloads 91
7146 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of losses of productivity due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to meet the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account during the calculation of the cost of an engineering change or contract modification, despite several research projects having addressed this subject. However, the proposed models have not been accepted by companies yet, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities. The speed of productivity reduction for 30-job projects is about 25 percent faster than the reduction speed for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted.

Keywords: engineering changes, indirect costs overtime, productivity, scheduling, simulation

Procedia PDF Downloads 235
7145 Seismic Performance Evaluation of Structures with Hybrid Dampers Based on FEMA P-58 Methodology

Authors: Minsung Kim, Hyunkoo Kang, Jinkoo Kim

Abstract:

In this study, a hybrid energy dissipation device is developed by combining a steel slit plate and friction pads to be used for the seismic retrofit of structures, and its effectiveness is investigated by comparing the life cycle costs of the structure before and after the retrofit. The seismic energy dissipation capability of the dampers is confirmed by cyclic loading tests. The probabilities of reaching various damage states are obtained by fragility analysis, and the life cycle costs of the model structures are computed using the PACT (Performance Assessment Calculation Tool) program based on the FEMA P-58 methodology. The fragility analysis shows that the probabilities of reaching the limit states are minimized by the seismic retrofit with hybrid dampers and an increased column size. The seismic retrofit with increased column size and hybrid dampers results in the lowest repair cost and shortest repair time. This research was supported by a grant (13AUDP-B066083-01) from the Architecture & Urban Development Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
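
The sketch below shows the kind of lognormal fragility curve and expected-repair-cost aggregation used in FEMA P-58 style loss estimation. The median drifts, dispersions, and repair costs are illustrative placeholders, not values from this study or from PACT.

```python
# Lognormal fragility curves and a simple expected-repair-cost sum.
from math import log, erf, sqrt

def p_exceed(edp, median, beta):
    """P(damage state reached | demand edp), lognormal CDF."""
    return 0.5 * (1.0 + erf(log(edp / median) / (beta * sqrt(2.0))))

damage_states = [          # (median drift ratio, dispersion, repair cost)
    (0.005, 0.4, 20_000),  # slight
    (0.010, 0.4, 80_000),  # moderate
    (0.020, 0.5, 250_000), # extensive
]

drift = 0.012  # peak storey drift from response analysis (illustrative)
probs = [p_exceed(drift, m, b) for m, b, _ in damage_states]
# Probability of being *in* each state = P(reach this) - P(reach next).
in_state = [probs[i] - (probs[i + 1] if i + 1 < len(probs) else 0.0)
            for i in range(len(probs))]
expected_cost = sum(p * c for p, (_, _, c) in zip(in_state, damage_states))
print("P(exceed each state):", [round(p, 3) for p in probs])
print("expected repair cost:", round(expected_cost))
```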

Keywords: FEMA P-58, friction dampers, life cycle cost, seismic retrofit

Procedia PDF Downloads 331
7144 The Use of Ultrasound as a Safe and Cost-Efficient Technique to Assess Visceral Fat in Children with Obesity

Authors: Bassma A. Abdel Haleem, Ehab K. Emam, George E. Yacoub, Ashraf M. Salem

Abstract:

Background: Obesity is an increasingly common problem in childhood. Childhood obesity is considered the main risk factor for the development of metabolic syndrome (MetS) (type 2 diabetes, dyslipidemia, and hypertension). Recent studies estimate that 30-60% of children with obesity will develop MetS. Visceral fat thickness is a valuable predictor of the development of MetS. Computed tomography and dual-energy X-ray absorptiometry are the main techniques to assess visceral fat. However, they carry the risk of radiation exposure and are expensive procedures. Consequently, they are seldom used in the assessment of visceral fat in children. Some studies have explored the potential of ultrasound as a substitute for assessing visceral fat in the elderly and found promising results. Given the vulnerability of children to radiation exposure, we sought to evaluate ultrasound as a safer and more cost-efficient alternative for measuring visceral fat in obese children. Additionally, we assessed the correlation between visceral fat and obesity indicators such as insulin resistance. Methods: A cross-sectional study was conducted on 46 children with obesity (aged 6–16 years). Their visceral fat was evaluated by ultrasound. Subcutaneous fat thickness (SFT), i.e., the measurement from the skin-fat interface to the linea alba, and visceral fat thickness (VFT), i.e., the thickness from the linea alba to the aorta, were measured and correlated with anthropometric measures, fasting lipid profile, homeostatic model assessment for insulin resistance (HOMA-IR), and liver enzymes (ALT). Results: VFT assessed via ultrasound was found to correlate strongly with BMI and HOMA-IR, with an AUC for VFT as a predictor of insulin resistance of 0.858 and a cut-off point of >2.98. VFT also correlated positively with serum triglycerides and serum ALT, and negatively with HDL. Conclusions: Ultrasound, a safe and cost-efficient technique, could be a useful tool for measuring abdominal fat thickness in children with obesity. Ultrasound-measured VFT could be an appropriate prognostic factor for insulin resistance, hypertriglyceridemia, and elevated liver enzymes in obese children.

Keywords: metabolic syndrome, pediatric obesity, sonography, visceral fat

Procedia PDF Downloads 118
7143 Interactions and Integration: Implications of Victim-Agent Portrayals for Refugees and Asylum Seekers in Germany

Authors: Denise Muro

Abstract:

Conflict in Syria, producing over 11 million displaced persons, has drawn global attention to displacement. Although neighboring countries have borne the largest part of the displacement burden, due to the influx of refugees into Europe the so-called ‘refugee crisis’ is taking place on two fronts: Syria’s neighboring countries, hosting millions of refugees, and Europe, a destination goal for so many that European states face unprecedented challenges. With increasing attention to displacement, forcibly displaced persons are consistently portrayed either as un-agentic victims or as dangerous free agents. Recognizing that these dominant portrayals involve discourses of power and inequality, this research investigates the extent to which this victim-agent dichotomy affects refugees and the organizations that work closely with them during initial integration processes in Berlin, Germany. The research measures initial integration based on German policy measures regarding integration, juxtaposed with the way refugees and those who work with them understand integration. Additionally, the study examines day-to-day interactions of refugees in Germany as a way to gauge social integration in a bottom-up approach. This study involved a discourse analysis of portrayals of refugees, and participant observation and interviews with refugees and those who work closely with them, which took place during fieldwork in Berlin in the summer of 2016. Germany is unique regarding its migration history and lack of successful integration, in part due to the persistent refrain, ‘Wir sind kein Einwanderungsland’ (‘We are not an immigration country’). Still, its accepted asylum seeker population has grown exponentially in the past few years. Findings suggest that the victim-agent dichotomy is present and impactful in the process of refugees entering and integrating into Germany. Integration is hindered due to refugees being either patronized or criminalized to such an extent that, despite being constantly told that they must integrate, they cannot become part of German society.

Keywords: discourse analysis, Germany, integration, refugee crisis

Procedia PDF Downloads 268
7142 Predicting Factors for Occurrence of Cardiac Arrest in Critical, Emergency and Urgency Patients in an Emergency Department

Authors: Angkrit Phitchayangkoon, Ar-Aishah Dadeh

Abstract:

Background: A key aim of triage is to identify patients with a high risk of cardiac arrest because they require intensive monitoring, resuscitation facilities, and early intervention. We aimed to identify predicting factors such as initial vital signs, serum pH, serum lactate level, initial capillary blood glucose, and the Modified Early Warning Score (MEWS) which affect the occurrence of cardiac arrest in an emergency department (ED). Methods: We conducted a retrospective data review of ED patients from 1 August 2014 to 31 July 2016. Significant variables in the univariate analysis were used to create a multivariate analysis. Differentiation of predicting factors between cardiac arrest and non-cardiac arrest patients for the occurrence of cardiac arrest in the ED was the primary outcome. Results: The data of 527 non-trauma patients with Emergency Severity Index (ESI) 1-3 were collected. The factors found to have a significant association (P < 0.05) in the non-cardiac arrest group versus the cardiac arrest group at the ED were systolic BP (mean [IQR] 135 [114,158] vs 120 [90,140] mmHg), oxygen saturation (mean [IQR] 97 [89,98] vs 82.5 [78,95]%), GCS (mean [IQR] 15 [15,15] vs 11.5 [8.8,15]), normal sinus rhythm (59.8 vs 30%), sinus tachycardia (46.7 vs 21.7%), pH (mean [IQR] 7.4 [7.3,7.4] vs 7.2 [7,7.3]), serum lactate (mean [IQR] 2 [1.1,4.2] vs 7 [5,10.8]), and MEWS score (mean [IQR] 3 [2,5] vs 5 [3,6]). A multivariate analysis was then performed. After adjusting for multiple factors, ESI level 2 patients were more likely to have cardiac arrest in the ER compared with ESI 1 (odds ratio [OR], 1.66; P < 0.001). Furthermore, ESI 2 patients were more likely than ESI 1 patients to have cardiovascular disease (OR, 1.89; P = 0.01), heart rate < 55 (OR, 6.83; P = 0.18), SBP < 90 (OR, 3.41; P = 0.006), SpO2 < 94 (OR, 4.76; P = 0.012), sinus tachycardia (OR, 4.32; P = 0.002), lactate > 4 (OR, 10.66; P < 0.001), and MEWS > 4 (OR, 4.86; P = 0.028). These factors remained predictive of cardiac arrest at the ED. Conclusion: The factors related to cardiac arrest in the ED are ESI 1, ESI 2, a diagnosis of cardiovascular disease, SpO2 < 94, lactate > 4, and MEWS > 4. These factors can be used as markers in the event of the simultaneous arrival of many patients and can help identify in advance patients who have a tendency to develop cardiac arrest. The hemodynamic status and vital signs of these patients should be closely monitored. Early detection of potentially critical conditions, in order to prevent the need for critical medical intervention, is mandatory.
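
The adjusted odds ratios above presumably come from a multivariable logistic regression, in which OR = exp(coefficient). The sketch below shows that relationship for a few of the reported predictors; the intercept and the example patient vectors are hypothetical and purely illustrative, not fitted to the study's data.

```python
import math

# Hypothetical fitted coefficients for predictors of ED cardiac arrest.
coef = {
    "intercept": -4.0,                 # assumed baseline log-odds
    "lactate_gt_4": math.log(10.66),   # OR 10.66 reported in the abstract
    "spo2_lt_94":  math.log(4.76),     # OR 4.76
    "mews_gt_4":   math.log(4.86),     # OR 4.86
}

def risk(patient):
    """Predicted probability of cardiac arrest for a 0/1 predictor vector."""
    z = coef["intercept"] + sum(coef[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-z))

print(risk({"lactate_gt_4": 1, "spo2_lt_94": 1, "mews_gt_4": 0}))
print(risk({"lactate_gt_4": 0, "spo2_lt_94": 0, "mews_gt_4": 0}))
```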

Keywords: cardiac arrest, predicting factor, emergency department, emergency patient

Procedia PDF Downloads 158
7141 Towards Human-Interpretable, Automated Learning of Feedback Control for the Mixing Layer

Authors: Hao Li, Guy Y. Cornejo Maceda, Yiqing Li, Jianguo Tan, Marek Morzynski, Bernd R. Noack

Abstract:

We propose an automated analysis of the flow control behaviour from an ensemble of control laws and associated time-resolved flow snapshots. The input may be the rich database of machine learning control (MLC) optimizing a feedback law for a cost function in the plant. The proposed methodology provides (1) insights into the control landscape, which maps control laws to performance, including extrema and ridge-lines, (2) a catalogue of representative flow states and their contribution to cost function for investigated control laws and (3) visualization of the dynamics. Key enablers are classification and feature extraction methods of machine learning. The analysis is successfully applied to the stabilization of a mixing layer with sensor-based feedback driving an upstream actuator. The fluctuation energy is reduced by 26%. The control replaces unforced Kelvin-Helmholtz vortices with subsequent vortex pairing by higher-frequency Kelvin-Helmholtz structures of lower energy. These efforts target a human interpretable, fully automated analysis of MLC identifying qualitatively different actuation regimes, distilling corresponding coherent structures, and developing a digital twin of the plant.

Keywords: machine learning control, mixing layer, feedback control, model-free control

Procedia PDF Downloads 217
7140 Modelling of Recovery and Application of Low-Grade Thermal Resources in the Mining and Mineral Processing Industry

Authors: S. McLean, J. A. Scott

Abstract:

This research focuses on improving sustainable operation through the recovery and reuse of waste heat in process water streams, an area in the mining industry that is often overlooked. There are significant economic and environmental benefits to doing so. The smelting process in the mining industry presents an opportunity to recover waste heat and apply it to alternative uses, thereby enhancing the overall process. This applied research has been conducted at the Sudbury Integrated Nickel Operations smelter site, in particular on the water cooling towers. The aim was to determine and optimize methods for the appropriate recovery and subsequent upgrading of thermally low-grade heat lost from the water cooling towers in a manner that makes it useful for repurposing in applications such as an acid plant. This would be valuable to mining companies as it would be an opportunity to reduce the cost of the process, as well as to decrease environmental impact and primary fuel usage. The waste heat from the cooling towers needs to be upgraded before it can be beneficially applied, as lower temperatures reduce the number of potential applications. Temperature and flow rate data were collected from the water cooling towers at an acid plant over two years. The research includes process control strategies and the development of a model capable of determining whether the proposed heat recovery technique is economically viable, as well as assessing any environmental impact associated with the reduction in net energy consumption by the process. Therefore, comprehensive cost and impact analyses are carried out to determine the best area of application for the recovered waste heat. This method will allow engineers to easily identify the value of the thermal resources available to them and determine whether a full feasibility study should be carried out. The rapid scoping model developed will be applicable to any site that generates large amounts of waste heat. Results show that heat pumps are an economically viable solution for this application, allowing for reduced cost and CO₂ emissions.
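
A rapid-scoping calculation of the kind described above might look like the sketch below, which estimates the heat recoverable from the cooling-water stream, the heat-pump electricity required, and a simple payback. Every number (flow, temperature drop, COP, energy prices, capital cost) is an assumed placeholder, not Sudbury plant data.

```python
# Simple heat-pump upgrade economics for a cooling-water waste-heat stream.
CP_WATER = 4.186        # kJ/(kg*K)

flow_kg_s   = 50.0      # cooling-water flow (assumed)
dT_recover  = 8.0       # temperature drop taken from the stream, K (assumed)
cop         = 3.5       # heat-pump coefficient of performance (assumed)
hours       = 8000      # operating hours per year
gas_price   = 0.04      # $/kWh of displaced primary fuel (assumed)
elec_price  = 0.09      # $/kWh of heat-pump electricity (assumed)
capex       = 1.5e6     # installed heat-pump cost, $ (assumed)

q_recovered_kw = flow_kg_s * CP_WATER * dT_recover        # heat drawn, kW
# Heat pump: Q_h = Q_c * COP / (COP - 1), compressor power W = Q_h / COP.
q_delivered_kw = q_recovered_kw * cop / (cop - 1.0)
elec_kw        = q_delivered_kw / cop

annual_saving = hours * (q_delivered_kw * gas_price - elec_kw * elec_price)
print(f"heat delivered: {q_delivered_kw:.0f} kW")
print(f"simple payback: {capex / annual_saving:.1f} years")
```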

Keywords: environment, heat recovery, mining engineering, sustainability

Procedia PDF Downloads 106
7139 Experimental Study of Application of Steel Slag as Aggregate in Road Construction

Authors: Meftah M. Elsaraiti, Samir Milad Elsariti

Abstract:

Steel slag is a by-product of steel production, and utilizing it as a new or substitute material in road construction is advantageous in terms of cost reduction and improvement of pavement properties. Ease of use, low cost, and resource availability are a few of the advantages of reusing and recycling steel slag. This study assesses the use of Steel Slag Aggregates (SSA) as an alternative to natural road-building aggregates. This paper discusses the basic characteristics of steel slag based on extensive laboratory tests and determines the possibilities of using steel slag in road construction. Samples were taken directly from the furnaces at different times and dates. Moreover, random samples were also taken from the slag field from various areas at different distances from each other. The necessary chemical analysis was performed using X-ray fluorescence (XRF). Three different percentages of SSA (0, 50, and 100%) were added as an alternative to natural aggregate in hot mix asphalt (HMA) production. The mix was designed according to the Marshall mix design method. The results of the experiments revealed that the percentage of iron oxide ranged from 9 to 26% and that the addition of SSA significantly improved the HMA properties. It was observed that the Marshall stability obtained in the mix with 100% slag ranged from 600 to 800 N as a minimum, and the Marshall flow obtained was from 2.4 to 3.23 mm, while the specification requires 2 to 4 mm. The results show that it may be possible to use steel slag as a new or substitute material in road construction in Libya.

Keywords: by-product material, properties, road construction, steel slag

Procedia PDF Downloads 183
7138 Extension of D Blast Furnace Campaign Life at Tata Steel Ltd

Authors: Biswajit Seal, Dushyant Kumar, Shambhu Nath, A. B. Raju

Abstract:

Extension of blast furnace campaign life is highly desired by blast furnace operators, mainly to reduce operating costs and to avoid capital expenditure. Tata Steel Ltd's Jamshedpur plant operates seven blast furnaces with a combination of old and new technologies. The focus of Tata Steel Ltd is to push for increased productivity with a good quality product and an increased campaign life. This has been challenging for the older furnaces because they are generally equipped with less automation, older designs, and older equipment. Good operational practices, appropriate remedial measures, and regular planned maintenance help to achieve a long campaign life for old furnaces. Good operating practices such as stable and consistent productivity and control of burden distribution, remedial measures such as stack gunning and shotcreting for protection of the stack wall, an enhanced cooling system, and intermediate stack repairs help to achieve a long campaign life for old blast furnaces. This paper describes experiences with the current old equipment and design of Tata Steel's D Blast Furnace for campaign life extension.

Keywords: blast furnace, burden distribution, campaign life, productivity

Procedia PDF Downloads 259
7137 Linear Evolution of Compressible Görtler Vortices Subject to Free-Stream Vortical Disturbances

Authors: Samuele Viaro, Pierre Ricco

Abstract:

Görtler instabilities generate in boundary layers from an unbalance between pressure and centrifugal forces caused by concave surfaces. Their spatial streamwise evolution influences transition to turbulence. It is therefore important to understand even the early stages, where perturbations, still small, grow linearly and could be controlled more easily. This work presents a rigorous theoretical framework for compressible flows using the linearized unsteady boundary region equations, where only the streamwise pressure gradient and streamwise diffusion terms are neglected from the full governing equations of fluid motion. Boundary and initial conditions are imposed through an asymptotic analysis in order to account for the interaction of the boundary layer with free-stream turbulence. The resulting parabolic system is discretized with a second-order finite difference scheme. Realistic flow parameters are chosen from wind tunnel studies performed at supersonic and subsonic conditions. The Mach number ranges from 0.5 to 8, with two different radii of curvature, 5 m and 10 m, frequencies up to 2000 Hz, and vortex spanwise wavelengths from 5 mm to 20 mm. The evolution of the perturbation flow is shown through velocity, temperature, and pressure profiles relatively close to the leading edge, where non-linear effects can still be neglected, and through the growth rate. Results show that a global stabilizing effect exists with increasing Mach number, frequency, spanwise wavenumber, and radius of curvature. In particular, at high Mach numbers curvature effects are less pronounced and thermal streaks become stronger than velocity streaks. This increase of temperature perturbations saturates at approximately Mach 4, and is limited to the early stage of growth, near the leading edge. In general, Görtler vortices evolve closer to the surface with respect to the flat-plate scenario, but their location shifts toward the edge of the boundary layer as the Mach number increases. In fact, a jet-like behavior appears for steady vortices having small spanwise wavelengths (less than 10 mm) at Mach 8, creating a region of unperturbed flow close to the wall. A similar response is also found at the highest frequency considered for a Mach 3 flow. Larger vortices are found to have a higher growth rate but are less influenced by the Mach number. An eigenvalue approach is also employed to study the amplification of the perturbations sufficiently far downstream from the leading edge. These eigenvalue results are compared with the ones obtained through the initial value approach with inhomogeneous free-stream boundary conditions. All of the parameters studied here have a significant influence on the evolution of the instabilities for the Görtler problem, which is indeed highly dependent on initial conditions.

Keywords: compressible boundary layers, Görtler instabilities, receptivity, turbulence transition

Procedia PDF Downloads 250
7136 Forecasting Optimal Production Program Using Profitability Optimization by Genetic Algorithm and Neural Network

Authors: Galal H. Senussi, Muamar Benisa, Sanja Vasin

Abstract:

In today's business environment, one of the most important issues for any enterprise is cost minimization and profit maximization. A second issue is how to develop a strong and capable model that is able to give the desired forecasts of these two quantities. Many studies deal with these issues using different methods. In this study, we developed a model for multi-criteria production program optimization integrated with an Artificial Neural Network. Predicting the production cost and profit per unit of a product, dealing with two opposing functions at the same time, can be extremely difficult, especially if there is a great amount of conflicting information about production parameters. Feed-forward neural networks are suitable for generalization, which means that the network will generate a proper output in response to input it has never seen. Therefore, with a small set of examples, the network will adjust its weight coefficients so that a given input generates a proper output. This essential characteristic is one of the most important abilities enabling this network to be used in a variety of problems ranging from engineering to finance. As our results show, feed-forward neural networks have a strong ability and capability to map inputs into desired outputs.
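
A minimal sketch of the feed-forward network idea described above is given below: one hidden layer trained by plain gradient descent to map production parameters to a unit-profit estimate. The layer sizes, toy data, and learning rate are assumptions; the paper's actual model and its coupling with the genetic algorithm are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=(64, 3))                  # production parameters (scaled)
y = 2 * X[:, :1] - X[:, 1:2] + 0.5 * X[:, 2:3] # toy target: unit profit

W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.1

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    out = h @ W2 + b2                 # linear output
    err = out - y
    # Backpropagate the mean-squared-error gradient.
    dW2 = h.T @ err / len(X);  db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X);   db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean()))
```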

Keywords: project profitability, multi-objective optimization, genetic algorithm, Pareto set, neural networks

Procedia PDF Downloads 441
7135 An Open-Source Guidance System for an Autonomous Planter Robot in Precision Agriculture

Authors: Nardjes Hamini, Mohamed Bachir Yagoubi

Abstract:

Precision agriculture has revolutionized farming by enabling farmers to monitor their crops remotely in real-time. By utilizing technologies such as sensors, farmers can detect the state of growth, hydration levels, and nutritional status and even identify diseases affecting their crops. With this information, farmers can make informed decisions regarding irrigation, fertilization, and pesticide application. Automated agricultural tasks, such as plowing, seeding, planting, and harvesting, are carried out by autonomous robots and have helped reduce costs and increase production. Despite the advantages of precision agriculture, its high cost makes it inaccessible to small and medium-sized farms. To address this issue, this paper presents an open-source guidance system for an autonomous planter robot. The system is composed of a Raspberry Pi-type nanocomputer equipped with Wi-Fi, a GPS module, a gyroscope, and a power supply module. The accompanying application allows users to enter and calibrate maps with at least four coordinates, enabling the localized contour of the parcel to be captured. The application comprises several modules, such as the mission entry module, which traces the planting trajectory and points, and the action plan entry module, which creates an ordered list of pre-established tasks such as loading, following the plan, returning to the garage, and entering sleep mode. A remote control module enables users to control the robot manually, visualize its location on the map, and use a real-time camera. Wi-Fi coverage is provided by an outdoor access point, covering a 2km circle. This open-source system offers a low-cost alternative for small and medium-sized farms, enabling them to benefit from the advantages of precision agriculture.
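
One basic check such a guidance system needs is whether the robot's GPS fix lies inside the parcel contour entered during map calibration. The sketch below uses a standard ray-casting point-in-polygon test; the coordinates and function names are hypothetical, not taken from the platform's code.

```python
# Ray-casting point-in-polygon test for a parcel geofence.
def inside(point, polygon):
    x, y = point
    result = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle each time a horizontal ray from the point crosses an edge.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            result = not result
        j = i
    return result

# Parcel entered as at least four (lat, lon) corner coordinates (hypothetical).
parcel = [(35.601, 2.881), (35.601, 2.885), (35.598, 2.885), (35.598, 2.881)]
gps_fix = (35.5995, 2.883)
print("robot inside parcel:", inside(gps_fix, parcel))
```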

Keywords: autonomous robot, guidance system, low-cost, medium farms, open-source system, planter robot, precision agriculture, real-time monitoring, remote control, small farms

Procedia PDF Downloads 102
7134 Multicenter Evaluation of the ACCESS Anti-HCV Assay on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis C Virus Antibody

Authors: Dan W. Rhodes, Juliane Hey, Magali Karagueuzian, Florianne Martinez, Yael Sandowski, Vanessa Roulet, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. (BEC) has recently developed a fully automated second-generation anti-HCV test on a new immunoassay platform. The objective of this multicenter study conducted in Europe was to evaluate the performance of the ACCESS anti-HCV assay on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer as an aid in the diagnosis of HCV (Hepatitis C Virus) infection and as a screening test for blood and plasma donors. Methods: The clinical specificity of the ACCESS anti-HCV assay was determined using HCV antibody-negative samples from blood donors and hospitalized patients. Sample antibody status was determined by a CE-marked anti-HCV assay (Abbott ARCHITECTTM anti-HCV assay or Abbott PRISM HCV assay) with an additional confirmation method (Immunoblot testing with INNO-LIATM HCV Score - Fujirebio), if necessary, according to pre-determined testing algorithms. The clinical sensitivity was determined using known HCV antibody-positive samples, identified positive by Immunoblot testing with INNO-LIATM HCV Score - Fujirebio. HCV RNA PCR or genotyping was available on all Immunoblot positive samples for further characterization. The false initial reactive rate was determined on fresh samples from blood donors and hospitalized patients. Thirty (30) commercially available seroconversion panels were tested to assess the sensitivity for early detection of HCV infection. The study was conducted from November 2019 to March 2022. Three (3) external sites and one (1) internal site participated. Results: Clinical specificity (95% CI) was 99.7% (99.6 – 99.8%) on 5852 blood donors and 99.0% (98.4 – 99.4%) on 1527 hospitalized patient samples. There were 15 discrepant samples (positive on ACCESS anti-HCV assay and negative on both ARCHITECT and Immunoblot) observed with hospitalized patient samples, and of note, additional HCV RNA PCR results showed five (5) samples had positive HCV RNA PCR results despite the absence of HCV antibody detection by ARCHITECT and Immunoblot, suggesting a better sensitivity of the ACCESS anti-HCV assay with these five samples compared to the ARCHITECT and Immunoblot anti-HCV assays. Clinical sensitivity (95% CI) on 510 well-characterized, known HCV antibody-positive samples was 100.0% (99.3 – 100.0%), including 353 samples with known HCV genotypes (1 to 6). The overall false initial reactive rate (95% CI) on 6630 patient samples was 0.02% (0.00 – 0.09%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS anti-HCV assay had equivalent sensitivity performances, with an average bleed difference since the first reactive bleed below one (1), compared to the ARCHITECTTM anti-HCV assay. Conclusion: The newly developed ACCESS anti-HCV assay from BEC for use on the DxI 9000 ACCESS Immunoassay Analyzer demonstrated high clinical sensitivity and specificity, equivalent to currently marketed anti-HCV assays, as well as a low false initial reactive rate.
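
The specificity figures above are reported with 95% confidence intervals. The sketch below shows one common way such an interval can be computed (the Wilson score interval); the true-negative count is an assumption chosen to give roughly 99.7% on 5852 donors, since the abstract does not report raw counts, and the study's actual CI method is not stated.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

true_negatives, n = 5834, 5852          # hypothetical counts
lo, hi = wilson_ci(true_negatives, n)
print(f"specificity {true_negatives / n:.1%}, 95% CI {lo:.1%} - {hi:.1%}")
```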

Keywords: DxI 9000 ACCESS Immunoassay Analyzer, HCV, HCV antibody, Hepatitis C virus, immunoassay

Procedia PDF Downloads 95
7133 Experimental Characterization of Composite Material with Non Contacting Methods

Authors: Nikolaos Papadakis, Constantinos Condaxakis, Konstantinos Savvakis

Abstract:

The aim of this paper is to determine the elastic properties (elastic modulus and Poisson ratio) of a composite material based on non-contacting imaging methods. More specifically, the significantly reduced cost of digital cameras has made reliable, low-cost strain measurement possible. The open-source platform Ncorr, which utilizes the method of digital image correlation (DIC), is used in this paper. Measuring strain with digital image correlation involves preparing a random speckle pattern on the surface of the gauge area, acquiring images, and post-processing the image correlation to obtain the displacement and strain fields on the surface under study. Technical issues relating to the quality of the results obtained are discussed. [0]₈ fabric glass/epoxy composite specimens were prepared and tested at different orientations: 0°, 30°, 45°, 60°, and 90°. Each test was recorded with the camera at a constant frame rate and constant lighting conditions. The recorded images were processed using the image processing software. The parameters of the tests are reported. The strain map output obtained through strain measurement using Ncorr is validated by a) comparing the elastic properties with expected values from classical laminate theory and b) finite element analysis.

Keywords: composites, Ncorr, strain map, videoextensometry

Procedia PDF Downloads 139
7132 Preparation of Indium Tin Oxide Nanoparticle-Modified 3-Aminopropyltrimethoxysilane-Functionalized Indium Tin Oxide Electrode for Electrochemical Sulfide Detection

Authors: Md. Abdul Aziz

Abstract:

Sulfide ion is water soluble, highly corrosive, toxic, and harmful to human beings. As a result, knowing the exact concentration of sulfide in water is very important. However, existing detection and quantification methods have several shortcomings, such as high cost, low sensitivity, and bulky instrumentation, so the development of a novel sulfide sensor is relevant. Electrochemical methods have gained enormous popularity owing to vast improvements in technique and instrumentation, portability, low cost, rapid analysis, and simplicity of design. Successful field application of electrochemical devices still requires substantial improvement, which depends on the physical, chemical, and electrochemical properties of the working electrode. Working electrodes made of bulk gold (Au) and platinum (Pt) are quite common, being very robust and endowed with good electrocatalytic properties; their high cost and susceptibility to electrode poisoning, however, have so far hindered their practical application in many industries. To overcome these obstacles, we developed a sulfide sensor based on an indium tin oxide nanoparticle (ITONP)-modified ITO electrode. Various methods of preparing ITONP-modified ITO were tested; drop-drying of aqueous ITONPs on aminopropyltrimethoxysilane-functionalized ITO (APTMS/ITO) was found to be the best method on the basis of voltammetric analysis of the sulfide ion. ITONP-modified APTMS/ITO (ITONP/APTMS/ITO) yielded much better electrocatalytic properties toward sulfide electro-oxidation than bare or APTMS/ITO electrodes. The ITONPs and the ITONP-modified ITO were characterized by transmission electron microscopy and field emission scanning electron microscopy, respectively. After optimization of the inert electrolyte and pH, the ITONP/APTMS/ITO detector achieved amperometrically and chronocoulometrically determined limits of detection for sulfide in aqueous solution of 3.0 µM and 0.90 µM, respectively. ITONP/APTMS/ITO electrodes displayed reproducible performance, were highly stable, and were not susceptible to interference by common contaminants. Thus, the developed electrode can be considered a promising tool for sensing sulfide.
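
For readers unfamiliar with how detection limits of this kind are typically derived, the hedged sketch below estimates an amperometric limit of detection as 3·σ(blank)/slope from a linear calibration fit. The calibration and blank values are synthetic placeholders, not data from this work.

```python
# Sketch: estimating a limit of detection (LOD = 3 * sigma_blank / slope) from an
# amperometric calibration curve. All numbers are synthetic placeholders assumed
# for illustration, not measurements from the reported sensor.
import numpy as np

conc_uM = np.array([5, 10, 20, 40, 80], dtype=float)       # sulfide standards, µM (assumed)
current_uA = np.array([0.9, 1.8, 3.7, 7.4, 14.9])           # measured current, µA (assumed)
blank_replicates_uA = np.array([0.04, 0.06, 0.05, 0.07, 0.05, 0.06])

slope, intercept = np.polyfit(conc_uM, current_uA, 1)        # linear calibration fit
sigma_blank = blank_replicates_uA.std(ddof=1)                # noise of the blank signal
lod_uM = 3.0 * sigma_blank / slope

print(f"slope = {slope:.3f} uA/uM, LOD ~ {lod_uM:.2f} uM")
```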

Keywords: amperometry, chronocoulometry, electrocatalytic properties, ITO-nanoparticle-modified ITO, sulfide sensor

Procedia PDF Downloads 125
7131 Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance

Authors: Soheila Sadeghi

Abstract:

Infrastructure projects often encounter contract variations that can deviate significantly from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research quantitatively assesses the impact of contract variations on project performance through an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project, comprising tender budget, contract quantities, rates, claims, and revenue data. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology first establishes a baseline for the project's planned cost and scope from the tender budget and contract quantities. Each variation is then analyzed in detail, comparing actual quantities and rates against the tender estimates to determine its impact on project cost and schedule, while the claims data is used to track the progress of work and identify deviations from the planned schedule. The statistical analysis is carried out in R: time series analysis is applied to the claims data to track progress and detect departures from the planned schedule, and regression analysis is used to investigate the relationship between variations and project performance indicators such as cost overruns and schedule delays. The findings highlight the significance of effective variation management in construction projects. The analysis reveals that variations can have a substantial impact on project cost, schedule, and financial outcomes, and identifies the variations that most influenced the Regional Airport Car Park project's performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project's revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact, and suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents, which restricts the depth to which the root causes and full extent of the variations' impact can be investigated. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
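
The study performed its regressions in R; the Python sketch below merely shows the form of such a model, regressing cost overrun on the approved value of each contract variation. All figures are synthetic placeholders rather than values from the Regional Airport Car Park dataset.

```python
# Sketch of a variation-impact regression: ordinary least squares of cost overrun
# on approved variation value. The data are invented placeholders for illustration;
# the original analysis was done in R on the project dataset.
import numpy as np

# One row per variation: approved variation value ($k) and resulting cost overrun ($k).
variation_value = np.array([12, 45, 80, 150, 30, 95, 60], dtype=float)
cost_overrun = np.array([10, 52, 75, 170, 25, 110, 58], dtype=float)

# OLS: overrun = beta0 + beta1 * variation_value
X = np.column_stack([np.ones_like(variation_value), variation_value])
beta, *_ = np.linalg.lstsq(X, cost_overrun, rcond=None)
residuals = cost_overrun - X @ beta
r2 = 1 - residuals.var() / cost_overrun.var()

print(f"intercept = {beta[0]:.1f}, slope = {beta[1]:.2f}, R^2 = {r2:.3f}")
```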

Keywords: contract variation impact, quantitative analysis, project performance, claims analysis

Procedia PDF Downloads 34
7130 Design of Semi-Autonomous Street Cleaning Vehicle

Authors: Khouloud Safa Azoud, Süleyman Baştürk

Abstract:

In the pursuit of cleaner and more sustainable urban environments, advanced technologies play a critical role in evolving sanitation systems. This paper presents two distinct advancements in automated cleaning machines designed to improve urban sanitation. The first is a semi-automatic road surface cleaning machine that combines human labor with solar energy to enhance environmental sustainability and adaptability, especially in regions with limited access to electricity; by reducing carbon emissions and increasing operational efficiency, this approach offers significant potential for improving urban sanitation. The second is a multifunctional semi-automatic street cleaning machine equipped with a camera, Arduino-based control, and GPS for autonomous operation, aimed at addressing cost barriers in developing countries. Prioritizing low energy consumption and cost-effectiveness, this machine provides versatile cleaning solutions adaptable to various environmental conditions. By integrating solar energy with autonomous operation and careful design, these developments represent substantial progress toward sustainable urban sanitation, particularly in developing regions.

Keywords: automated cleaning machines, solar energy integration, operational efficiency, urban sanitation systems

Procedia PDF Downloads 21
7129 Comparison of Patient Satisfaction and Observer Rating of Outpatient Care among Public Hospitals in Shanghai

Authors: Tian Yi Du, Guan Rong Fan, Dong Dong Zou, Di Xue

Abstract:

Background: Patient satisfaction surveys are becoming increasingly important for hospitals and other providers seeking more reimbursement and/or governmental subsidies. However, comparing the results of patient satisfaction surveys across medical institutions raises some concerns. The primary objectives of this study were to evaluate patient satisfaction in tertiary hospitals of Shanghai and to compare the satisfaction ratings of physician services between patients and observers. Methods: Two hundred outpatients were randomly selected for a patient satisfaction survey in each of 28 public tertiary hospitals of Shanghai. Four or five volunteers were selected to observe the practice of 5 physicians in each of the above hospitals and to rate the observed physicians' practice. The outpatients whose physician encounters the volunteers observed also completed the satisfaction questionnaires. The rating scale for both the outpatient survey and the volunteers' observation ranged from 1 (very dissatisfied) to 6 (very satisfied); outpatients and volunteers were considered satisfied with a service when the rating was 5 or greater. The validity and reliability of the measure were assessed. Multivariate regressions were run for each of the 4 dimensions of patient satisfaction and for overall satisfaction. Paired t tests were applied to analyze the rating agreement on physician services between outpatients and volunteers. Results: Overall, 90% of surveyed outpatients were satisfied with outpatient care in the tertiary public hospitals of Shanghai. The three lowest satisfaction rates were for the items 'Restrooms were sanitary and not crowded' (81%), 'It was convenient for the patient to pay medical bills' (82%), and 'Medical cost in the hospital was reasonable' (84%). After adjusting for patient characteristics, patient satisfaction in general hospitals was higher than in specialty hospitals. In addition, after controlling for patient characteristics and the number of hospital visits, hospitals with higher outpatient cost per visit had lower patient satisfaction. Paired t tests showed that ratings on 6 of the 14 items in the physician-services dimension differed significantly between outpatients and observers; on 5 of these, observers rated lower than outpatients. Conclusions: Hospital managers and physicians should use both patient satisfaction and observers' evaluations to identify room for improvement in areas such as social skills, cost control, and medical ethics.
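
The paired comparison of patient and observer ratings can be illustrated with the short sketch below, which runs a paired t-test on one hypothetical physician-service item. The scores are invented for illustration and are not taken from the Shanghai survey data.

```python
# Sketch: paired t-test comparing patient vs. observer ratings (1-6 scale) for a
# single physician-service item. The ratings are synthetic placeholders, not data
# from the study.
from scipy import stats
import numpy as np

patient_rating = np.array([6, 5, 6, 5, 6, 4, 6, 5, 6, 6])    # assumed example scores
observer_rating = np.array([5, 5, 5, 4, 5, 4, 5, 5, 5, 5])   # same encounters, observer view

t_stat, p_value = stats.ttest_rel(patient_rating, observer_rating)
print(f"mean difference = {np.mean(patient_rating - observer_rating):.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```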

Keywords: patient satisfaction, observation, quality, hospital

Procedia PDF Downloads 318