Search results for: budget distribution
4096 Cost Reduction Techniques for Provision of Shelter to Homeless
Authors: Mukul Anand
Abstract:
Quality-oriented, affordable shelter for all has always been a key issue in the housing sector of our country. Homelessness is the most acute form of housing need. It is a paradox that, in spite of innumerable government-initiated programmes for affordable housing, certain sections of society are still devoid of shelter. About nineteen million (18.78 million) households grappled with housing shortage in urban India in 2012. In the Indian scenario, there is a major mismatch between the people for whom houses are being built and those who need them. The prime challenge faced by public authorities in facilitating quality housing for all is the high cost of construction. The present paper examines executable techniques for diluting the cost factor in housing the homeless. The key levers for delivery of affordable housing stock, such as capacity building, resource optimization, innovative low-cost building materials and indigenous skeleton housing systems, will also be incorporated in developing these techniques. Time performance, which is an important dimension of the above levers, will also be explored so as to increase the effectiveness of low-cost housing. Along with this, best practices will be taken up as case studies in which both conventional housing techniques and innovative low-cost housing techniques are cited. Transportation accounts for approximately 30% of the total construction budget; thus, the use of alternative local solutions depending upon the region will be covered so as to highlight major components of low-cost housing. Government agencies lag in compiling baseline information on the use of innovative low-cost methods and techniques of resource optimization. Therefore, the paper is an attempt to bring to light simpler solutions for achieving low-cost housing. Keywords: construction, cost, housing, optimization, shelter
Procedia PDF Downloads 445
4095 Measurement of Project Success in Construction Using Performance Indices
Authors: Annette Joseph
Abstract:
Background: The construction industry is dynamic in nature owing to the increasing uncertainties in technology, budgets, and development processes making projects more complex. Thus, predicting project performance and chances of its likely success has become difficult. The goal of all parties involved in construction projects is to successfully complete it on schedule, within planned budget and with the highest quality and in the safest manner. However, the concept of project success has remained ambiguously defined in the mind of the construction professionals. Purpose: This paper aims to study the analysis of a project in terms of its performance and measure the success. Methodology: The parameters for evaluating project success and the indices to measure success/performance of a project are identified through literature study. Through questionnaire surveys aimed at the stakeholders in the projects, data is collected from two live case studies (an ongoing and completed project) on the overall performance in terms of its success/failure. Finally, with the help of SPSS tool, the data collected from the surveys are analyzed and applied on the selected performance indices. Findings: The score calculated by using the indices and models helps in assessing the overall performance of the project and interpreting it to find out whether the project will be a success or failure. This study acts as a reference for firms to carry out performance evaluation and success measurement on a regular basis helping projects to identify the areas which are performing well and those that require improvement. Originality & Value: The study signifies that by measuring project performance; a project’s deviation towards success/failure can be assessed thus helping in suggesting early remedial measures to bring it on track ensuring that a project will be completed successfully.Keywords: project, performance, indices, success
Procedia PDF Downloads 191
4094 Configuration as a Service in Multi-Tenant Enterprise Resource Planning System
Authors: Mona Misfer Alshardan, Djamal Ziani
Abstract:
Enterprise resource planning (ERP) systems are organizations' tickets to the global market. With the implementation of ERP, organizations can manage and coordinate all functions, processes, resources and data from different departments with a single software system. However, many organizations consider the cost of traditional ERP to be expensive and look for affordable alternative solutions within their budget. One of these alternatives is providing ERP over a software as a service (SaaS) model, which could be considered a cost-effective solution compared to the traditional ERP system. A key feature of any SaaS system is the multi-tenancy architecture, where multiple customers (tenants) share the system software. However, different organizations have different requirements. Thus, SaaS developers accommodate each tenant's unique requirements by allowing tenant-level customization or configuration. While customization requires source-code changes and, in most cases, programming experience, the configuration process allows users to change many features within a predefined scope in an easy and controlled manner. The literature provides many techniques to accomplish the configuration process in different SaaS systems. However, the nature and complexity of SaaS ERP require more attention to the details of the configuration process, which is only briefly described in previous research. Thus, this research builds on established knowledge of configuration in SaaS to define specifically the configuration borders in SaaS ERP and to design a configuration service that considers the different configuration aspects. The proposed architecture ensures the ease of the configuration process by using wizard technology, while privacy and performance are guaranteed by adopting a database isolation technique. Keywords: configuration, software as a service, multi-tenancy, ERP
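As a rough illustration of the tenant-level configuration idea described above, the sketch below shows a minimal, hypothetical configuration service in Python: each tenant's settings are kept in its own SQLite database (the database isolation technique mentioned in the abstract), and only keys inside a predefined scope can be changed. The class and key names are invented for illustration and are not from the paper.

```python
import sqlite3

# Hypothetical sketch: tenant-level configuration with database isolation.
# Each tenant gets its own SQLite file; only keys in ALLOWED_KEYS may be changed,
# mimicking configuration within a predefined scope (no source-code changes).
ALLOWED_KEYS = {"currency", "fiscal_year_start", "invoice_prefix", "language"}

class TenantConfigService:
    def __init__(self, db_path_template="tenant_{tenant_id}.db"):
        self.db_path_template = db_path_template

    def _connect(self, tenant_id):
        # Database-per-tenant isolation: each tenant's settings live in a separate file.
        conn = sqlite3.connect(self.db_path_template.format(tenant_id=tenant_id))
        conn.execute("CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT)")
        return conn

    def set_option(self, tenant_id, key, value):
        if key not in ALLOWED_KEYS:
            raise ValueError(f"'{key}' is outside the configurable scope")
        with self._connect(tenant_id) as conn:
            conn.execute("INSERT OR REPLACE INTO config (key, value) VALUES (?, ?)", (key, value))

    def get_option(self, tenant_id, key, default=None):
        with self._connect(tenant_id) as conn:
            row = conn.execute("SELECT value FROM config WHERE key = ?", (key,)).fetchone()
        return row[0] if row else default

if __name__ == "__main__":
    svc = TenantConfigService()
    svc.set_option("tenant_a", "currency", "SAR")
    svc.set_option("tenant_b", "currency", "USD")
    print(svc.get_option("tenant_a", "currency"))  # SAR - isolated from tenant_b
```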
Procedia PDF Downloads 393
4093 An Assessment of Factors Affecting the Cost and Time Performance of Subcontractors
Authors: Adedayo Jeremiah Adeyekun, Samuel Oluwagbemiga Ishola
Abstract:
This paper assesses the factors influencing the cost and time performance of subcontractors and the need for their effective performance at project sites. The factors are grouped into those associated with the project or the organization and those that bear directly on subcontractor performance. They include management-level leadership, time required to complete the project, profit, staff capability/expertise, reputation, payment method, organization history, project procurement strategy, security, bidding technique, insurance, bonding, and the relationship with the main contractors. The factors influencing the management of subcontractors in building development projects include performance on significant past projects, standard of workmanship, compliance with guidelines, regular payment of labourers, adherence to programme, regularity and effectiveness of communication with the main contractor, and adherence to subcontract requirements. Other factors comprise adherence to statutory environmental regulations, the number of experienced site administrative staff, inspection and maintenance of a good workplace, the number of artisans and workers, the quality of as-built and shop drawings, and the ability to carry out the required quantity of work. The study also aims to suggest a way forward to improve the performance of subcontractors, whose low performance is a common reason for exceeding budgets at project sites. To carry out this study, a questionnaire was drafted to elicit information on the causes of low subcontractor performance and its implications for cost. Keywords: performance, contractor, subcontractors, construction
Procedia PDF Downloads 76
4092 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality
Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo
Abstract:
Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, this work reports a model eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for Civil Engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was made using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making). From the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the resistance of the student to move from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population from the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after an intervention, allows the interventions and the MEA to be modified so that students can identify erroneous solutions themselves while carrying out the MEA. The MEA also proved to be democratic, since several students who had shown little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, for example in plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the Civil Engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution. Keywords: linear model, models and modeling, probability, randomness, sample
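To make the binomial decision problem concrete, here is a minimal Python sketch of the kind of sample-based reasoning the MEA targets (the course itself used RStudio; the acceptance scenario and all numbers below are hypothetical):

```python
from scipy.stats import binom

# Illustrative sketch only (hypothetical data, not the activity's actual problem).
# Decision problem: a batch of specimens is acceptable if the defect rate is at most p0.
# Given a sample of n items with k defective, how surprising is the sample if p0 holds?
p0 = 0.10          # hypothesized acceptable defect proportion
n, k = 30, 7       # sample size and observed defectives

# Probability of seeing k or more defectives under the binomial(n, p0) model
p_at_least_k = binom.sf(k - 1, n, p0)
print(f"P(X >= {k} | n={n}, p={p0}) = {p_at_least_k:.4f}")

# Exploring how sample size changes the picture, as the students did in RStudio
for n_try in (10, 30, 100):
    expected = n_try * p0
    tail = binom.sf(int(expected) + 2, n_try, p0)
    print(f"n={n_try:>3}: E[X]={expected:.1f}, P(X >= E[X]+3) = {tail:.4f}")
```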
Procedia PDF Downloads 118
4091 A Critical Review of Assessments of Geological CO2 Storage Resources in Pennsylvania and the Surrounding Region
Authors: Levent Taylan Ozgur Yildirim, Qihao Qian, John Yilin Wang
Abstract:
A critical review of assessments of geological carbon dioxide (CO2) storage resources in Pennsylvania and the surrounding region was completed with a focus on the studies of Midwest Regional Carbon Sequestration Partnership (MRCSP), United States Department of Energy (US-DOE), and United States Geological Survey (USGS). Pennsylvania Geological Survey participated in the MRCSP Phase I research to characterize potential storage formations in Pennsylvania. The MRCSP’s volumetric method estimated ~89 gigatonnes (Gt) of total CO2 storage resources in deep saline formations, depleted oil and gas reservoirs, coals, and shales in Pennsylvania. Meanwhile, the US-DOE calculated storage efficiency factors using log-odds normal distribution and Monte Carlo sampling, revealing contingent storage resources of ~18 Gt to ~20 Gt in deep saline formations, depleted oil and gas reservoirs, and coals in Pennsylvania. Additionally, the USGS employed Beta-PERT distribution and Monte Carlo sampling to determine buoyant and residual storage efficiency factors, resulting in 20 Gt of contingent storage resources across four storage assessment units in Appalachian Basin. However, few studies have explored CO2 storage resources in shales in the region, yielding inconclusive findings. This article provides a critical and most up to date review and analysis of geological CO2 storage resources in Pennsylvania and the region.Keywords: carbon capture and storage, geological CO2 storage, pennsylvania, appalachian basin
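As an illustration of the probabilistic (USGS-style) approach mentioned above, the following Python sketch samples a storage efficiency factor from a Beta-PERT distribution and propagates it through a simplified volumetric estimate via Monte Carlo. The equations are reduced to their simplest form, and every input value is hypothetical rather than taken from the reviewed assessments.

```python
import numpy as np

rng = np.random.default_rng(42)

def beta_pert(minimum, mode, maximum, size, rng):
    """Sample a Beta-PERT distribution from its min, most-likely, and max values."""
    alpha = 1.0 + 4.0 * (mode - minimum) / (maximum - minimum)
    beta = 1.0 + 4.0 * (maximum - mode) / (maximum - minimum)
    return minimum + (maximum - minimum) * rng.beta(alpha, beta, size)

# Hypothetical inputs for a single saline storage assessment unit
n = 100_000
area_m2 = 5.0e9                              # assessed area
thickness_m = rng.uniform(50, 120, n)        # net porous interval
porosity = rng.uniform(0.08, 0.18, n)
rho_co2 = 700.0                              # kg/m3, representative reservoir-condition density
efficiency = beta_pert(0.01, 0.025, 0.06, n, rng)   # storage efficiency factor

mass_kg = area_m2 * thickness_m * porosity * efficiency * rho_co2
mass_gt = mass_kg / 1e12                     # 1 Gt = 1e12 kg

p10, p50, p90 = np.percentile(mass_gt, [10, 50, 90])
print(f"Storage resource: P10={p10:.1f} Gt, P50={p50:.1f} Gt, P90={p90:.1f} Gt")
```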
Procedia PDF Downloads 52
4090 Comparative Analysis of Hybrid Dynamic Stabilization and Fusion for Degenerative Disease of the Lumbosacral Spine: Finite Element Analysis
Authors: Mohamed Bendoukha, Mustapha Mosbah
Abstract:
Radiographically apparent, asymptomatic adjacent segment disease (ASD) is assumed to be common after lumbar fusion, although it does not always correlate with functional outcomes; however, compensatory increased motion and stresses at the level adjacent to a fusion are well known to be associated with ASD. Newly developed hybrid stabilization devices are typically substituted for the most superior level of a fusion in an attempt to reduce the number of fused levels and the likelihood of the degeneration process at the adjacent levels during fusion with pedicle screws. Nevertheless, their biomechanical efficiency remains unknown, and complications associated with failure of the constructs, such as screw loosening and toggling, should be elucidated. In the current study, a finite element (FE) analysis was performed using a validated L2/S1 model subjected to a moment of 7.5 Nm and a follower load of 400 N to assess the biomechanical behavior of hybrid constructs based on dynamic topping-off and semi-rigid fusion. The residual range of motion (ROM), the stress distribution at the fused and adjacent levels, and the stress distribution at the disc and the cage-endplate interface with respect to changes in bone quality were investigated. The hybrid instrumentation was associated with a reduction in compressive stresses in the adjacent-level disc compared to the fusion construct and showed a substantially higher axial force in the implant, while the fusion instrumentation increased the motion in both flexion and extension. Keywords: intervertebral disc, lumbar spine, degenerative nuclesion, L4-L5, range of motion, finite element model, hyperelasticity
Procedia PDF Downloads 185
4089 Circulating Oxidized LDL and Insulin Resistance among Obese School Students
Authors: Nayera E. Hassan, Sahar A. El-Masry, Mones M. Abu Shady, Rokia A. El Banna, Muhammad Al-Tohamy, Mehrevan M. Abd El-Moniem, Mona Anwar
Abstract:
Circulating oxidized LDL (ox-LDL) is associated with obesity, insulin resistance (HOMA), metabolic syndrome, and cardiovascular disease in adults. Little is known about relations in children. Aim: To assess association of ox-LDL with fat distribution and insulin resistance in a group of obese Egyptian children. Methods: Study is cross-sectional consisting of 68 obese children, with a mean age of 9.96 ± 1.32. Each underwent a complete physical examination; blood pressure (SBP, DBP) and anthropometric measurements (weight, height, BMI; waist, hip circumferences, waist/hip ratio), biochemical tests of fasting blood glucose (FBS), insulin levels; lipid profile (TC, LDL,HDL, TG) and ox-LDL; calculated HOMA. Sample was classified according to waist/hip ratio into: group I with and group II without central obesity. Results: ox-LDL showed significant positive correlation with LDL and TC in all groups of obesity. After adjustment for age and sex, significant positive correlation was detected between ox-LDL with SBP, DBP, TC, LDL, insulin, and HOMA in group II and with TC and FBS in group I. Insignificant association was detected between ox-LDL and other anthropometric parameters including BMI in any group of obese children (p > 0.05). Conclusions: ox-LDL, as a marker of oxidative stress is not correlated with BMI among all studied obese children (aged 6-12 years). Increased oxidative stress has causal effects on insulin resistance in obese children without central obesity and on fasting blood sugar in those with central obesity. These findings emphasize the importance of obesity during childhood and suggest that the metabolic complications of obesity and body fat distribution are detectable early in life.Keywords: ox-LDL, obesity, insulin resistance, children
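For reference, the HOMA insulin-resistance score used in the abstract is computed from fasting glucose and insulin with the standard formula HOMA-IR = glucose (mmol/L) × insulin (µU/mL) / 22.5; a one-line Python sketch with illustrative values (not the study's data) is given below.

```python
# HOMA-IR from fasting measurements (standard formula; values are illustrative).
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Example child: fasting glucose 5.0 mmol/L, fasting insulin 12 uU/mL
print(round(homa_ir(5.0, 12.0), 2))  # ~2.67
```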
Procedia PDF Downloads 358
4088 Detection and Distribution Pattern of Prevalent Genotypes of Hepatitis C in a Tertiary Care Hospital of Western India
Authors: Upasana Bhumbla
Abstract:
Background: Hepatitis C virus is a major cause of chronic hepatitis, which can further lead to cirrhosis of the liver and hepatocellular carcinoma. Worldwide the burden of Hepatitis C infection has become a serious threat to the human race. Hepatitis C virus (HCV) has population-specific genotypes and provides valuable epidemiological and therapeutic information. Genotyping and assessment of viral load in HCV patients are important for planning the therapeutic strategies. The aim of the study is to study the changing trends of prevalence and genotypic distribution of hepatitis C virus in a tertiary care hospital in Western India. Methods: It is a retrospective study; blood samples were collected and tested for anti HCV antibodies by ELISA in Dept. of Microbiology. In seropositive Hepatitis C patients, quantification of HCV-RNA was done by real-time PCR and in HCV-RNA positive samples, genotyping was conducted. Results: A total of 114 patients who were seropositive for Anti HCV were recruited in the study, out of which 79 (69.29%) were HCV-RNA positive. Out of these positive samples, 54 were further subjected to genotype determination using real-time PCR. Genotype was not detected in 24 samples due to low viral load; 30 samples were positive for genotype. Conclusion: Knowledge of genotype is crucial for the management of HCV infection and prediction of prognosis. Patients infected with HCV genotype 1 and 4 will have to receive Interferon and Ribavirin for 48 weeks. Patients with these genotypes show a poor sustained viral response when tested 24 weeks after completion of therapy. On the contrary, patients infected with HCV genotype 2 and 3 are reported to have a better response to therapy.Keywords: hepatocellular, genotype, ribavarin, seropositive
Procedia PDF Downloads 127
4087 Two-Dimensional CFD Simulation of the Behaviors of Ferromagnetic Nanoparticles in Channel
Authors: Farhad Aalizadeh, Ali Moosavi
Abstract:
This paper presents a two-dimensional computational fluid dynamics (CFD) simulation of steady flow with particle tracking. The purpose is to study the effect of an applied magnetic field on the velocity distribution of magnetic nanoparticles. It is shown that the permeability of the particles determines the effect of the magnetic field on their deposition, and that particle deposition is inversely proportional to the Reynolds number. Using MHD and its properties, it is possible to control the flow velocity, remove fouling from the walls and return the system to its original form. We consider a two-dimensional channel geometry and solve for the resulting spatial distribution of particles. According to the results obtained, when magnetic fields are applied perpendicular to the flow alone, the local particle velocity decreases due to the direct effect of the magnetic field, returning the system to its original form. In the proposed method, the ferromagnetic particles are first coated with a gel-like chemical composition, to avoid mixing with blood, and injected into the blood vessels. Then, a magnetic field source at a specified distance from the vessel is used to guide the particles to the affected area. The paper also presents a two-dimensional CFD simulation of the steady, laminar flow of an incompressible magnetorheological (MR) fluid between two fixed parallel plates in the presence of a uniform magnetic field, with the aim of developing a numerical tool able to simulate MR fluid flow in valve mode and to determine the effect of the applied magnetic field B0 on flow velocities and pressure distributions. Keywords: MHD, channel clots, magnetic nanoparticles, simulations
Procedia PDF Downloads 368
4086 Studying the Value-Added Chain for the Fish Distribution Process at Quang Binh Fishing Port in Vietnam
Authors: Van Chung Nguyen
Abstract:
The purpose of this study is to examine the current status of the value chain for fish distribution at Quang Binh Fishing Port, based on 360 research samples in which the research subjects are fishermen, traders, retailers, and businesses. The research applies the value chain theoretical framework of Kaplinsky and Morris to quantify and describe the market channels and actors participating in the value chain and to analyze the value-added process of these actors along the market channels. The analysis shows that fishermen catch fish with high economic efficiency, but processing enterprises and, especially, retailers are the agents that capture the higher added value. Processing enterprises play a role that is not very clear due to outdated processing technology; in contrast, retailers obtain the highest added value. This shows that the added value of the fish supply chain at Quang Binh fishing port is still limited, leading to low output quality. Therefore, the selling price of fish on the market remains high relative to the abundant fish resources, leading to low consumption and limited exports owing to the quality of the processing enterprises. This reduces demand and fishing capacity, and productivity is lower than its potential. To improve the fish value chain at fishing ports, it is necessary to focus on improving product quality, strengthening linkages between actors, building brands and markets for product consumption, and, at the same time, improving the capacity of export processing enterprises. Keywords: Quang Binh fishing port, value chain, market, distribution channel
Procedia PDF Downloads 73
4085 Pharmacokinetic and Tissue Distribution of Etoposide Loaded Modified Glycol Chitosan Nanoparticles
Authors: Akhtar Aman, Abida Raza, Shumaila Bashir, Mehboob Alam
Abstract:
The development of efficient delivery systems remains a major concern in cancer chemotherapy, as many efficacious anticancer drugs are hydrophobic and difficult to formulate. Nanomedicines based on drug-loaded amphiphilic glycol chitosan micelles offer potential advantages for the formulation of drugs such as etoposide and may improve pharmacokinetics and reduce the formulation-related adverse effects observed with current formulations. Amphiphilic derivatives of glycol chitosan were synthesized by chemical grafting of palmitic acid N-hydroxysuccinimide onto the glycol chitosan backbone followed by quaternization. To this end, a 7.9 kDa glycol chitosan was modified by palmitoylation and quaternization, yielding a 13 kDa amphiphilic polymer. Micelles prepared from this amphiphilic polymer had a size of 162 nm and were able to encapsulate up to 3 mg/ml etoposide. Pharmacokinetic results indicated that the GCPQ micelles transformed the biodistribution pattern and significantly increased the etoposide concentration in the brain compared to the free drug after intravenous administration. AUC 0.5-24 h showed a statistically significant difference for ETP-GCPQ vs. the commercial preparation in the liver (25 vs. 70, p<0.001), spleen (27 vs. 36, p<0.05), lungs (42 vs. 136, p<0.001), kidneys (25 vs. 70, p<0.05), and brain (19 vs. 9, p<0.001). ETP-GCPQ crossed the blood-brain barrier, and 4, 3.5, 2.6, 1.8, 1.7, 1.5, and 2.5-fold higher levels of etoposide were observed at 0.5, 1, 2, 4, 6, 12, and 24 h, respectively, suggesting that these systems could not only deliver hydrophobic anticancer drugs such as etoposide to tumors but also increase their transport through biological barriers, thus making them a good delivery system. Keywords: glycol chitosan, micelles, pharmacokinetics, tissue distribution
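The AUC 0.5-24 h values compared above come from concentration-time profiles; a minimal Python sketch of the non-compartmental (trapezoidal) calculation is shown below with purely hypothetical concentrations, not the study's data.

```python
import numpy as np

# Non-compartmental AUC(0.5-24 h) by the trapezoidal rule; concentrations are illustrative.
t = np.array([0.5, 1, 2, 4, 6, 12, 24])                  # sampling times (h)
c_brain = np.array([3.8, 3.1, 2.2, 1.5, 1.1, 0.6, 0.3])  # hypothetical tissue levels (µg/g)

auc = np.trapz(c_brain, t)
print(f"AUC(0.5-24 h) ~ {auc:.1f} µg·h/g")
```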
Procedia PDF Downloads 104
4084 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models
Authors: Yungtai Lo
Abstract:
Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show in all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit using either the logit link function or the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, prediction accuracy of the logit or probit link function is not sensitive to the distribution assumption on daily milk intake from bottles in toddlers not off bottles.Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve
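A minimal sketch of the two-part idea on simulated data is shown below (random effects omitted for brevity): a logit model for the probability that intake is zero (bottle-weaned) and a log-normal model, fitted by OLS on the log scale, for positive intake. Variable names and effect sizes are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Minimal two-part sketch on simulated data (random effects omitted for brevity).
rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)                       # 1 = bottle-weaning intervention
p_weaned = 1 / (1 + np.exp(-(-0.3 + 1.0 * treat)))  # intervention raises P(weaned)
weaned = rng.binomial(1, p_weaned)
# Part 2: daily milk intake (oz) among toddlers still using a bottle, log-normal
log_intake = 2.5 - 0.4 * treat + rng.normal(0, 0.5, n)
intake = np.where(weaned == 1, 0.0, np.exp(log_intake))

X = sm.add_constant(treat.astype(float))

# Part 1: logit model for P(intake == 0), i.e. the probability of being bottle-weaned
part1 = sm.Logit((intake == 0).astype(int), X).fit(disp=0)

# Part 2: log-normal model for intake, conditional on intake > 0
pos = intake > 0
part2 = sm.OLS(np.log(intake[pos]), X[pos]).fit()

print("Part 1 (weaning) intervention log-odds:", round(part1.params[1], 3))
print("Part 2 (log intake | >0) intervention effect:", round(part2.params[1], 3))
```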
Procedia PDF Downloads 349
4083 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations
Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri
Abstract:
Distributed denial of service (DDoS) attack is one of the top-rated cyber threats at present. It runs down victim server resources such as bandwidth and buffer size, preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack; we discuss its relevance to features such as the inter-arrival time or rate of arrival of the attack clients accessing the server. We further analyze the attack model in the context of exhausting the bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to build clusters of similar features. Finally, it applies mathematical correlation and the normal probability distribution to the clusters and analyzes their behavior to detect a DDoS attack. Such systems not only interconnect small devices exchanging personal data but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which can be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance. Keywords: application attack, bandwidth, buffer correlation, DDoS distribution flooding intrusion layer, normal prevention probability size
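A minimal sketch of the detection pipeline described above is given below: per-client traffic features (inter-arrival time, request rate) are clustered with a self-organizing map, and clusters with extreme mean rates are flagged. It uses the third-party MiniSom package (assumed installed) and simulated traffic; the simple threshold rule stands in for the paper's correlation and likelihood analysis.

```python
import numpy as np
from minisom import MiniSom  # third-party package, assumed installed (pip install minisom)

rng = np.random.default_rng(1)

# Simulated per-client features: mean inter-arrival time (s) and request rate (req/s)
normal_clients = np.column_stack([rng.exponential(0.5, 300), rng.normal(2.0, 0.5, 300)])
flood_clients = np.column_stack([rng.exponential(0.01, 30), rng.normal(80.0, 10.0, 30)])
X = np.vstack([normal_clients, flood_clients])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Unsupervised clustering of similar traffic profiles with a small self-organizing map
som = MiniSom(4, 4, X_std.shape[1], sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(X_std, 1000)
cells = np.array([som.winner(x) for x in X_std])

# Flag SOM cells whose mean request rate lies in the extreme tail of the overall rate
# distribution (an illustrative stand-in for the correlation / likelihood analysis).
rates = X[:, 1]
threshold = rates.mean() + 2 * rates.std()
for cell in sorted({(int(i), int(j)) for i, j in cells}):
    members = (cells[:, 0] == cell[0]) & (cells[:, 1] == cell[1])
    if rates[members].mean() > threshold:
        print(f"SOM cell {cell}: {members.sum()} clients flagged as likely flood sources")
```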
Procedia PDF Downloads 225
4082 Optimization of Bifurcation Performance on Pneumatic Branched Networks in next Generation Soft Robots
Authors: Van-Thanh Ho, Hyoungsoon Lee, Jaiyoung Ryu
Abstract:
Efficient pressure distribution within soft robotic systems, specifically to the pneumatic artificial muscle (PAM) regions, is essential to minimize energy consumption. This optimization involves adjusting reservoir pressure, pipe diameter, and branching network layout to reduce flow speed and pressure drop while enhancing flow efficiency. The outcome of this optimization is a lightweight power source and reduced mechanical impedance, enabling extended wear and movement. To achieve this, a branching network system was created by combining pipe components and intricate cross-sectional area variations, employing the principle of minimal work based on a complete virtual human exosuit. The results indicate that modifying the cross-sectional area of the branching network, gradually decreasing it, reduces velocity and enhances momentum compensation, preventing flow disturbances at separation regions. These optimized designs achieve uniform velocity distribution (uniformity index > 94%) prior to entering the connection pipe, with a pressure drop of less than 5%. The design must also consider the length-to-diameter ratio for fluid dynamic performance and production cost. This approach can be utilized to create a comprehensive PAM system, integrating well-designed tube networks and complex pneumatic models.Keywords: pneumatic artificial muscles, pipe networks, pressure drop, compressible turbulent flow, uniformity flow, murray's law
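The principle-of-minimum-work sizing mentioned above is commonly expressed through Murray's law, d_parent³ = Σ d_daughter³. The short Python sketch below applies it to a symmetric bifurcating supply line with hypothetical dimensions, showing how total cross-sectional area grows at each generation and mean velocity therefore drops.

```python
import math

def murray_daughter_diameter(parent_d, n_daughters=2):
    """Symmetric split under Murray's law: d_parent^3 = n * d_daughter^3."""
    return parent_d * (1.0 / n_daughters) ** (1.0 / 3.0)

# Hypothetical supply line feeding pneumatic artificial muscles through 3 bifurcations
d = 8.0  # mm, reservoir outlet diameter
total_area_prev = math.pi * (d / 2) ** 2
for level in range(1, 4):
    d = murray_daughter_diameter(d)
    branches = 2 ** level
    total_area = branches * math.pi * (d / 2) ** 2
    # Total cross-sectional area grows at each generation, so mean velocity drops
    print(f"level {level}: {branches} branches, d = {d:.2f} mm, "
          f"area ratio vs. previous level = {total_area / total_area_prev:.2f}")
    total_area_prev = total_area
```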
Procedia PDF Downloads 84
4081 Conditionality in the European Union as a New Instrument to Guarantee the Principle of Separation of Powers
Authors: Ana Neves
Abstract:
The European Union’s multi-level constitutionalism is grounded in an intricate network of vertical and horizontal legal relationships among different levels and types of public authorities. In a very significant way since the 2008 crisis, evolving institutional arrangements and institutional dynamics in the European Union have been progressively impacting Member States and the terms under which national public authorities are organised, interact and exercise their powers. This impact occurs in both macro and micro dimensions. Several examples are relevant here, such as the involvement of national Parliaments in the activities of the European Union, the enhanced integration of public administrations, the side effects of the Council framework decision on the European Arrest Warrant, the European Union Justice Scoreboard, the protection of whistle-blowers regulation, the enhanced cooperation on the establishment of the European Public Prosecutor’s Office, the regime for the protection of the Union budget and the European Rule of Law Mechanism. A common trend or denominator underlies the deepening of institutional interdependence and the increased interactions between the European Union, Member States, and public authorities at different levels. This seems to be conditionality as a general principle. The European multi-level constitutionalism must be considered in the light of this conditionality principle, which does not “imply a relationship of command and obedience”. Nevertheless, it might be more effective or be a very compelling principle. It is as if the extension of the shared rule is being accompanied by a contrapuntal dialogue. The different public authorities at various levels are being called to rethink and readjust themselves within a broader and more plural framework concerning understanding the limitation of power.Keywords: european union -, multi-level hierarchy, conditionality, separation of powers
Procedia PDF Downloads 107
4080 Overview on Sustainable Coastal Protection Structures
Authors: Suresh Reddi, Mathew Leslie, Vishnu S. Das
Abstract:
Sustainable design is a prominent concept across all sectors of engineering and its importance is widely recognized within the Arabian Gulf region. Despite that sustainable or soft engineering options are not widely deployed in coastal engineering projects and a preference for utilizing ‘hard engineering’ solutions remain. The concept of soft engineering lies in “working together” with the nature to manage the coastline. This approach allows hard engineering options, such as breakwaters or sea walls, to be minimized or even eliminated altogether. Hard structures provide a firm barrier to wave energy or flooding, but in doing so they often have a significant impact on the natural processes of the coastline. This may affect the area locally or impact on neighboring zones. In addition, they often have a negative environmental impact and may create a sense of disconnect between the marine environment and local users. Soft engineering options, seek to protect the coastline by working in harmony with the natural process of sediment transport/budget. They often consider new habitat creation and creating usable spaces that will increase the sense of connection with nature. Often soft engineering options, where appropriately deployed can provide a low-maintenance, aesthetically valued, natural line of coastal protection. This paper deals with an overview of the following: The widely accepted soft engineering practices across the world; How this approach has been considered by Ramboll in some recent projects in Middle East and Asia; Challenges and barriers to use in using soft engineering options in the region; Way forward towards more widespread adoption.Keywords: coastline, hard engineering, low maintenance, soft engineering options
Procedia PDF Downloads 137
4079 Big Data in Construction Project Management: The Colombian Northeast Case
Authors: Sergio Zabala-Vargas, Miguel Jiménez-Barrera, Luz Vargas-Sánchez
Abstract:
In recent years, information related to project management in organizations has been increasing exponentially. Performance data, management statistics, and indicator results have made collection, analysis, traceability, and dissemination essential for project managers. In this sense, there are current trends to facilitate efficient decision-making through emerging technologies such as Machine Learning, Data Analytics, Data Mining, and Big Data; the latter is the focus of this project. This research is part of the thematic line Construction Methods and Project Management. Many authors highlight the relevance that emerging technologies such as Big Data have gained in recent years in project management in the construction sector, with a main focus on the optimization of time, scope, and budget and, in general, on mitigating risks. This research was developed in the northeastern region of Colombia, South America. The first phase was aimed at diagnosing the use of emerging technologies (Big Data) in the construction sector. In Colombia, the construction sector represents more than 50% of the productive system, and more than 2 million people participate in this economic segment. A quantitative approach was used: a survey was applied to a sample of 91 companies in the construction sector. Preliminary results indicate that the use of Big Data and other emerging technologies is very low, and also that there is interest in modernizing project management. There is evidence of a correlation between the interest in using new data management technologies and the incorporation of Building Information Modeling (BIM). The next phase of the research will allow the generation of guidelines and strategies for the incorporation of technological tools in the construction sector in Colombia. Keywords: big data, building information modeling, technology, project management
Procedia PDF Downloads 128
4078 Reorientation of Anisotropic Particles in Free Liquid Microjets
Authors: Mathias Schlenk, Susanne Seibt, Sabine Rosenfeldt, Josef Breu, Stephan Foerster
Abstract:
Thin liquid jets on micrometer scale play an important role in processing such as in fiber fabrication, inkjet printing, but also for sample delivery in modern synchrotron X-ray devices. In all these cases the liquid jets contain solvents and dissolved materials such as polymers, nanoparticles, fibers pigments or proteins. As liquid flow in liquid jets differs significantly from flow in capillaries and microchannels, particle localization and orientation will also be different. This is of critical importance for applications, which depend on well-defined homogeneous particle and fiber distribution and orientation in liquid jets. Investigations of particle orientation in liquid microjets of diluted solutions have been rare, despite their importance. With the arise of micro-focused X-ray beams it has become possible to scan across samples with micrometer resolution to locally analyse structure and orientation of the samples. In the present work, we used this method to scan across liquid microjets to determine the local distribution and orientation of anisotropic particles. The compromise wormlike block copolymer micelles as an example of long flexible fibrous structures, hectorite materials as a model of extended nanosheet structures, and gold nanorods as an illustration of short stiff cylinders to comprise all relevant anisotropic geometries. We find that due to the different velocity profile in the liquid jet, which resembles plug flow, the orientation of the particles which was generated in the capillary is lost or changed into non-oriented or bi-axially orientations depending on the geometrical shape of the particle.Keywords: anisotropic particles, liquid microjets, reorientation, SAXS
Procedia PDF Downloads 339
4077 Examining Litter Distributions in Lethbridge, Alberta, Canada, Using Citizen Science and GIS Methods: OpenLitterMap App and Story Maps
Authors: Tali Neta
Abstract:
Humans’ impact on the environment has been incredibly brutal, with enormous plastic- and other pollutants (e.g., cigarette buds, paper cups, tires) worldwide. On land, litter costs taxpayers a fortune. Most of the litter pollution comes from the land, yet it is one of the greatest hazards to marine environments. Due to spatial and temporal limitations, previous litter data covered very small areas. Currently, smartphones can be used to obtain information on various pollutants (through citizen science), and they can greatly assist in acknowledging and mitigating the environmental impact of litter. Litter app data, such as the Litterati, are available for study through a global map only; these data are not available for download, and it is not clear whether irrelevant hashtags have been eliminated. Instagram and Twitter open-source geospatial data are available for download; however, these are considered inaccurate, computationally challenging, and impossible to quantify. Therefore, the resulting data are of poor quality. Other downloadable geospatial data (e.g., Marine Debris Tracker8 and Clean Swell10) are focused on marine- rather than terrestrial litter. Therefore, accurate terrestrial geospatial documentation of litter distribution is needed to improve environmental awareness. The current research employed citizen science to examine litter distribution in Lethbridge, Alberta, Canada, using the OpenLitterMap (OLM) app. The OLM app is an application used to track litter worldwide, and it can mark litter locations through photo georeferencing, which can be presented through GIS-designed maps. The OLM app provides open-source data that can be downloaded. It also offers information on various litter types and “hot-spots” areas where litter accumulates. In this study, Lethbridge College students collected litter data with the OLM app. The students produced GIS Story Maps (interactive web GIS illustrations) and presented these to school children to improve awareness of litter's impact on environmental health. Preliminary results indicate that towards the Lethbridge Coulees’ (valleys) East edges, the amount of litter significantly increased due to shrubs’ presence, that acted as litter catches. As wind generally travels from west to east in Lethbridge, litter in West-Lethbridge often finds its way down in the east part of the coulees. The students’ documented various litter types, while the majority (75%) included plastic and paper food packaging. The students also found metal wires, broken glass, plastic bottles, golf balls, and tires. Presentations of the Story Maps to school children had a significant impact, as the children voluntarily collected litter during school recess, and they were looking into solutions to reduce litter. Further litter distribution documentation through Citizen Science is needed to improve public awareness. Additionally, future research will be focused on Drone imagery of highly concentrated litter areas. Finally, a time series analysis of litter distribution will help us determine whether public education through Citizen Science and Story Maps can assist in reducing litter and reaching a cleaner and healthier environment.Keywords: citizen science, litter pollution, Open Litter Map, GIS Story Map
Procedia PDF Downloads 79
4076 High-Resolution Spatiotemporal Retrievals of Aerosol Optical Depth from Geostationary Satellite Using Sara Algorithm
Authors: Muhammad Bilal, Zhongfeng Qiu
Abstract:
Aerosols, suspended particles in the atmosphere, play an important role in the Earth's energy budget, climate change, degradation of atmospheric visibility, urban air quality, and human health. To fully understand aerosol effects, retrieval of aerosol optical properties such as aerosol optical depth (AOD) at high spatiotemporal resolution is required. Therefore, in the present study, hourly AOD observations at 500 m resolution were retrieved from the Geostationary Ocean Color Imager (GOCI) using the simplified aerosol retrieval algorithm (SARA) over the urban area of Beijing for the year 2016. SARA requires top-of-atmosphere (TOA) reflectance, solar and sensor geometry information, and surface reflectance observations to retrieve an accurate AOD. For validation of the GOCI-retrieved AOD, AOD measurements were obtained from the Aerosol Robotic Network (AERONET) version 3 level 2.0 (cloud-screened and quality-assured) data. The errors and uncertainties were reported using the root mean square error (RMSE), relative percent mean error (RPME), and the expected error (EE = ±(0.05 + 0.15AOD)). Results showed that the high spatiotemporal GOCI AOD observations were well correlated with the AERONET AOD measurements, with a correlation coefficient (R) of 0.92, an RMSE of 0.07, and an RPME of 5%, and 90% of the observations fell within the EE. The results suggest that SARA is robust and has the ability to retrieve high-resolution spatiotemporal AOD observations over urban areas using a geostationary satellite. Keywords: AERONET, AOD, SARA, GOCI, Beijing
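The validation statistics quoted above can be reproduced for any set of matched satellite/AERONET pairs with a few lines of Python; the sketch below uses hypothetical pairs, and the RPME definition is assumed to be the mean absolute relative error.

```python
import numpy as np

def validate_aod(aod_sat, aod_ref):
    """Validation statistics for matched satellite vs. AERONET AOD pairs."""
    aod_sat, aod_ref = np.asarray(aod_sat), np.asarray(aod_ref)
    r = np.corrcoef(aod_sat, aod_ref)[0, 1]
    rmse = np.sqrt(np.mean((aod_sat - aod_ref) ** 2))
    # Relative percent mean error (definition assumed: mean absolute relative error)
    rpme = 100.0 * np.mean(np.abs(aod_sat - aod_ref) / aod_ref)
    # Expected error envelope: +/-(0.05 + 0.15 * AOD)
    ee = 0.05 + 0.15 * aod_ref
    within_ee = 100.0 * np.mean(np.abs(aod_sat - aod_ref) <= ee)
    return r, rmse, rpme, within_ee

# Hypothetical matched pairs for illustration
ref = np.array([0.12, 0.35, 0.60, 0.85, 1.10])
sat = np.array([0.15, 0.33, 0.66, 0.80, 1.05])
r, rmse, rpme, within = validate_aod(sat, ref)
print(f"R={r:.2f}  RMSE={rmse:.3f}  RPME={rpme:.1f}%  within EE={within:.0f}%")
```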
Procedia PDF Downloads 171
4075 A Study on Exploring and Prioritizing Critical Risks in Construction Project Assessment
Authors: A. Swetha
Abstract:
This study aims to prioritize and explore critical risks in construction project assessment, employing the Weighted Average Index method and Principal Component Analysis (PCA). Through extensive literature review and expert interviews, project assessment risk factors were identified across Budget and Cost Management Risk, Schedule and Time Management Risk, Scope and Planning Risk, Safety and Regulatory Compliance Risk, Resource Management Risk, Communication and Stakeholder Management Risk, and Environmental and Sustainability Risk domains. A questionnaire was distributed to stakeholders involved in construction activities in Hyderabad, India, with 180 completed responses analyzed using the Weighted Average Index method to prioritize risk factors. Subsequently, PCA was used to understand relationships between these factors and uncover underlying patterns. Results highlighted dependencies on critical resources, inadequate risk assessment, cash flow constraints, and safety concerns as top priorities, while factors like currency exchange rate fluctuations and delayed information dissemination ranked lower but remained significant. These insights offer valuable guidance for stakeholders to mitigate risks effectively and enhance project outcomes. By adopting systematic risk assessment and management approaches, construction projects in Hyderabad and beyond can navigate challenges more efficiently, ensuring long-term viability and resilience.Keywords: construction project assessment risk factor, risk prioritization, weighted average index, principal component analysis, project risk factors
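A minimal Python sketch of the two analysis steps, ranking factors by a weighted average index over Likert responses and then applying PCA to the standardized response matrix, is given below. The factor names and responses are synthetic stand-ins, not the survey data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
factors = ["resource dependency", "inadequate risk assessment",
           "cash flow constraints", "safety concerns", "currency fluctuation"]

# Synthetic Likert-scale responses (1-5) from 180 stakeholders, one column per factor
responses = rng.integers(1, 6, size=(180, len(factors)))

# Weighted average index: sum(weight * count of that rating) / (max weight * N)
weights = np.arange(1, 6)
def weighted_average_index(col):
    counts = np.bincount(col, minlength=6)[1:6]
    return (weights * counts).sum() / (5 * len(col))

wai = np.array([weighted_average_index(responses[:, j]) for j in range(len(factors))])
for name, score in sorted(zip(factors, wai), key=lambda t: -t[1]):
    print(f"{name:30s} WAI = {score:.3f}")

# PCA on standardized responses to look for underlying risk dimensions
Z = StandardScaler().fit_transform(responses)
pca = PCA(n_components=2).fit(Z)
print("Variance explained by first two components:",
      np.round(pca.explained_variance_ratio_, 3))
```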
Procedia PDF Downloads 40
4074 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea
Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar
Abstract:
This study aims to investigate the amount of energy (economic wave energy potential) that can be obtained from existing wave energy converters in the high-wave-energy-potential region of the Black Sea, and their performance at different depths in the region. The data needed for this purpose were obtained using the calibrated, nested, layered SWAN wave modeling program, version 41.01AB, forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset at a time interval of 2 hours was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. The annual sea-state characteristic matrices for five different depths along a line perpendicular to the coastline were calculated for 31 years. From the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, the probability distribution tables of the specified mean wave period or wave energy period and significant wave height were calculated. Then, by combining these distribution tables according to the present wave climate, the energy that the wave energy converter systems at each depth can produce was determined. Thus, the economically feasible potential of the relevant coastal zone was revealed, and the effect of different depths on energy converter systems is presented. The Oceantic at 50, 75 and 100 m depths and the Oyster at 5 and 25 m depths present the best performance. Over the 31-year period, 1998 was the most dynamic year and 1989 the least. Keywords: annual power production, Black Sea, efficiency, power production performance, wave energy converter
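The economic potential calculation described above amounts to weighting a converter's power matrix by the sea-state occurrence matrix for a site. A toy-sized Python sketch (placeholder matrices, not the SWAN-derived ones) is shown below.

```python
import numpy as np

# Toy example: rows = significant wave height bins, columns = energy period bins.
# occurrence[i, j] = fraction of the year the sea state falls in bin (i, j)
occurrence = np.array([[0.30, 0.20, 0.05],
                       [0.15, 0.15, 0.05],
                       [0.02, 0.05, 0.03]])

# power[i, j] = converter output (kW) in that sea state, read from its power matrix
power = np.array([[  0.0,  20.0,  35.0],
                  [ 40.0,  90.0, 120.0],
                  [110.0, 180.0, 250.0]])

hours_per_year = 8766.0
rated_power_kw = 250.0

annual_energy_mwh = (occurrence * power).sum() * hours_per_year / 1000.0
capacity_factor = annual_energy_mwh * 1000.0 / (rated_power_kw * hours_per_year)

print(f"Annual energy production ~ {annual_energy_mwh:.0f} MWh")
print(f"Capacity factor ~ {capacity_factor:.2%}")
```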
Procedia PDF Downloads 133
4073 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique
Authors: Prabha Rohatgi
Abstract:
To maintain efficient control over the large inventory of drugs in the pharmacy department of any hospital, medicines are generally categorized first on the basis of their cost, using ABC (Always Better Control) analysis, and then on the basis of their criticality, using VED (Vital, Essential, Desirable) analysis for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize inventory investment, hospital management may prefer to keep the medicines inventory low, as medicines are perishable items. The main aim of every hospital is to provide better services to patients under limited resources. To achieve a satisfactory level of health care for outpatients, a hospital has to watch the wastage of medicines, because medicines that pass their expiry date represent a loss of money that was limited and allocated for a particular period. The objectives of this study are to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with wastage due to the expiry of medicines, an inventory control model is used as an estimation tool, and a nonlinear programming technique is then applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given, showing that by using scientific methods in hospital services, inventory can be managed more effectively under limited resources and better health care services can be provided. Secondary data were collected from a hospital to provide empirical evidence. Keywords: ABC-VED inventory classification, multi-item inventory problem, nonlinear programming technique, optimization of EOQ
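A minimal sketch of the constrained multi-item EOQ formulation, solved with SciPy's SLSQP, is given below; the items, costs, budget cap, and order limit are hypothetical, not the hospital's data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data for three drug items
D = np.array([1200.0, 800.0, 500.0])   # annual demand (units)
K = np.array([50.0, 40.0, 60.0])       # ordering cost per order
h = np.array([2.0, 3.5, 5.0])          # holding cost per unit per year
c = np.array([10.0, 25.0, 40.0])       # unit purchase price

BUDGET = 4000.0       # cap on average inventory investment: sum(c * Q / 2)
MAX_ORDERS = 25.0     # cap on total orders per year: sum(D / Q)

def total_cost(Q):
    # Annual ordering cost + annual holding cost
    return np.sum(D / Q * K + Q / 2.0 * h)

constraints = [
    {"type": "ineq", "fun": lambda Q: BUDGET - np.sum(c * Q / 2.0)},
    {"type": "ineq", "fun": lambda Q: MAX_ORDERS - np.sum(D / Q)},
]
bounds = [(1.0, None)] * len(D)
q0 = np.sqrt(2.0 * D * K / h)  # unconstrained EOQ as the starting point

res = minimize(total_cost, q0, method="SLSQP", bounds=bounds, constraints=constraints)
print("Constrained order quantities:", np.round(res.x, 1))
print("Total annual cost:", round(res.fun, 2))
```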
Procedia PDF Downloads 255
4072 Modeling Core Flooding Experiments for CO₂ Geological Storage Applications
Authors: Avinoam Rabinovich
Abstract:
CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in the porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a different method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms which impact the CO₂ distribution and trapping in the rock samples will be discussed, and examples from models and experiments will be provided. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test various other injection flow rates and fluid fractions which have not been tested in experiments. The models can also be used to bridge the gap between small-scale capillary heterogeneity effects (sub-core and core scale) and large-scale (reservoir scale) effects, known as the upscaling problem.Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow
Procedia PDF Downloads 70
4071 Diversity and Distribution of Butterflies (Lepidoptera-Rhopalocera) along with Altitudinal Gradient and Vegetation Types at Lahoul Valley, Trans-Himalaya Region, India
Authors: Saveena Bogtapa, Jagbir Singh Kirti
Abstract:
The Himalaya is one of the most fascinating mountain ranges in the world. In India, it comprises 18 percent of the land area. Lahoul valley, which is a part of the Trans-Himalaya region, is well known for its unique, diverse flora and fauna. It lies in the north-eastern corner of the state of Himachal Pradesh, where altitudes range from 2500 m to 5000 m. The vegetation of this region is of dry-temperate to alpine type. The diversity of the area is sparse but rare, unique and highly endemic. Today, however, considerable environmental degradation has taken place in this biodiversity hotspot because of frequent developmental and commercial activities, and the diversity of the area is under real threat. Therefore, butterflies, which are known for their attractiveness as well as their usefulness to the ecosystem, are used in this study. The diversity of butterflies in a particular area not only indicates a healthy environment but also serves as a first step towards biodiversity conservation. Their distribution across different habitats and altitude types helps us to understand species richness and abundance in an area. Moreover, the different environmental parameters that affect the butterfly community have also been recorded. Hence, the present study documents the butterfly diversity in unexplored habitat and altitude types at Lahoul valley. The valley has been surveyed along altitudinal gradients (from 2500 m to 4500 m) and in various habitats such as agricultural land, grassland, scrubland, riverine areas and different types of forests. Very rare species of butterflies have been recorded, and these will be discussed along with the different parameters during the presentation. Keywords: butterflies, diversity, Lahoul valley, altitude, vegetation
Procedia PDF Downloads 246
4070 Analysis of Productivity and Poverty Status among Users of Improved Sorghum Varieties in Kano State, Nigeria
Authors: Temitope Adefunsho Olatoye, Julius Olabode Elega
Abstract:
Raising agricultural productivity is an important policy goal for governments and development agencies, and this is central to growth, income distribution, improved food security, and poverty alleviation among practitioners. This study analyzed the productivity and poverty status among users of improved sorghum varieties in Kano State, Nigeria. A multistage sampling technique was adopted in the selection of 131 sorghum farmers who were users of improved sorghum varieties. Data collected were analyzed using both descriptive (frequency distribution and percentage) and inferential (productivity index and FGT model) statistics. The result of the socioeconomic characteristics of the sorghum farmers showed a mean age of 40 years, with about 93.13% of the sorghum farmers being male. Also, as indicated by the result, the majority (82.44%) of the farmers were married, with most of them having qur’anic education with a mean farm size of 3.6 ha, as reported in the study area. Furthermore, the result showed that the mean farming experience of the sorghum farmers in the study area was 19 years, with an average monthly income of about ₦48,794, as reported in the study area. The result of the productivity index showed a ratio of 192,977kg/ha, while the result of poverty status shows that 62.88% were in the non-poor category, 21.21% were poor, and 15.91% were very poor, respectively. The result also showed that the incidence of poverty for sorghum farmers was 16%, indicating that the incidence of poverty was prevalent in the study area. Based on the findings of this study, it was therefore recommended that seed companies should facilitate the spread of improved sorghum varieties as it has an impact on the productivity and poverty status of sorghum farmers in the study area.Keywords: Foster Greer Thorbecke model, improved sorghum varieties, productivity, poverty status
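The poverty incidence, depth, and severity figures reported above follow from the Foster-Greer-Thorbecke (FGT) index P_alpha = (1/n) Σ ((z − yᵢ)/z)^alpha over poor households. A short Python sketch with hypothetical incomes and poverty line is given below.

```python
import numpy as np

def fgt(income, z, alpha):
    """Foster-Greer-Thorbecke index P_alpha for incomes `income` and poverty line `z`."""
    income = np.asarray(income, dtype=float)
    poor = income < z
    shortfall = np.where(poor, (z - income) / z, 0.0)
    # For alpha = 0 the term equals 1 for every poor household (headcount ratio)
    return np.mean(np.where(poor, shortfall ** alpha, 0.0))

# Hypothetical monthly incomes (Naira) for a few sorghum-farming households
incomes = np.array([30_000, 65_000, 48_000, 22_000, 80_000, 40_000, 15_000, 55_000])
z = 45_000  # hypothetical poverty line

print(f"P0 (incidence/headcount): {fgt(incomes, z, 0):.2f}")
print(f"P1 (depth/poverty gap):   {fgt(incomes, z, 1):.2f}")
print(f"P2 (severity):            {fgt(incomes, z, 2):.2f}")
```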
Procedia PDF Downloads 73
4069 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8 and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
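One driver of the resolution/computation trade-off discussed above is the number of independent inter-electrode capacitance measurements, N(N−1)/2 for an N-electrode sensor; a tiny Python check:

```python
# Independent capacitance pairs for an N-electrode ECT sensor: N * (N - 1) / 2
for n_electrodes in (8, 12, 16):
    pairs = n_electrodes * (n_electrodes - 1) // 2
    print(f"{n_electrodes:2d} electrodes -> {pairs:3d} independent capacitance measurements")
# 8 -> 28, 12 -> 66, 16 -> 120: more measurements improve resolution but raise
# acquisition and image-reconstruction cost.
```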
Procedia PDF Downloads 10
4068 Exploring the Travel Preferences of Generation Z: A Look into the Next Generation of Tourists
Authors: M. Panidou, F. Kilipiris, E. Christou, K. Alexandris
Abstract:
This study focuses on Generation Z, the next generation of tourists born between 1996 and 2012. Given their significant population size, Generation Z is expected to have a substantial impact on the travel and tourism sector. Therefore, understanding their travel preferences is crucial for businesses in the hospitality and tourism industry. By examining their travel preferences, this research aims to identify the unique characteristics and motivations of this generation when it comes to travel. This study used a quantitative method, and primary data was collected through a survey (online questionnaire), while secondary data was gathered from academic literature, industry reports, and online sources to provide a comprehensive analysis of the topic. The sample of the study was 100 Greek individuals aged between 18-26 years old. The data was analyzed with the support of SPSS software. The findings of the research indicated that technology, sustainability, and budget-friendly options are essential components for attracting and retaining Generation Z tourists. These preferences highlight the importance of incorporating innovative technologies, promoting sustainable practices, and offering affordable travel options to effectively engage this market niche. This research contributes to the field of hospitality and tourism businesses by providing valuable insights into the travel preferences of Generation Z. By understanding their distinct features and preferences; businesses can tailor their strategies and marketing efforts to effectively engage and retain this market segment. Considering the limitations of the sample size, future studies could aim for a larger and more diverse sample to enhance the generalizability of the findings.Keywords: gen Z, technology, travel preferences, sustainability
Procedia PDF Downloads 86
4067 Sequence Polymorphism and Haplogroup Distribution of Mitochondrial DNA Control Regions HVS1 and HVS2 in a Southwestern Nigerian Population
Authors: Ogbonnaya O. Iroanya, Samson T. Fakorede, Osamudiamen J. Edosa, Hadiat A. Azeez
Abstract:
The human mitochondrial DNA (mtDNA) is a circular DNA molecule of about 17 kbp found within the mitochondria, which includes a smaller segment of about 1200 bp known as the control region. Knowledge of variation within populations has been employed in forensic and molecular anthropology studies. The study was aimed at investigating the polymorphic nature of the two hypervariable segments (HVS) of the mtDNA, i.e., HVS1 and HVS2, and at determining the haplogroup distribution among individuals resident in Lagos, Southwestern Nigeria. Peripheral blood samples were obtained from sixty maternally unrelated individuals, followed by DNA extraction and amplification of the extracted DNA using primers specific for the regions under investigation. DNA amplicons were sequenced, and the sequence data were aligned and compared to the revised Cambridge Reference Sequence (rCRS; GenBank accession number NC_012920.1) using BioEdit software. The results showed 61 and 52 polymorphic nucleotide positions for HVS1 and HVS2, respectively. While a total of three indel mutations were recorded for HVS1, there were seven for HVS2. Transitions were the predominant nucleotide changes observed in the study. Genetic diversity (GD) values for HVS1 and HVS2 were estimated to be 84.21% and 90.4%, respectively, while the random match probability was 0.17% for HVS1 and 0.89% for HVS2. The study also revealed mixed haplogroups specific to the African (L1-L3) and Eurasian (U and H) lineages. The new polymorphic sites obtained from the study are promising for human identification purposes. Keywords: hypervariable region, indels, mitochondrial DNA, polymorphism, random match probability
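The genetic diversity and random match probability reported above follow directly from haplotype frequencies: RMP = Σ pᵢ² and GD = n/(n−1) × (1 − Σ pᵢ²). A short Python sketch with a toy haplotype sample (not the study data) is shown below.

```python
import numpy as np
from collections import Counter

def diversity_stats(haplotypes):
    """Genetic diversity and random match probability from a list of haplotypes."""
    n = len(haplotypes)
    counts = np.array(list(Counter(haplotypes).values()), dtype=float)
    p = counts / n
    rmp = np.sum(p ** 2)                 # random match probability
    gd = (n / (n - 1)) * (1.0 - rmp)     # Nei's gene (haplotype) diversity
    return gd, rmp

# Toy example: 10 sampled individuals, 6 distinct HVS1 haplotypes (labels arbitrary)
sample = ["H1", "H1", "H2", "H3", "H3", "H3", "H4", "H5", "H6", "H1"]
gd, rmp = diversity_stats(sample)
print(f"Genetic diversity = {gd:.2%}, random match probability = {rmp:.2%}")
```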
Procedia PDF Downloads 115