Search results for: environmental performance.
A Framework for the Development of a Suitable Method to Find Shoot Length at Maturity of Mustard Plant Using Soft Computing Model
Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri
Abstract:
The production of a plant can be measured in terms of seeds, and the generation of seeds plays a critical role in our social and daily life. Fruit production, which generates seeds, depends on various parameters of the plant, such as shoot length, leaf number, root length and root number. While the plant is growing, some leaves may be lost and new leaves may appear, so it is difficult to use the number of leaves to track the growth of the plant. It is also cumbersome to measure the number and length of roots repeatedly after the initial period, because the roots grow deeper and deeper underground over time. By contrast, the shoot length increases over time and can be measured at different time instances, so the growth of the plant can be measured using shoot-length data recorded at different times after plantation. Environmental parameters such as temperature, rainfall, humidity and pollution also play a role in yield, and soil, crop and distance management are taken care of to maximise the yield of the plant. Data on the growth of shoot length of some mustard plants at the initial stage (7, 14, 21 and 28 days after plantation) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as the initial data. Statistical models, fuzzy logic methods and a neural network have been tested on this mustard plant, and based on error analysis (calculation of average error) the model with the minimum error has been selected and can be used for the assessment of shoot length at maturity.
Finally, all these methods have been tested on other types of mustard plants, and the soft computing model with the minimum error across all types has been selected for calculating the predicted growth of shoot length. The shoot length at maturity of all types of mustard plants has then been calculated by applying the statistical method to the predicted shoot-length data.
Keywords: Fuzzy time series, neural network, forecasting error, average error.
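As an illustrative sketch (not the paper's own code), the error-based selection criterion described above can be expressed as follows; the shoot-length values and model predictions are hypothetical.

```python
# Illustrative sketch of the selection criterion described above: compute an
# average (mean absolute) error per model and keep the model that minimises
# it. The shoot-length values and model predictions are hypothetical.

def average_error(predicted, observed):
    """Mean absolute error between predicted and observed shoot lengths."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

def select_model(candidates, observed):
    """Name of the candidate model with the minimum average error."""
    return min(candidates, key=lambda name: average_error(candidates[name], observed))

# Hypothetical shoot lengths (cm) at 7, 14, 21 and 28 days after plantation.
observed = [4.2, 8.1, 12.5, 16.0]
candidates = {
    "statistical": [4.0, 8.5, 12.0, 15.5],
    "fuzzy":       [4.1, 8.2, 12.4, 16.2],
    "neural":      [4.5, 7.5, 13.0, 17.0],
}
best = select_model(candidates, observed)   # "fuzzy" for these numbers
```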
Ingenious Eco-Technology for Transforming Food and Tanneries Waste into a Soil Bio-Conditioner and Fertilizer Product Used for Recovery and Enhancement of the Productive Capacity of the Soil
Authors: Petre Voicu, Mircea Oaida, Radu Vasiu, Catalin Gheorghiu, Aurel Dumitru
Abstract:
The present work deals with the way in which food and tannery waste can be used in agriculture. As a result of the lack of efficient technologies for recycling them, we are currently faced with the accumulation of appreciable quantities of residual organic waste, which finds a use only rarely and only after long storage in landfills. The main disadvantages of long storage of organic waste are the unpleasant smell, the high content of pathogenic agents, and the high water content. The release of these enormous amounts demands solutions that avoid environmental pollution. The measure practiced by us and presented in this paper consists of processing this waste in special installations, testing it in pilot experimental perimeters, and later administering it on agricultural land without harming the quality of the soil, agricultural crops, or the environment. The current crisis of raw materials and energy also raises special problems in the field of organic waste valorization, an activity that takes place with low energy consumption. At the same time, the composition of these wastes recommends them as useful secondary resources in agriculture. The transformation of food scraps and other concentrated organic residues thus acquires a new orientation, in which these materials are seen as important secondary resources. The use of food and tannery waste in agriculture is also stimulated by the increasing scarcity of chemical fertilizers and the continuous increase in their price, under conditions in which the soil requires increased amounts of fertilizer in order to obtain high, stable, and profitable production.
The need to maintain and increase the humus content of the soil is also taken into account, as it is an essential factor of soil fertility: a source and reserve of nutrients and microelements, an important factor in increasing the buffering capacity of the soil, and a means of reducing reliance on chemical fertilizers, while improving soil structure and water permeability, with positive effects on the quality of agricultural works and the prevention of excess and/or deficit of moisture in the soil.
Keywords: Organic residue, food and tannery waste, fertilizer, soil.
Microstructure and Mechanical Characterization of Heat Treated Stir Cast Silica (Sea Sand) Reinforced 7XXX Al Alloy MMCs
Authors: S. S. Sharma, Jagannath K, P. R. Prabhu
Abstract:
Metal matrix composites consist of a metallic matrix combined with a dispersed particulate phase as reinforcement. Aluminum alloys have been the primary material of choice for structural components of aircraft since about 1930. Well-known performance characteristics, known fabrication costs, design experience, and established manufacturing methods and facilities are just a few of the reasons for the continued confidence in 7XXX Al alloys that will ensure their use in significant quantities for the foreseeable future. Particulate MMCs are of special interest owing to the low cost of their raw materials (primarily natural sand here) and their ease of fabrication, making them suitable for applications requiring relatively high-volume production. 7XXX Al alloys are precipitation hardenable and therefore amenable to thermomechanical treatment. Al–Zn alloys reinforced with particulate materials are used in aerospace industries in spite of their susceptibility to stress corrosion and their poor wettability, weldability and fatigue resistance. The resistance offered by these particulates to moving dislocations imparts secondary hardening, which in turn contributes to strain hardening. Cold deformation increases lattice defects, which in turn improves the properties of the solution treated alloy. In view of this, six different Al–Zn–Mg alloy composites reinforced with silica (3 wt.% and 5 wt.%) were prepared by a conventional semisolid synthesizing process. The cast alloys were solution treated and aged, and the solution treated alloys were further severely cold rolled to enhance their properties. The hardness and strength values were analyzed and compared with those of silica-free Al–Zn–Mg alloys. The precipitation hardening phenomenon is accelerated owing to the increased number of potential sites for precipitation. Higher peak hardness and shorter aging time are the characteristics of the thermomechanically treated samples.
For maximum hardness, an optimum number and volume of precipitate particles are required. The Al-5Zn-1Mg composite with 5% SiO2 shows the best result.
Keywords: Dislocation, hardness, matrix, thermomechanical, precipitation hardening, reinforcement.
Association between Single Nucleotide Polymorphism of Calpain1 Gene and Meat Tenderness Traits in Different Genotypes of Chicken: Malaysian Native and Commercial Broiler Line
Authors: Abtehal Y. Anaas, Mohd. Nazmi Bin Abd. Manap
Abstract:
Meat tenderness is one of the most important factors affecting consumers' assessment of meat quality. Variation in meat tenderness is genetically controlled and varies among breeds, and it is also influenced by environmental factors that can affect its development during rigor mortis and postmortem storage. The final postmortem tenderization of meat relies on the extent of proteolysis of myofibrillar proteins caused by the endogenous activity of the proteolytic calpain system. This calpain system includes different calcium-dependent cysteine proteases and an inhibitor, calpastatin. It is widely accepted that in farm animals, including chickens, the μ-calpain gene (CAPN1) is a physiological candidate gene for meat tenderness. This study aimed to identify the association of single nucleotide polymorphism (SNP) markers in the CAPN1 gene with the tenderness of chicken breast meat from two breeds, a Malaysian native chicken and a commercial broiler line. Ten five-month-old native chickens and ten 42-day-old commercial broilers were collected from the local market; breast muscles were removed two hours after slaughter, packed separately in plastic bags and kept at -20 ºC for 24 h. The tenderness phenotype of all breast samples was determined by Warner-Bratzler shear force (WBSF). Thawing and cooking losses were also measured in the same breast samples before the WBSF determination. Polymerase chain reaction (PCR) was used to identify the previously reported C7198A and G9950A SNPs in the CAPN1 gene and assess their associations with meat tenderness in the two breeds. The broiler breast meat showed lower shear force values and lower thawing loss rates than the native chickens (p<0.05), whereas the rates of cooking loss were similar. The study confirms previous results that the markers CAPN1 C7198A and G9950A are not significantly associated with the variation in meat tenderness in chickens.
Therefore, further study is needed to confirm the functional molecular mechanism of these SNPs and evaluate their associations in different chicken populations.
Keywords: CAPN1, chicken, meat tenderness, meat quality, SNPs.
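The thawing and cooking losses mentioned above are conventionally computed as a percentage weight change; a minimal sketch with hypothetical sample weights (not data from the study):

```python
# Percentage weight loss, as conventionally used for thawing and cooking
# loss of meat samples. All sample weights below are hypothetical.

def percent_loss(weight_before_g, weight_after_g):
    """Weight loss as a percentage of the initial sample weight."""
    return (weight_before_g - weight_after_g) / weight_before_g * 100.0

# Hypothetical breast sample: 120 g before thawing, 114 g after thawing.
thawing_loss = percent_loss(120.0, 114.0)   # ≈ 5.0 %
# The same sample: 114 g before cooking, 85.5 g after cooking.
cooking_loss = percent_loss(114.0, 85.5)    # ≈ 25.0 %
```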
Effect of Submaximal Eccentric versus Maximal Isometric Contraction on Delayed Onset Muscle Soreness
Authors: Mohamed M. Ragab, Neveen A. Abdel Raoof, Reham H. Diab
Abstract:
Background: Delayed onset muscle soreness (DOMS) is the most common symptom when ordinary individuals and athletes are exposed to unaccustomed physical activity, especially eccentric contraction, and it impairs athletic performance, ordinary people's work ability and physical functioning. A multitude of methods have been investigated to reduce DOMS. One valuable method to control DOMS is the repeated bout effect (RBE) as a prophylactic measure. Purpose: To compare the repeated bout effect of submaximal eccentric contraction with that of maximal isometric contraction on induced DOMS. Methods: Sixty normal male volunteers were assigned randomly to three equal groups. Group A (first study group): 20 subjects received submaximal eccentric contraction of the non-dominant elbow flexors as a prophylactic exercise. Group B (second study group): 20 subjects received maximal isometric contraction of the non-dominant elbow flexors as a prophylactic exercise. Group C (control group): 20 subjects did not receive any prophylactic exercise. Maximal isometric peak torque of the elbow flexors and the patient related elbow evaluation (PREE) scale were measured for each subject three times: before, immediately after, and 48 hours after induction of DOMS. Results: Post-hoc tests for maximal isometric peak torque and the PREE scale immediately and 48 hours after induction of DOMS revealed that group (A) and group (B) showed significantly less loss of maximal isometric strength and less elbow pain and disability than the control group (C); moreover, the submaximal eccentric group (A) was more effective than the maximal isometric group (B), as it showed more rapid recovery of functional strength and lower degrees of elbow pain and disability.
Conclusion: Both submaximal eccentric contraction and maximal isometric contraction were effective in preventing DOMS, but submaximal eccentric contraction produced a greater protective effect against muscle damage induced by maximal eccentric exercise performed two days later.
Keywords: Delayed onset muscle soreness, maximal isometric peak torque, patient related elbow evaluation scale, repeated bout effect.
Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand
Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo
Abstract:
An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of a project's likely construction costs (initial and final), and subsequent cost control activities should prevent the unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, the observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between design-stage ECPs and FTS of construction projects, with a view to identifying the risk factors responsible for the observed variance. Data were sourced through interviews, and risk factors were identified using thematic analysis. Access was obtained to project files from the records of the study participants (consultant quantity surveyors), and document analysis was employed to complement the interview responses. The findings revealed discrepancies between ECPs and FTS in the region of -14% to +16%, and this study argues that the identified risk factors were responsible for the variability observed. The values obtained from the analysis would enable greater accuracy in quantity surveyors' forecasts of FTS. Further, whilst inherent risks in construction project development are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge of what is needed for reasonable budgetary performance and the successful delivery of construction projects.
The findings contribute significantly to the field by providing quantitative confirmation of the theoretical conclusions drawn in the literature from around the world, thereby adding to and consolidating existing knowledge.
Keywords: Accuracy, design-stage, elemental cost plan, final tender sum, New Zealand.
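The variance measure discussed above, the percentage deviation of the final tender sum from the design-stage cost plan, can be sketched as follows; the project figures are hypothetical, not drawn from the study's data.

```python
# Illustrative sketch of the variance measure discussed above: the percentage
# deviation of the final tender sum (FTS) from the design-stage elemental
# cost plan (ECP). The project sums below are hypothetical.

def variance_pct(ecp, fts):
    """Deviation of FTS from ECP as a percentage of the ECP."""
    return (fts - ecp) / ecp * 100.0

# Hypothetical NZ$ project sums spanning the reported -14% to +16% range.
projects = {"A": (1_000_000, 860_000), "B": (2_500_000, 2_900_000)}
deviations = {name: variance_pct(ecp, fts) for name, (ecp, fts) in projects.items()}
```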
C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network
Authors: K. Kalaikumar, E. Baburaj
Abstract:
One of the problems to be addressed in wireless sensor networks is the set of issues related to cross-layer communication. A cross-layer architecture shares information across layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to a changing sensor network environment. However, time slot assignment and the duration of neighbour route selection have not been addressed for the cross layer. Time-varying physical layer communication across the cross layer causes a high traffic load in the sensor network. Although the traffic load can be reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time-slot-based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial step is to discover routes in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub-layers. This process considers MAC layer operation with dynamic discovery of a route neighbour table. The discovered route path for packet communication then employs a Broad Route Distributed Time Slot Assignment method in the cross-layered sensor network system. Broad Route means time slotting over route paths of varying length. During packet communication in this sensor network, the transmission of packets is adjusted over different times and varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to characterize the performance of the sensor network communication structure. The main task of the Rayleigh fading model is to measure the power level of each communication under the MAC sub-layer. The minimized power level helps reduce the computational cost of packet communication in the sensor network.
Experiments are conducted on factors such as power level, packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.
Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment.
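As a hedged sketch of the Rayleigh fading component described above (not the authors' implementation), a Rayleigh-distributed fading amplitude can be drawn from two independent Gaussian components, and the mean received power estimated from its square; the parameter values are illustrative.

```python
# Sketch of a Rayleigh fading model: amplitude from two independent Gaussian
# components, received power as the squared amplitude. For scale parameter
# sigma the mean power is 2*sigma^2. Parameter values are illustrative only.

import math
import random

def rayleigh_amplitude(sigma, rng):
    """One Rayleigh-distributed fading amplitude from two Gaussian components."""
    x = rng.gauss(0.0, sigma)
    y = rng.gauss(0.0, sigma)
    return math.hypot(x, y)

def mean_received_power(sigma, n_samples, seed=1):
    """Monte Carlo estimate of the mean power level, E[a^2] = 2*sigma^2."""
    rng = random.Random(seed)
    total = sum(rayleigh_amplitude(sigma, rng) ** 2 for _ in range(n_samples))
    return total / n_samples

# With sigma = 1 the estimated mean power should be close to 2.
power = mean_received_power(1.0, 20_000)
```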
Reconsidering the Palaeo-Environmental Reconstruction of the Wet Zone of Sri Lanka: A Zooarchaeological Perspective
Authors: Kalangi Rodrigo, Kelum Manamendra-Arachchi
Abstract:
Bones, teeth, and shells have been acknowledged over the last two centuries as evidence of chronology, palaeo-environment, and human activity. Faunal traces are valid evidence of past conditions because they have properties that have not changed over long periods. Sri Lanka is an island with a diverse record of prehistoric occupation across its ecological zones. Defining the palaeoecology of past societies is an archaeological approach developed in the 1960s. It is mainly concerned with the reconstruction, from available geological and biological evidence, of past biota, populations, communities, landscapes, environments, and ecosystems. This early and persistent record of human fossils and technical and cultural florescence, together with a collection of well-preserved tropical-forest rock shelters with associated 'on-site' palaeoenvironmental records, makes Sri Lanka a central and unusual case study for determining the extent and intensity of early human engagement with tropical forests. Excavations carried out in prehistoric caves in the low-country wet zone have shown that over the last 50,000 years the temperature in the lowland rainforests has not changed by more than 5 °C. Based on remains of Semnopithecus priam (gray langur) unearthed from wet zone prehistoric caves, periods of momentous climate change have been argued for during the Last Glacial Maximum (LGM) and at the Terminal Pleistocene/Early Holocene boundary, with a recognizable preference for semi-open 'intermediate' rainforest or forest edges. The continuous occupation of the gastropod genera Acavus and Oligospira, along with the uninterrupted, pervasive horizontal presence of Canarium sp. ('kekuna' nut), supports the conclusion that temperatures in the lowland rainforests have not changed by more than 5 °C over the last 50,000 years.
Site catchment or territorial analysis is no longer defensible on time-distance grounds, and optimal foraging theory likewise falls short, because prehistoric people were aware of decreasing cost-benefit ratios, located their sites accordingly, and generally played out a settlement strategy that minimized the ratio of energy expended to energy produced.
Keywords: Palaeo-environment, palaeo-ecology, palaeo-climate, prehistory, zooarchaeology.
Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions and different flower sizes. The algorithm is designed to be deployed on a drone that flies through greenhouses to accomplish tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers pollinated since the last visit to the row. The developed algorithm is designed to handle real-world difficulties in a greenhouse, including varying lighting conditions, shadowing, and occlusion, while respecting the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter ones. Segmentation on the hue, saturation and value channels is then performed accordingly, and classification is done according to the size and location of the flowers. A total of 1,069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times of day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses were performed on the acquisition angle, time of day, camera, and thresholding type. Precision, recall and the derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon gave the best precision and recall results.
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras. Using hue values of 0.12-0.18 in the segmentation process provided the best precision, recall, and F1 score. With these values, the precision and recall averaged over all images were 74% and 75%, respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
Keywords: Agricultural engineering, computer vision, image processing, flower detection.
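The hue-band segmentation step described above can be sketched with the reported 0.12-0.18 hue range; the saturation and value floors below are assumed values for illustration, not thresholds from the paper.

```python
# Sketch of HSV hue-band pixel classification using the 0.12-0.18 hue range
# reported above. The saturation/value floors (sat_min, val_min) are assumed
# illustrative values, not thresholds taken from the paper.

import colorsys

HUE_MIN, HUE_MAX = 0.12, 0.18   # hue band reported for yellow tomato flowers

def is_flower_pixel(r, g, b, sat_min=0.3, val_min=0.3):
    """Classify an RGB pixel (channels in 0-1) as flower-coloured via HSV."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return HUE_MIN <= h <= HUE_MAX and s >= sat_min and v >= val_min

# A strong yellow falls inside the band; a leaf green does not.
yellow = is_flower_pixel(1.0, 0.85, 0.1)   # True
green = is_flower_pixel(0.1, 0.8, 0.1)     # False
```

A full pipeline would apply this per-pixel test after the adaptive global threshold and follow it with morphological filtering, as the abstract describes.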
An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Geryes Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) operates in two phases: (1) observation/measurement, i.e., the accumulation of gathered data at each sensor node; and (2) the transfer of the collected data to a processing center (e.g., a fusion server) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater to monitor underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between these components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards implementing this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity of and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain.
In conclusion, this work aims to demonstrate that the design activity for complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with its associated tooling. The major improvement is the provision of an early validation step, via models and a simulation approach, to consolidate the system design.
Keywords: Smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS.
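The two-phase sensor-network operation described above (local accumulation at each node, then transfer to a fusion server) can be sketched minimally as follows; the class names and the mean-fusion rule are illustrative assumptions, not part of the ArchiMO tooling.

```python
# Minimal sketch of the two-phase operation described above: (1) each node
# accumulates measurements locally; (2) the data are transferred to a fusion
# server. Class names and the mean-fusion rule are illustrative assumptions.

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.buffer = []                 # phase 1: local accumulation

    def measure(self, value):
        self.buffer.append(value)

    def transfer(self):
        """Phase 2: hand the collected data over and clear the buffer."""
        data, self.buffer = self.buffer, []
        return self.node_id, data

class FusionServer:
    def __init__(self):
        self.readings = {}

    def receive(self, node_id, data):
        self.readings.setdefault(node_id, []).extend(data)

    def fused_mean(self):
        """Trivial fusion rule: mean over all received readings."""
        values = [v for data in self.readings.values() for v in data]
        return sum(values) / len(values)

# A hydrophone-like node reports two readings to the server.
node = SensorNode("hydrophone-1")
node.measure(2.0)
node.measure(4.0)
server = FusionServer()
server.receive(*node.transfer())
fused = server.fused_mean()              # mean of 2.0 and 4.0
```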
Motor Coordination and Body Mass Index in Primary School Children
Authors: Ingrid Ruzbarska, Martin Zvonar, Piotr Oleśniewicz, Julita Markiewicz-Patkowska, Krzysztof Widawski, Daniel Puciato
Abstract:
Obese children will probably become obese adults, consequently exposed to an increased risk of comorbidity and premature mortality. Body weight may be indirectly determined by the continuous development of coordination and motor skills. The level of motor skills and abilities is an important factor that promotes physical activity from early childhood. The aim of the study is to thoroughly understand the internal relations between motor coordination abilities and the somatic development of prepubertal children, and to determine the effect of excess body weight on motor coordination by comparing the motor ability levels of children with different body mass index (BMI) values. The data were collected from 436 children aged 7–10 years, without health limitations, fully participating in school physical education classes. Body height was measured with portable stadiometers (Harpenden, Holtain Ltd.) and body mass with a digital scale (HN-286, Omron). Motor coordination was evaluated with the Kiphard-Schilling body coordination test (Körperkoordinationstest für Kinder). The Shapiro-Wilk test was used to verify the normality of the data distribution. The correlation analysis revealed a statistically significant negative association between dynamic balance and BMI, as well as between the motor quotient and BMI (p<0.01), for both boys and girls. The results showed no effect of gender on the observed trends. The analysis of variance proved statistically significant differences between normal weight children and their overweight or obese counterparts. Coordination abilities probably play an important role in preventing or moderating the negative trajectory leading to childhood overweight and obesity. At this age, the development of coordination abilities should become a key strategy, targeted at long-term prevention of obesity and the promotion of an active lifestyle in adulthood.
Motor performance is essential for implementing a healthy lifestyle already in childhood. Physical inactivity apparently results in motor deficits and a sedentary lifestyle in children, which may be accompanied by excess energy intake and overweight.
Keywords: Childhood, KTK test, physical education, psychomotor competence.
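As an illustration of the measures used above, BMI and a Pearson correlation coefficient can be computed as follows; the child data are hypothetical and serve only to show the kind of negative BMI-motor quotient association the abstract reports.

```python
# Sketch of the two quantities analysed above: BMI (mass / height^2) and the
# Pearson correlation between BMI and a motor quotient. All child data below
# are hypothetical, not values from the study.

import math

def bmi(mass_kg, height_m):
    """Body mass index: mass divided by the square of height."""
    return mass_kg / height_m ** 2

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical children: increasing BMI paired with decreasing motor quotient.
bmis = [bmi(25, 1.28), bmi(30, 1.30), bmi(36, 1.32), bmi(42, 1.34)]
motor_quotients = [105, 98, 92, 85]
r = pearson_r(bmis, motor_quotients)   # negative by construction
```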
Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships. It has been slower than other industries to adopt the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising management tool for construction operations, improving the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling research offers conceptual or process models that present general management frameworks and do not relate to acknowledged soft Operations Research methods. We particularly focus on the model-based quantitative research and categorize the CSCM models by scope, objectives, modeling approach, solution methods and software used. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system.
Thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude the paper by giving future research directions and emphasizing the need to develop optimization models for integrated CSCM. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM in other industries should be taken into account when modeling CSCs, but not without translating the generic concepts to the context of the construction industry.
Keywords: Construction supply chain management, modeling, operations research, optimization and simulation.
The Transfer of Energy Technologies in a Developing Country Context Towards Improved Practice from Past Successes and Failures
Authors: Lindiwe O. K. Mabuza, Alan C. Brent, Maxwell Mapako
Abstract:
The transfer of renewable energy technologies (RETs) is very often unsuccessful in the developing world. Aside from challenges with social, economic, financial, institutional and environmental dimensions, technology transfer has generally been misunderstood, and largely seen as the mere delivery of high-tech equipment from developed to developing countries, or within the developing world from R&D institutions to society. Technology transfer entails much more, including, but not limited to: entire systems and their component parts, know-how, goods and services, equipment, and organisational and managerial procedures. Means to facilitate the successful transfer of energy technologies, including the sharing of lessons, are consequently extremely important for developing countries as they grapple with increasing energy needs to sustain adequate economic growth and development. Improving the success of technology transfer is an ongoing process, as more projects are implemented, new problems are encountered and new lessons are learnt. Renewable energy is also critical to improving the quality of life of the majority of people in developing countries. In rural areas, energy comes primarily from traditional biomass, and consumption typically occurs in an inefficient manner, working against the notion of sustainable development. This paper explores the implementation of technology transfer in the developing world (sub-Saharan Africa). The focus is necessarily on RETs, since most rural energy initiatives are RETs-based. Additionally, the paper highlights lessons drawn from the cited renewable energy projects and identifies notable differences where energy technology transfer was judged to be successful. This is done through a literature review based on a selection of documented case studies, which are judged against the definition provided for technology transfer.
This paper also puts forth research recommendations that might contribute to improved technology transfer in the developing world. Key findings include that technology transfer cannot be complete without satisfying pre-conditions such as affordability, maintenance (and associated plans), knowledge and skills transfer, appropriate know-how, ownership and commitment, the ability to adapt the technology, sound business principles such as financial viability and sustainability, project management, and relevance. It is also shown that lessons are learnt in both successful and unsuccessful projects.
Keywords: Technology transfer, technology management, renewable energy, sustainable development.
85 Production, Characterisation and Assessment of Biomixture Fuels for Compression Ignition Engine Application
Authors: K. Masera, A. K. Hossain
Abstract:
Hardly any neat biodiesel satisfies the European EN 14214 standard for compression ignition engine application. To satisfy the EN 14214 standard, various additives are doped into biodiesel; however, biodiesel additives might cause other problems, such as increased particulate emissions and increased specific fuel consumption. In addition, the additives can be expensive. Considering the increasing level of greenhouse gas (GHG) emissions and fossil fuel depletion, it is forecast that the use of biodiesel will be higher in the near future. Hence, the negative aspects of biodiesel additives are likely to gain much more importance, and the additives need to be replaced with better solutions. This study aims to satisfy the European standard EN 14214 by blending biodiesels derived from sustainable feedstocks. Waste Cooking Oil (WCO) and Animal Fat Oil (AFO) are two sustainable feedstocks in the EU (including the UK) for producing biodiesels. In the first stage of the study, these oils were transesterified separately and neat biodiesels (W100 & A100) were produced. Secondly, the biodiesels were blended together in various ratios: 80% WCO biodiesel and 20% AFO biodiesel (W80A20), 60% WCO biodiesel and 40% AFO biodiesel (W60A40), 50% WCO biodiesel and 50% AFO biodiesel (W50A50), 30% WCO biodiesel and 70% AFO biodiesel (W30A70), and 10% WCO biodiesel and 90% AFO biodiesel (W10A90). The prepared samples were analysed using a Thermo Scientific Trace 1300 Gas Chromatograph and ISQ LT Mass Spectrometer (GC-MS). The GC-MS analysis gave Fatty Acid Methyl Ester (FAME) breakdowns of the fuel samples. It was found that the total saturation degree of the samples increased linearly (from 15% for W100 to 54% for A100) as the percentage of AFO biodiesel was increased. Furthermore, it was found that WCO biodiesel was mainly (82%) composed of polyunsaturated FAMEs.
Cetane numbers, iodine numbers, calorific values, lower heating values and densities (at 15 °C) of the samples were estimated using the mass percentage data of the FAMEs. In addition, kinematic viscosities (at 40 °C and 20 °C), densities (at 15 °C), heating values and flash point temperatures of the biomixture samples were measured in the lab. The estimated and measured characterisation results were found to be comparable. The study concluded that the biomixture fuel samples W60A40 and W50A50 fully satisfied the European EN 14214 norms without any need for additives. Investigations of engine performance, exhaust emissions and combustion characteristics will be conducted to assess the full feasibility of the proposed biomixture fuels.
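Estimating a blend property from composition data, as described above, can be approximated to first order as a mass-fraction-weighted average of the neat-fuel properties. The sketch below illustrates the idea; the cetane numbers used are hypothetical placeholders, not values from the study.

```python
def blend_property(fractions, properties):
    """Estimate a blend property as the mass-fraction-weighted
    average of the component (neat biodiesel) properties."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("mass fractions must sum to 1")
    return sum(f * p for f, p in zip(fractions, properties))

# Hypothetical cetane numbers for neat WCO (W100) and AFO (A100) biodiesels
cn_w100, cn_a100 = 54.0, 59.0

# W60A40 blend: 60% WCO biodiesel, 40% AFO biodiesel
cn_w60a40 = blend_property([0.6, 0.4], [cn_w100, cn_a100])
print(round(cn_w60a40, 1))  # 56.0
```

Real fuel properties such as viscosity mix non-linearly, so a weighted average like this is only a first screening estimate before lab measurement.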
Keywords: Biodiesel, blending, characterisation, CI Engine.
84 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment
Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu
Abstract:
Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then accelerate rapidly to a high rate after a certain period of operation (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing alone. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied to the DM surface to delay its initial dissolution. Owing to conveying parts, harsh downhole conditions, and the high dissolving rate of the base material, current delay coatings relying on pure polymers are found to perform well only at low temperature (typically < 100 °C) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin-film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxidation (PEO) coatings with advanced thin-film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 °C for over 2 days. Synergistic effects between the porous, hard PEO coating and the chemically inert elastic-polymer sealing lead to the improved dissolution delay, and strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is proposed to explain the delaying performance.
This study could not only benefit the oil and gas industry in unlocking High Temperature High Pressure (HTHP) unconventional resources that were previously inaccessible, but also potentially provides a technical route for other industries (e.g., bio-medical, automobile, aerospace) where primary anti-corrosive protection of light Mg alloys is in high demand.
Keywords: Dissolvable magnesium, coating, plasma electrolytic oxidation, sealer.
83 Urban Heat Island Intensity Assessment through Comparative Study on Land Surface Temperature and Normalized Difference Vegetation Index: A Case Study of Chittagong, Bangladesh
Authors: Tausif A. Ishtiaque, Zarrin T. Tasin, Kazi S. Akter
Abstract:
The current trend of urban expansion, especially in developing countries, has caused significant changes in land cover, generating great concern due to widespread environmental degradation. Energy consumption of cities is also increasing with the aggravated heat island effect. The distribution of land surface temperature (LST) is one of the most significant climatic parameters affected by urban land cover change. The recent increasing trend of LST is causing an elevated temperature profile in built-up areas with less vegetative cover. Gradual change in land cover, especially the decrease in vegetative cover, is enhancing the Urban Heat Island (UHI) effect in developing cities around the world. An increase in the amount of urban vegetation cover can be a useful solution for reducing UHI intensity. LST and the Normalized Difference Vegetation Index (NDVI) have been widely accepted as reliable indicators of UHI and vegetation abundance, respectively. Chittagong, the second largest city of Bangladesh, has been a growth center due to rapid urbanization over the last several decades. This study assesses the intensity of UHI in Chittagong city by analyzing the relationship between LST and NDVI based on the type of land use/land cover (LULC) in the study area, applying an integrated approach of Geographic Information System (GIS), remote sensing (RS), and regression analysis. A land cover map was prepared through an interactive supervised classification of remotely sensed data from a Landsat ETM+ image, along with NDVI differencing, using ArcGIS. LST and NDVI values were extracted from the same image. The regression analysis between LST and NDVI indicates that, within the study area, UHI is directly correlated with LST and negatively correlated with NDVI. This implies that surface temperature decreases as vegetation cover increases, along with a reduction in UHI intensity.
Moreover, there are noticeable differences in the relationship between LST and NDVI depending on the type of LULC. In other words, depending on the type of land use, an increase in vegetation cover has a varying impact on UHI intensity. This analysis will contribute to the formulation of sustainable urban land use planning decisions as well as suggest suitable actions for mitigating UHI intensity within the study area.
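NDVI itself is computed per pixel from the red and near-infrared reflectance bands as NDVI = (NIR − Red) / (NIR + Red). A minimal sketch of this calculation, using illustrative reflectance values rather than data from the study's Landsat scene:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red). Values near +1 indicate
    dense vegetation; values near 0 or below indicate built-up,
    bare, or water surfaces."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero on no-data pixels
    return np.where(denom == 0, 0.0,
                    (nir - red) / np.where(denom == 0, 1.0, denom))

# Illustrative per-pixel reflectances (not from the study's image)
red = np.array([0.10, 0.30, 0.25])
nir = np.array([0.50, 0.35, 0.25])
values = ndvi(nir, red)
print([round(float(v), 3) for v in values])  # [0.667, 0.077, 0.0]
```

For Landsat ETM+ imagery, the red and NIR reflectances come from bands 3 and 4, respectively.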
Keywords: Land cover change, land surface temperature, normalized difference vegetation index, urban heat island.
82 Construction Noise Management: Hong Kong Reviews and International Best Practices
Authors: Morgan Cheng, Wilson Ho, Max Yiu, Dragon Tsui, Wylog Wong, Yasir A. Naveed, C. S. Loong, Richard Kwan, K. C. Lam, Hannah Lo, C. L. Wong
Abstract:
Hong Kong is known worldwide for high density living and the ability to thrive under trying circumstances. The 7.5 million residents of this busy metropolis live primarily in high-rise buildings, which are built and demolished incessantly. Hong Kong residents are therefore affected continuously by numerous construction activities. In 2020, the Hong Kong Environmental Protection Department (EPD) commissioned a feasibility study on the management of construction noise, including noise associated with the renovation of domestic premises. A key component of the study focused on the review of practices concerning the management and control of construction noise in major cities in other parts of the world. To benefit from international best practices, this extensive review aimed to identify possible areas of improvement in Hong Kong. The study first referred to the United Nations report “The World’s Cities in 2016” and examined the top 100 cities therein. The 20 most suitable cities were then chosen for further review. Upon further screening, 12 cities with more relevant management practices were selected for further scrutiny. These 12 cities include: Asia – Tokyo, Seoul, Taipei, Guangzhou, Singapore; Europe – City of Westminster (London), Berlin; North America – Toronto, New York City, San Francisco; Oceania – Sydney, Melbourne. Subsequently, three cities, namely Sydney, City of Westminster, and New York City, were selected for in-depth review. These three were chosen primarily because of the maturity, success, and effectiveness of their construction noise management and control measures, as well as their similarity to Hong Kong in certain key aspects. One of the more important findings of the review is the importance of an early focus on potential noise issues, with the objective of designing the noise away wherever practicable. The study examined the similar yet different construction noise early-focus mechanisms of these three cities.
This paper describes this landmark, extensive worldwide review of international best practices for managing and controlling construction noise at the source, along the noise transmission path, and at the receiver end. The methodology, approach, and key findings are presented succinctly. By sharing the findings with acoustics professionals worldwide, it is hoped that more advanced and mature construction noise management practices can be developed to attain urban sustainability.
Keywords: Construction noise, international best practices, noise control, noise management.
81 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate between various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.
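Step (i) of the pipeline, generating a vocabulary of k-mers, amounts to sliding a window of length k across each read. The sketch below is an illustrative reconstruction of that step, not the authors' metagenome2vec code:

```python
def kmers(read, k):
    """Slide a window of length k across a DNA read to produce
    overlapping k-mer 'words' (step i of the described pipeline)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def build_vocabulary(reads, k):
    """Collect the distinct k-mers across all reads; in the full
    method, each k-mer is then mapped to a learned embedding vector."""
    vocab = set()
    for read in reads:
        vocab.update(kmers(read, k))
    return sorted(vocab)

reads = ["ATGCG", "GCGTA"]  # toy reads; real fastq reads are ~100 bp
print(kmers("ATGCG", 3))           # ['ATG', 'TGC', 'GCG']
print(build_vocabulary(reads, 3))  # ['ATG', 'CGT', 'GCG', 'GTA', 'TGC']
```

In practice the k-mer sequences are fed to an embedding model (e.g., a word2vec-style skip-gram) so that reads become dense vectors rather than one-hot tokens.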
80 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling these emissions. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is completed by means of process modelling. Heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is carried out to extract the composite and grand composite curves for the heat exchanger. In this paper, the relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger has also been evaluated. An increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the power required for the recycle gas compressor. The main challenge in the design of heat exchangers in power plants is the trade-off between capital and operational costs: to achieve a lower ∆Tmin, a larger heat exchanger is required, which means a higher capital cost but better heat recovery and a lower operational cost.
∆Tmin is therefore selected at the minimum point of the combined capital and operational cost curves. This study provides insight into the performance analysis and operating conditions of the NetPower oxy-combustion cycle based on its heat exchanger design.
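The capital/operational trade-off described above can be illustrated with a toy cost model in which annualised capital cost scales inversely with ∆Tmin (a smaller approach temperature requires a larger exchanger area) while operating cost rises with ∆Tmin (poorer heat recovery). The coefficients below are arbitrary placeholders, not values from the NetPower study:

```python
def total_annual_cost(dt_min, a=5000.0, b=120.0):
    """Toy cost model: capital cost ~ a / dt_min (larger exchanger
    area as the approach temperature shrinks), operating cost
    ~ b * dt_min (poorer heat recovery as it grows)."""
    capital = a / dt_min
    operating = b * dt_min
    return capital + operating

# Scan candidate approach temperatures and pick the minimum-cost one
candidates = [t / 2 for t in range(4, 61)]  # 2.0 .. 30.0 K in 0.5 K steps
best = min(candidates, key=total_annual_cost)
print(best)  # 6.5, the grid point nearest the analytic optimum sqrt(a/b) ≈ 6.45
```

In a real design study, both cost terms would come from exchanger sizing correlations and utility prices rather than these illustrative constants.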
Keywords: Carbon capture and storage, oxy-combustion, NetPower cycle, oxyturbine power cycles, heat exchanger design, supercritical carbon dioxide, pinch point analysis.
79 Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques
Authors: C. Ardil
Abstract:
This paper presents an original application of multiple criteria decision making analysis theory to the aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in the aircraft selection problem. Given that decision making in aircraft selection involves the consideration of a number of conflicting criteria and possible solutions, such a selection can be considered a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making that considers the multiple criteria utility theory and maximal regret minimization theory methods, as well as aircraft technical, economic, and environmental aspects. The multiple criteria decision making analysis method uses different normalization techniques to allow criteria with qualitative and quantitative data to be aggregated for the decision problem. Selecting a suitable normalization technique for the model is therefore also a challenge in providing data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were applied to the aircraft selection problem. As a logical implication of the proposed approach, it enhances the decision making process by enabling the decision maker to: (i) use higher-level knowledge regarding the selection of criteria weights and the proposed technique; and (ii) estimate the ranking of an alternative under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft was considered in order to illustrate the proposed approach. The results obtained with the proposed approach were compared using Spearman's rho tests.
An analysis of the final rank stability with respect to changes in criteria weights was also performed, to assess the sensitivity of the alternative rankings obtained by the application of different data normalization techniques and the proposed approach.
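The four data normalization techniques named above have standard textbook forms, sketched below for a single benefit criterion (the aircraft scores are hypothetical, not from the paper's case study):

```python
def vector_norm(x):
    """Vector normalization: divide each score by the Euclidean norm."""
    s = sum(v * v for v in x) ** 0.5
    return [v / s for v in x]

def linear_sum_norm(x):
    """Linear (sum) normalization: divide by the column sum."""
    s = sum(x)
    return [v / s for v in x]

def linear_max_norm(x):
    """Linear (max) normalization: divide by the column maximum."""
    m = max(x)
    return [v / m for v in x]

def linear_max_min_norm(x):
    """Linear (max-min) normalization: rescale scores onto [0, 1]."""
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) for v in x]

# Hypothetical benefit-criterion scores for four aircraft alternatives
scores = [3.0, 4.0, 5.0, 12.0]
print([round(v, 3) for v in linear_max_norm(scores)])      # [0.25, 0.333, 0.417, 1.0]
print([round(v, 3) for v in linear_max_min_norm(scores)])  # [0.0, 0.111, 0.222, 1.0]
```

Note how the two techniques spread the same raw scores differently; this is exactly why the choice of normalization can change the aggregated ranking of alternatives.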
Keywords: Normalization techniques, aircraft selection, multiple criteria decision making, multiple criteria decision making analysis, MCDMA.
78 The Threats of Deforestation, Forest Fire, and CO2 Emission toward Giam Siak Kecil Bukit Batu Biosphere Reserve in Riau, Indonesia
Authors: S. B. Rushayati, R. Meilani, R. Hermawan
Abstract:
A biosphere reserve is developed to create harmony among economic development, community development, and environmental protection, through partnership between humans and nature. The Giam Siak Kecil Bukit Batu Biosphere Reserve (GSKBB BR) in Riau Province, Indonesia, is unique in that it has peat soil dominating the area, many springs essential for human livelihood, and high biodiversity. Furthermore, it is the only biosphere reserve covering privately managed production forest areas. In this research, we aimed to analyse the threats of deforestation and forest fire, and the potential CO2 emission, at the GSKBB BR. We used Landsat images, ArcView software, and ERDAS IMAGINE 8.5 software to conduct a spatial analysis of land cover and land use changes, calculated CO2 emission based on the emission potential of each land cover and land use type, and applied simple linear regression to demonstrate the relation between CO2 emission potential and deforestation. The results showed that, besides the buffer zone and transition area, deforestation also occurred in the core area. Spatial analysis of land cover and land use changes for the years 2010, 2012, and 2014 revealed changes from natural forest and industrial plantation forest to other land use types, such as garden, mixed garden, settlement, paddy fields, burnt areas, and dry agricultural land. Deforestation in the core area, particularly at the Giam Siak Kecil Wildlife Reserve and Bukit Batu Wildlife Reserve, occurred in the form of changes from natural forest into garden, mixed garden, shrubs, swamp shrubs, dry agricultural land, open area, and burnt area. In the buffer zone and transition area, changes also occurred: what was once swamp forest changed into garden, mixed garden, open area, shrubs, swamp shrubs, and dry agricultural land. The spatial analysis indicated that the deforestation rate in the biosphere reserve from 2010 to 2014 reached 16 119 ha/year.
Besides deforestation, threats toward the biosphere reserve also came from forest fire. The forest fires of 2014 burned 101 723 ha of the area, of which 9 355 ha were in the core area and 92 368 ha in the buffer zone and transition area. Deforestation and forest fire increased the CO2 emission by as much as 24 903 855 ton/year.
Keywords: Biosphere reserve, CO2 emission, deforestation, forest fire.
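The simple linear regression used to relate CO2 emission potential to deforestation can be sketched as an ordinary least-squares fit of y = a + b·x; the observations below are hypothetical placeholders, not the study's data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, as used to relate
    deforested area (x) to CO2 emission potential (y)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical (deforested ha, emitted CO2 ton) observations
xs = [100.0, 200.0, 300.0, 400.0]
ys = [150.0, 310.0, 450.0, 610.0]
a, b = fit_line(xs, ys)
print(round(b, 3))  # 1.52 tons of CO2 per additional deforested ha, for this toy data
```

The slope b is the quantity of interest here: it estimates how much additional CO2 emission accompanies each additional hectare of deforestation.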
77 Detailed Sensitive Detection of Impurities in Waste Engine Oils Using Laser Induced Breakdown Spectroscopy, Rotating Disk Electrode Optical Emission Spectroscopy and Surface Plasmon Resonance
Authors: Cherry Dhiman, Ayushi Paliwal, Mohd. Shahid Khan, M. N. Reddy, Vinay Gupta, Monika Tomar
Abstract:
Laser-based high-resolution spectroscopic techniques, namely Laser Induced Breakdown Spectroscopy (LIBS), Rotating Disk Electrode Optical Emission Spectroscopy (RDE-OES) and Surface Plasmon Resonance (SPR), have been used to study the composition and degradation of used engine oils. Engine oils are mainly composed of aliphatic and aromatic compounds, and their soot contains hazardous components in the form of fine, coarse and ultrafine particles consisting of wear metal elements. Such coarse particulate matter (PM) and toxic elements are extremely dangerous to human health and can cause respiratory and genetic disorders in humans. The combustible soot from thermal power plants, industry, aircraft, ships and vehicles can lead to environmental and climate destabilization; it contributes to global pollution of land, water and air, and to global warming. The detection of such toxicants through elemental analysis is a very serious issue for the waste management of various organic, inorganic hydrocarbon and radioactive waste elements. In view of these points, the current study on used engine oils was performed. The fundamental characterization of the engine oils was conducted by measuring water content and kinematic viscosity, which provides a crude analysis of the degradation of the used engine oil samples. The quantitative and qualitative microscopic analysis by the RDE-OES technique confirmed the presence of elemental impurities via the Pb, Al, Cu, Si, Fe, Cr, Na and Ba lines in the used waste engine oil samples at a few ppm. The presence of these elemental impurities was confirmed by LIBS spectral analysis at various atomic transition lines. The recorded Pb transition lines confirm that the maximum degradation was found in used engine oil samples 3 and 4.
Apart from the basic tests, the dielectric constants and refractive indices of the engine oils were also determined via SPR analysis.
Keywords: Laser induced breakdown spectroscopy, rotating disk electrode optical emission spectroscopy, surface plasmon resonance, ICCD spectrometer, Nd:YAG laser, engine oil.
76 Evaluation of Buckwheat Genotypes to Different Planting Geometries and Fertility Levels in Northern Transition Zone of Karnataka
Authors: U. K. Hulihalli, Shantveerayya
Abstract:
Buckwheat (Fagopyrum esculentum Moench) is an annual crop belonging to the family Polygonaceae. The cultivated buckwheat species are notable for their exceptional nutritive value. Buckwheat is an important source of carbohydrates, fibre, and macro- and microelements such as K, Ca, Mg, Na and Mn, Zn, Se, and Cu. It also contains rutin, flavonoids, riboflavin, pyridoxine and many amino acids, which have beneficial effects on human health, including lowering both blood lipid and sugar levels. Rutin, quercetin and some other polyphenols are potent protective agents against colon and other cancers. Buckwheat thus has significant nutritive value and plenty of uses, yet its cultivation in the southern part of India is very meager. Hence, a study was planned with the objective of assessing the performance of buckwheat genotypes under different planting geometries and fertility levels. The field experiment was conducted at the Main Agriculture Research Station, University of Agricultural Sciences, Dharwad, India, during the 2017 kharif season. The experiment was laid out in a split-plot design with three replications, with three planting geometries as main plots, two genotypes as subplots, and three fertility levels as sub-sub-plot treatments. The soil of the experimental site was a vertisol. Standard procedures were followed to record the observations. The planting geometry of 30×10 cm recorded significantly higher seed yield (893 kg ha⁻¹), stover yield (1507 kg ha⁻¹), clusters plant⁻¹ (7.4), seeds cluster⁻¹ (7.9) and 1000-seed weight (26.1 g) as compared to the 40×10 cm and 20×10 cm planting geometries. Between the genotypes, significantly higher seed yield (943 kg ha⁻¹) and harvest index (45.1) were observed with genotype IC-79147 as compared to genotype PRB-1 (687 kg ha⁻¹ and 34.2, respectively). However, genotype PRB-1 recorded significantly higher stover yield (1344 kg ha⁻¹) than genotype IC-79147 (1173 kg ha⁻¹).
Genotype IC-79147 also recorded significantly higher clusters plant⁻¹ (7.1), seeds cluster⁻¹ (7.9) and 1000-seed weight (24.5 g) as compared to PRB-1 (5.4, 5.8 and 22.3 g, respectively). Among the fertility levels tried, 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (845 kg ha⁻¹) and stover yield (1359 kg ha⁻¹) as compared to 40:20 NP kg ha⁻¹ (808 and 1259 kg ha⁻¹, respectively) and 20:10 NP kg ha⁻¹ (793 and 1144 kg ha⁻¹, respectively). Among the treatment combinations, genotype IC-79147 at 30×10 cm planting geometry with 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (1070 kg ha⁻¹), clusters plant⁻¹ (10.3), seeds cluster⁻¹ (9.9) and 1000-seed weight (27.3 g) compared to the other treatment combinations.
Keywords: Buckwheat, fertility levels, genotypes, geometry, polyphenols, rutin.
75 Prospects of Iraq’s Maritime Openness and Their Effect on Its Economy
Authors: Mohanad Hammad
Abstract:
Port institutions serve as a link between the land areas that receive goods and the areas from which ships sail. These areas hold great significance for the conversion of goods into commodities of economic value, capable of meeting the needs of society. The development of ports constitutes a fundamental component of the comprehensive economic development process. Recognizing this fact, developing countries have always resorted to this infrastructural element to resolve the numerous problems they face, taking into account its contribution to the reformation of their economic conditions. Iraqi ports have played a major role in boosting commercial movement in Iraq, as they are the starting point of its oil exports and a key constituent in fulfilling the consumer and production needs of the various economic sectors of Iraq. With the Gulf wars and the economic blockade, Iraqi ports continued to deteriorate and became unable to perform their functions as first-generation ports, prompting Iraq to use the ports of neighboring countries, such as Jordan's Aqaba commercial port. Meanwhile, Iraqi ports face strong competition from the ports of neighboring countries, which have achieved progress and advancement in contrast to the declining performance and efficiency of Iraqi ports. The great developments in the economic conditions of Iraq place too great a burden on Iraqi maritime transport and ports, which require development in order to meet the challenges arising from the fierce international and regional competition in the markets. Therefore, it is necessary to find appropriate solutions in support of the role that Iraqi ports can play in serving Iraq's seaborne foreign trade and in keeping up with the development of foreign trade. Thus, this research aims to examine the current situation of Iraqi ports and their commercial activity and to study the problems and obstacles they face.
The research also studies the future prospects of these ports, the potential of maritime openness for Iraq under the fierce competition of neighboring ports, and the possibility of enhancing the competitiveness of Iraqi ports. Among the results produced by this research is a future scenario for Iraqi ports, mainly represented by the establishment of Al-Faw Port, which will contribute to a greater openness of maritime transport in Iraq, and by the rehabilitation and expansion of existing ports. This research seeks to develop solutions for Iraqi ports so that they can be repositioned as a vital means of promoting economic development.
Keywords: Transport, port, regional openness, development.
74 Work-Related Shoulder Lesions and Labor Lawsuits in Brazil: Cross-Sectional Study on Worker Health Actions Developed by Employers
Authors: Reinaldo Biscaro, Luciano R. Ferreira, Leonardo C. Biscaro, Raphael C. Biscaro, Isabela S. Vasconcelos, Laura C. R. Ferreira, Cristiano M. Galhardi, Erica P. Baciuk
Abstract:
Introduction: The present study aimed to present the profile of workers with shoulder disorders involved in labor lawsuits in Brazil, and to analyze the association between workers' health and the actions performed by the companies toward the injured professionals. The research method was a retrospective, cross-sectional, quantitative database analysis. The documents of labor lawsuits with shoulder injury registered at the Regional Labor Court of the 15th region (Campinas, São Paulo) were submitted to medical examination and evaluated for the period from 2012 to 2015. The data collected were age, gender, onset of symptoms, length of service, current occupation, type of shoulder injury, referred complaints, type of acromion, associated or related diseases, and company actions such as CAT (workplace accident communication) and compliance with NR7 (Environmental Risk Prevention Program - PPRA and Medical Coordination Program in Occupational Health - PCMSO). Results: Of the 93 workers evaluated, there was a prevalence of men (58.1%), with a mean age of 42.6 years, and 54.8% were in the 35-49 age group. Regarding length of service in the company, 66.7% had worked for more than 5 years. There was an association between gender and current occupational status (p < 0.005), with a predominance of women in household occupation (13 vs. 2) and a predominance of men who were unemployed and searching for a job (24 vs. 10) or reintegrated to work by judicial decision (8 vs. 2). There was also a correlation between pain and functional limitation (p < 0.01). There was a positive association of PPRA with the complaint of functional limitation and a negative association with pain (p < 0.04). There was also a correlation between sedentary lifestyle and the presence of PCMSO and PPRA (p < 0.04), and with the absence of CAT in the companies (p < 0.001).
It was concluded that the appearance or aggravation of osseous and articular shoulder pathologies in workers who have filed labor lawsuits appears to be associated with individual habits or inadequate labor practices. These data can help prevent the occurrence of these lesions by informing local health promotion policies at work.
Keywords: Work-related accidents, cross-sectional study, shoulder lesions, labor lawsuits.
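The associations above rest on tests of independence between categorical variables. As a purely illustrative sketch (not the authors' actual analysis), the following stdlib-only Python computes a Pearson chi-square test for a hypothetical 2x2 arrangement of the reported counts (household occupation 13 women vs. 2 men; job search 10 women vs. 24 men); the grouping into a single table is an assumption for illustration.

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test of independence for a 2x2 table.
    table: [[a, b], [c, d]] of observed counts.
    Returns (chi2 statistic, p-value for 1 degree of freedom)."""
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # For 1 degree of freedom: P(X > chi2) = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical 2x2 arrangement of the reported counts:
# rows = occupation (household, job search), columns = (women, men)
chi2, p = chi_square_2x2([[13, 2], [10, 24]])
print(chi2, p)  # chi2 ≈ 13.70, p < 0.005
```

A statistic this large on 1 degree of freedom is consistent with the abstract's reported significance level of p < 0.005 for the gender/occupation association.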
73 The Proposal of a Shared Mobility City Index to Support Investment Decision Making for Carsharing
Authors: S. Murr, S. Phillips
Abstract:
One of the biggest challenges when entering a market with a carsharing or any other shared mobility (SM) service is sound investment decision-making. To support this process, the authors propose a city index evaluating different criteria. The goal of such an index is to benchmark cities along a set of external measures to address the two main challenges: financial viability and an understanding of each city's specific requirements. The authors consulted several shared mobility projects and industry experts to create such a Shared Mobility City Index (SMCI). The current proposal of the SMCI consists of 11 individual index measures: general data (demographics, geography, climate and city culture), shared mobility landscape (current SM providers, public transit options, commuting patterns and driving culture) and political vision and goals (vision of the mayor, sustainability plan, bylaws/tenders supporting SM). To evaluate the suitability of the index, 16 cities on the East Coast of North America were selected and secondary research was conducted. The main sources of this study were census data, organisational records, independent press releases and informational websites. Only non-academic sources were used, because the relevant data for the chosen cities is not published in academia. Applying the index measures to the selected cities yielded three major findings. Firstly, population density (number of inhabitants divided by city area) is not an indicator of the number of SM services offered: the city with the lowest density has five bike and carsharing options. Secondly, there is a direct correlation between commuting patterns and the number of shared mobility services offered: New York, Toronto and Washington DC have the highest public transit ridership and the most shared mobility providers. Lastly, all but one of the surveyed cities support shared mobility in their sustainability plans.
The current version of the shared mobility index has proven to be a practical tool for evaluating cities and for understanding functional, political, social and environmental considerations. More cities will have to be evaluated to refine the criteria further. However, the current version of the index can already be used to assess cities on their suitability for shared mobility services and will assist investors in deciding which city is a financially viable market.
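A benchmarking index of this kind can be operationalized as a weighted sum of normalized measure scores. The sketch below is only illustrative: the abstract names the three measure groups but does not publish a scoring formula, so the weights, the per-group scores and the city names here are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical weights -- the SMCI proposal lists the measure groups
# but does not specify how they are combined.
WEIGHTS = {
    "general_data": 0.3,      # demographics, geography, climate, city culture
    "sm_landscape": 0.4,      # SM providers, transit, commuting, driving culture
    "political_vision": 0.3,  # mayor's vision, sustainability plan, bylaws
}

@dataclass
class City:
    name: str
    scores: dict  # measure group -> normalized score in [0, 1]

def smci(city):
    """Weighted sum of normalized measure-group scores."""
    return sum(WEIGHTS[m] * city.scores[m] for m in WEIGHTS)

def rank(cities):
    """Benchmark cities, most suitable market first."""
    return sorted(cities, key=smci, reverse=True)

cities = [
    City("A", {"general_data": 0.6, "sm_landscape": 0.9, "political_vision": 0.8}),
    City("B", {"general_data": 0.7, "sm_landscape": 0.4, "political_vision": 0.5}),
]
ranking = [c.name for c in rank(cities)]
print(ranking)  # city A scores 0.78 vs. 0.52 and ranks first
```

Normalizing each measure to [0, 1] before weighting keeps heterogeneous inputs (census counts, transit ridership, policy indicators) comparable across cities.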
Keywords: Carsharing, transportation, urban planning, shared mobility city index.
72 Engineering Topology of Construction Ecology for Dynamic Integration of Sustainability Outcomes to Functions in Urban Environments: Spatial Modeling
Authors: Moustafa Osman Mohammed
Abstract:
Integrating sustainability outcomes directs attention to construction ecology in the design review of urban environments, so as to comply with the Earth system, which is composed of integral physical, chemical and biological components. Naturally, the exchange patterns of industrial ecology have consistent and periodic cycles that preserve energy and material flows in the Earth system. When engineering topology affects internal and external processes in system networks, it determines the valence of the first-level spatial outcome (i.e., project compatibility success). These instrumentalities depend on a second-level outcome (i.e., participant security satisfaction). The construction-ecology-based topology (i.e., a feedback energy system) flows from biotic and abiotic resources across the Earth's ecosystems. These spatial outcomes provide an innovation, as they entail a wide range of interactions that state, regulate and feed back "topology" flowing as an "interdisciplinary equilibrium" of ecosystems. The interrelation dynamics of ecosystems perform a process at a certain location within an appropriate time, characterizing their unique structure in "equilibrium patterns", such as the biosphere, and collecting a composite structure of many distributed feedback flows. These interdisciplinary systems regulate their dynamics within complex structures, and these dynamic mechanisms regulate physical and chemical properties to enable a gradual and prolonged incremental pattern that develops a stable structure. The engineering topology of construction ecology for integrating sustainability outcomes offers an interesting tool for ecologists and engineers in the simulation paradigm, as an initial form of development structure within compatible computer software. This approach argues from ecology, resource savings, static load design, financial and other pragmatic reasons; from an artistic/architectural perspective, these are not decisive.
The paper describes an attempt to unify analytic and analogical spatial modeling in developing urban environments as a relational setting, using optimization software, and applies it as an example of integrated industrial ecology in which the construction process is based on a topology optimization approach.
Keywords: Construction ecology, industrial ecology, urban topology, environmental planning.
71 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review
Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha
Abstract:
Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is not far-fetched, yet properly classifying this textual information in a given context has proven very difficult. As a result, a systematic review of previous literature on sentiment classification and AI-based techniques was conducted, in order to gain a better understanding of how to design and develop a robust and more accurate sentiment classifier that could correctly classify social media text of a given context (e.g., between hate speech and inverted compliments) with a high level of accuracy, using the knowledge gained from evaluating the different artificial intelligence techniques reviewed. The study evaluated over 250 articles from digital sources such as the ACM Digital Library, Google Scholar, and IEEE Xplore, and whittled these down to 52 articles. Findings revealed that deep learning approaches such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Bidirectional Encoder Representations from Transformers (BERT), and Long Short-Term Memory (LSTM) outperformed various machine learning techniques in terms of accuracy. A large dataset is also required to develop a robust sentiment classifier. Results also revealed that data can be obtained from sources such as Twitter, movie reviews, Kaggle, the Stanford Sentiment Treebank (SST), and SemEval Task 4, depending on the required domain. Hybrid deep learning techniques such as CNN+LSTM, CNN+Gated Recurrent Unit (GRU), and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language outperformed Java in terms of development simplicity and AI-based library functionality.
Finally, the study recommends applying the findings obtained to build robust sentiment classifiers in the future.
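The review compares deep hybrids against classical machine learning baselines. As a point of reference for what such a classical baseline looks like, here is a minimal multinomial Naive Bayes sentiment classifier over a bag-of-words representation, stdlib-only; the training texts are toy data invented for illustration and are not drawn from the datasets the review surveys.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs. Returns bag-of-words counts."""
    class_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for text, label in docs:
        class_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return class_counts, word_counts, vocab

def predict(text, class_counts, word_counts, vocab):
    """Multinomial Naive Bayes with Laplace smoothing."""
    total_docs = sum(class_counts.values())
    best_label, best_logp = None, float("-inf")
    for label in class_counts:
        logp = math.log(class_counts[label] / total_docs)  # class prior
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            logp += math.log((word_counts[label][word] + 1)
                             / (total_words + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Toy training data, purely illustrative
docs = [
    ("i love this great product", "positive"),
    ("what a wonderful day", "positive"),
    ("i hate this terrible thing", "negative"),
    ("what an awful experience", "negative"),
]
model = train(docs)
print(predict("a wonderful great day", *model))  # prints "positive"
```

Baselines like this are what the reviewed CNN+LSTM, CNN+GRU and CNN+BERT hybrids are reported to outperform, chiefly because learned embeddings capture context that raw word counts cannot.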
Keywords: Artificial Intelligence, Natural Language Processing, Sentiment Analysis, Social Network, Text.
70 Exercise and Cognitive Function: Time Course of the Effects
Authors: Simon B. Cooper, Stephan Bandelow, Maria L. Nute, John G. Morris, Mary E. Nevill
Abstract:
Previous research has indicated a variable effect of exercise on adolescents' cognitive function. However, comparisons between studies are difficult to make due to differences in the mode, intensity and duration of exercise employed; the components of cognitive function measured (and the tests used to assess them); and the timing of the cognitive function tests in relation to the exercise. Therefore, the aim of the present study was to assess the time course (10 and 60 min post-exercise) of the effects of 15 min of intermittent exercise on cognitive function in adolescents. 45 adolescents were recruited and completed two main trials (exercise and resting) in a counterbalanced crossover design. Participants completed 15 min of intermittent exercise (in cycles of 1 min exercise, 30 s rest). A battery of computer-based cognitive function tests (Stroop test, Sternberg paradigm and visual search test, assessing attention, working memory and perception respectively) was completed 30 min pre- and 10 and 60 min post-exercise. The findings of the present study indicate that on the baseline level of the Stroop test, response times 10 min following exercise were slower than at any other time point on either trial (trial by session time interaction, p = 0.0308). However, this slowing of responses also tended to produce enhanced accuracy 10 min post-exercise on the baseline level of the Stroop test (trial by session time interaction, p = 0.0780). Similarly, on the complex level of the visual search test there was a slowing of response times 10 min post-exercise (trial by session time interaction, p = 0.0199), but this was not coupled with an improvement in accuracy (trial by session time interaction, p = 0.2349). The mid-morning bout of exercise did not affect response times or accuracy across the morning on the Sternberg paradigm.
In conclusion, the findings of the present study suggest an equivocal effect of exercise on adolescents' cognitive function. The mid-morning bout of exercise appears to cause a speed-accuracy trade-off immediately following exercise on the Stroop test (participants become slower but more accurate), whilst slowing response times on the visual search test and having no effect on performance on the Sternberg paradigm. Furthermore, this work highlights the importance of the timing of the cognitive function tests relative to the exercise, and of the components of cognitive function examined, in future studies.
Keywords: Adolescents, cognitive function, exercise.
69 Logistical Optimization of Nuclear Waste Flows during Decommissioning
Authors: G. Dottavio, M. F. Andrade, F. Renard, V. Cheutet, A.-L. L. S. Vercraene, P. Hoang, S. Briet, R. Dachicourt, Y. Baizet
Abstract:
A large amount of technological equipment and many highly skilled workers have to be mobilized over long periods of time during nuclear decommissioning processes. The related operations generate complex waste flows and high inventory levels, associated with information flows of heterogeneous types. Considering that more than 10 decommissioning operations are ongoing in France and about 50 are expected by 2025, a major challenge must be addressed today. The management of decommissioning and dismantling of nuclear installations represents an important part of the nuclear-based energy lifecycle, since it has an environmental impact as well as an important influence on the cost of electricity and therefore on the price for end-users. Bringing new technologies and new solutions into decommissioning methodologies is thus mandatory to improve the quality, cost and delay efficiency of these operations. The purpose of our project is to improve the efficiency of decommissioning management by developing a decision-support framework dedicated to planning nuclear facility decommissioning operations and to optimizing waste evacuation by means of a logistic approach. The target is an easy-to-handle tool capable of i) predicting waste flows and proposing the best decommissioning logistics scenario, and ii) managing information during all the steps of the process and following its progress: planning, resources, delays, authorizations, saturation zones, waste volume, etc. In this article we present our results from the simulation of nuclear waste flows during the decommissioning process, including discrete-event simulation supported by the FLEXSIM 3-D software. This approach was successfully tested, and our work confirms its ability to improve this type of industrial process by identifying the critical points of the chain and the corresponding improvement actions.
This type of simulation, executed before the start of operations on the basis of a first design, allows 'what-if' process evaluation and helps ensure the quality of the process in an uncertain context. Simulating nuclear waste flows before evacuation from the site will help reduce the cost and duration of the decommissioning process by optimizing the planning and use of resources, transitional storage and expensive radioactive waste containers. Additional benefits are expected for the governance of waste evacuation, since it will enable shared responsibility for the waste flows.
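The study's simulations were built in the commercial FLEXSIM 3-D environment, so they are not reproduced here. To illustrate only the underlying discrete-event idea (time-ordered waste arrivals accumulating in transitional storage until a container is full and evacuated), here is a minimal stdlib-only sketch; the event times, volumes and container capacity are hypothetical values chosen for illustration.

```python
import heapq

def simulate_waste_flow(waste_events, container_capacity):
    """Minimal discrete-event sketch of waste accumulation and evacuation.
    waste_events: iterable of (time, volume) produced by dismantling tasks.
    A container is evacuated as soon as it is full.
    Returns (containers_shipped, leftover_volume, last_event_time)."""
    events = list(waste_events)
    heapq.heapify(events)  # process events in chronological order
    buffer_volume, shipped, last_time = 0.0, 0, 0.0
    while events:
        time, volume = heapq.heappop(events)
        last_time = time
        buffer_volume += volume  # waste arrives in transitional storage
        while buffer_volume >= container_capacity:
            buffer_volume -= container_capacity
            shipped += 1  # full container evacuated from the site
    return shipped, buffer_volume, last_time

shipped, leftover, end = simulate_waste_flow([(1, 3.0), (2, 4.0), (5, 6.0)], 5.0)
print(shipped, leftover, end)  # 2 containers shipped, 3.0 units left at t = 5
```

Even in this toy form, the mechanism shows where the optimization levers sit: the buffer level exposes saturation of transitional storage, and the shipping rule governs how efficiently expensive containers are used.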
Keywords: Nuclear decommissioning, logistical optimization, decision-support framework, waste management.