Search results for: deep soil mixing column
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6127

457 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks

Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba

Abstract:

Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete’s movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture that further refines pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete’s movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations and depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the Martin Luther King labs at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. 
Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
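The dilated convolutions the abstract credits with capturing long-range dependencies can be illustrated with a minimal, dependency-free sketch. The function names and the receptive-field arithmetic below are illustrative only, not taken from the authors' implementation:

```python
# Minimal illustration of a 1-D dilated convolution, the building block the
# abstract credits with capturing long-range dependencies. A dilation of d
# samples the input every d-th position, so a kernel of size k covers a
# receptive field of (k - 1) * d + 1 without adding parameters.

def dilated_conv1d(x, kernel, dilation=1):
    """Valid (no-padding) 1-D convolution with the given dilation."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # input span seen by one output
    return [
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ]

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated conv layers (stride 1)."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

if __name__ == "__main__":
    x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    print(dilated_conv1d(x, [1.0, 1.0], dilation=2))  # sums samples 2 apart
    # Stacking kernel-size-3 layers with dilations 1, 2, 4 covers 15 inputs:
    print(receptive_field(3, [1, 2, 4]))
```

Stacking layers with exponentially growing dilations is what lets such a network relate distant joints in a frame at modest computational cost.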

Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN

Procedia PDF Downloads 15
456 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods

Authors: Mohammad Arabi

Abstract:

The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. 
This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
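The temporal and frequency features described above can be sketched with the standard library alone. The particular indicators chosen here (RMS, kurtosis, crest factor, dominant frequency) are common in bearing diagnostics but are illustrative; they are not necessarily the authors' exact feature set:

```python
# Hedged sketch of temporal and frequency feature extraction from a bearing
# vibration signal, using only the Python standard library.
import cmath
import math

def temporal_features(signal):
    """Common time-domain indicators for bearing diagnostics."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(v * v for v in signal) / n)
    var = sum((v - mean) ** 2 for v in signal) / n
    # Kurtosis rises sharply when impulsive bearing faults appear.
    kurt = (sum((v - mean) ** 4 for v in signal) / n) / (var ** 2) if var else 0.0
    crest = max(abs(v) for v in signal) / rms if rms else 0.0
    return {"rms": rms, "kurtosis": kurt, "crest_factor": crest}

def dominant_frequency(signal, sample_rate):
    """Frequency bin with the largest DFT magnitude (naive O(n^2) DFT)."""
    n = len(signal)
    mags = [
        abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
        for k in range(n // 2)
    ]
    k_max = max(range(1, n // 2), key=lambda k: mags[k])  # skip the DC bin
    return k_max * sample_rate / n

if __name__ == "__main__":
    # Synthetic "vibration": a 50 Hz tone sampled at 400 Hz for one second.
    fs = 400
    sig = [math.sin(2 * math.pi * 50 * t / fs) for t in range(fs)]
    print(temporal_features(sig)["kurtosis"])  # 1.5 for a pure sine
    print(dominant_frequency(sig, fs))         # 50.0
```

Feature vectors like these, computed for normal and faulty signals, are what the ANOVA, t-tests, and SVM classifiers in the study would then operate on.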

Keywords: electric motor, fault detection, frequency features, temporal features

Procedia PDF Downloads 21
455 The Debureaucratization Strategy for the Portuguese Health Service through Effective Communication

Authors: Fernando Araujo, Sandra Cardoso, Fátima Fonseca, Sandra Cavaca

Abstract:

A debureaucratization strategy for the Portuguese Health Service was adopted by the Executive Board of the SNS, in close articulation with the Shared Services of the Ministry of Health. Two of its main dimensions focused on sick leaves (SL), which were turning primary health care (PHC) units into administrative institutions and limiting patients' access. The self-declaration of illness (SDI) project, through the National Health Service Contact Centre (SNS24), began on May 1, 2023, and has already resulted in the issuance of more than 300,000 SDIs without the need to allocate resources from the National Health Service (NHS). This political decision allows each citizen, up to 2 times per year and for up to 3 days each time, to report their own illness in a dematerialized way, on their own responsibility, and thereby justify their absence from work, although under Portuguese law no salary is paid for these first three days. With this digital approach, it is no longer necessary to go to a PHC unit and occupy its time solely to obtain an SL. Through this measure, bureaucracy has been reduced and the system has been refocused on users, improving the lives of citizens and reducing the administrative burden on PHC, which now has more consultation time for users who need it. The second initiative, which began on March 1, 2024, allows SLs to be issued in the emergency departments (ED) of public hospitals and in health institutions of the social and private sectors. It is intended that a user who has suffered an acute urgent illness and has been observed in the ED of a public hospital, or in a private or social entity, no longer needs to go to PHC solely to apply for the respective SL. Since March 1, 54,453 SLs have been issued: 242 in private or social sector institutions, 6,918 in public hospitals (of which 134 were in the ED), and 47,292 in PHC. 
This approach has proven technically robust, allows immediate resolution of problems, and differentiates the work of doctors. However, it remains important to safeguard the proper functioning of the ED by preventing non-urgent users from going there only to obtain an SL. Thus, to make better use of existing resources, the extension of SL issuance was operationalized in a balanced way, allowing SLs to be issued in hospital EDs only to critically ill patients or patients referred by INEM, SNS24, or PHC. In both cases, an intense public campaign was implemented to explain how the measures work and their benefits for patients. In satisfaction surveys, more than 95% of patients and doctors were satisfied with the solutions and asked for extensions to other areas. The administrative simplification agenda of the NHS continues its effective development. For the success of this debureaucratization agenda, the key factors are effective communication and the ability to reach patients and health professionals in order to increase health literacy and the correct use of the NHS.

Keywords: debureaucratization strategy, self-declaration of illness, sick leaves, SNS24

Procedia PDF Downloads 39
454 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector’s accuracy, as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. 
All data-generation methods were processed without manual interaction. As media-based remote monitoring becomes the dominant approach for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
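The genus- and species-level accuracy figures reported above come down to simple bookkeeping over (true label, predicted label) pairs. The sketch below is purely illustrative; the labels and the convention of taking the genus as the first token of a "Genus species" string are assumptions, not the authors' pipeline:

```python
# Accuracy bookkeeping for a species classifier: overall accuracy plus
# species-level accuracy computed within each genus class.
from collections import defaultdict

def accuracy(pairs):
    """Fraction of predictions that exactly match the true label."""
    return sum(t == p for t, p in pairs) / len(pairs)

def per_genus_accuracy(pairs):
    """Species-level accuracy within each genus (first token of the label)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for true, pred in pairs:
        genus = true.split()[0]
        totals[genus] += 1
        hits[genus] += (true == pred)
    return {g: hits[g] / totals[g] for g in totals}

if __name__ == "__main__":
    preds = [
        ("Carcharodon carcharias", "Carcharodon carcharias"),
        ("Carcharodon carcharias", "Isurus oxyrinchus"),
        ("Isurus oxyrinchus", "Isurus oxyrinchus"),
        ("Isurus paucus", "Isurus oxyrinchus"),
    ]
    print(accuracy(preds))            # 0.5
    print(per_genus_accuracy(preds))  # {'Carcharodon': 0.5, 'Isurus': 0.5}
```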

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 98
453 The Effects of Extreme Precipitation Events on Ecosystem Services

Authors: Szu-Hua Wang, Yi-Wen Chen

Abstract:

Urban ecosystems are complex coupled human-environment systems. They contain abundant natural resources for producing natural assets and attract urban assets that consume natural resources for urban development. Urban ecosystems provide several ecosystem services, including provisioning, regulating, cultural, and supporting services. Rapid global climate change exposes urban ecosystems and their ecosystem services to various natural disasters. Numerous natural disasters have occurred around the world over the past two decades amid constant changes in the frequency and intensity of extreme weather events. In Taiwan, hydrological disasters have received particular attention due to the potentially high sensitivity of Taiwan’s cities to climate change and its impacts. However, climate change not only causes extreme weather events directly but also indirectly affects the interactions among humans, ecosystem services, and their dynamic feedback processes. Therefore, this study adopts a systematic method, solar energy synthesis, based on the concept of eco-energy analysis. The Taipei area, the most densely populated area in Taiwan, is selected as the study area. The changes in ecosystem services between 2015 and Typhoon Soudelor are compared in order to investigate the impacts of extreme precipitation events on ecosystem services. The results show that forest areas are generally the largest contributors of energy to ecosystem services in the Taipei area. Different soil textures in different subsystems have different upper limits on water content or substances. The major contribution to ecosystem services in the study area is natural hazard regulation provided by the surface water resource areas. During Typhoon Soudelor, freshwater supply in the forest areas became the main contribution. Erosion control was the main ecosystem service affected by Typhoon Soudelor. 
The second and third most affected ecosystem services were hydrologic regulation and food supply. Due to the interactions among ecosystem services, freshwater supply, water purification, and waste treatment were also severely affected.

Keywords: ecosystem, extreme precipitation events, ecosystem services, solar energy synthesis

Procedia PDF Downloads 131
452 Insecticidal and Repellent Efficacy of Clove and Lemongrass Oils Against Museum Pest, Lepisma Saccharina (Zygentoma: Lepismatidae)

Authors: Suboohi Nasrin, MHD. Shahid, Abduraheem K.

Abstract:

India is a tropical country, and it is estimated that biotic and abiotic agents are the major factors in the destruction and deterioration of archival materials like herbaria, paper, cellulose, bookbindings, etc. Silverfish, German cockroaches, termites, booklice, tobacco beetles, and carpet beetles are the common insect pests in museums and cause deterioration of museum specimen collections. Among them, silverfish is one of the most notorious pests and is primarily responsible for the deterioration of archival materials. Different management strategies, such as chemical insecticides, fungicides, herbicides, and nematicides, have so far been applied to overcome this problem. However, synthetic molecules disturb the ecological balance, have detrimental effects on human health, and reduce beneficial microbial flora and fauna. In view of this, a number of chemicals have been banned or advised against because of their long-lasting persistence in soil and water ecosystems and their carcinogenicity. The authors therefore turned to natural products with biocidal activity as cost-effective and eco-friendly alternatives. In this study, various concentrations (30, 60 and 90 ml/L) of clove and lemongrass essential oils at different treatment durations (30, 60, 90 and 120 minutes) were investigated for repellent and insecticidal effects against silverfish. Two-way ANOVA revealed that mortality was significantly influenced by oil concentration and treatment duration, and the interaction between the two independent factors was also significant. The mortality rate increased with increasing clove oil concentration, and 100% mortality was recorded at 0.9 ml at 120 minutes. Treatment duration had the strongest effect on the mortality rate of silverfish. Clove oil had a greater effect on silverfish than lemongrass oil. 
Repellency of adult silverfish was likewise dependent on oil concentration and treatment duration: increases in both resulted in a higher repellency percentage. Clove oil was more effective, showing maximum repellency of 80.00% at the highest concentration of 0.9 ml/cm2, while the highest repellency for lemongrass was 33.4% at 0.9 ml/cm2 in the treated area.

Keywords: adult silverfish, oils, oil concentration, treatment duration, mortality (%) and repellency

Procedia PDF Downloads 152
451 Florida’s Groundwater and Surface Water System Reliability in Terms of Climate Change and Sea-Level Rise

Authors: Rahman Davtalab

Abstract:

Florida is one of the most vulnerable states to natural disasters among the 50 states of the USA. The state is exposed to tropical storms, hurricanes, storm surge, landslides, etc. Besides these natural phenomena, global warming, sea-level rise, and other anthropogenic environmental changes create a very complicated and unpredictable system for decision-makers. In this study, we highlight the effects of climate change and sea-level rise on surface water and groundwater systems for three different geographical locations in Florida: the Main Canal of Jacksonville Beach (in the northeast of Florida, adjacent to the Atlantic Ocean), Grace Lake in central Florida, far from the surrounding coastline, and Mc Dill, adjacent to Tampa Bay and the Gulf of Mexico. An integrated hydrologic and hydraulic model was developed and simulated for all three cases, covering surface water, groundwater, or a combination of both. For the Main Canal-Jacksonville Beach case, the investigation showed that a 76 cm sea-level rise by the 2060 time horizon could increase the flow velocity of the tide cycle at the main canal's outlet and headwater. This case also revealed how sea-level rise could change the tide duration, potentially affecting the coastal ecosystem. As expected, sea-level rise can raise the groundwater level. Therefore, for the Mc Dill case, the effect of groundwater rise on soil storage and the performance of stormwater retention ponds was investigated. The study showed that sea-level rise increases the pond’s seasonal high water level by up to 40 cm by the 2060 time horizon. The reliability of the retention pond drops from 99% under current conditions to 54% in the future. The results also showed that the retention pond could not retain and infiltrate the designed treatment volume within 72 hours, a significant indication of increasing pollutant loads in the future. 
The Grace Lake case study investigates the effects of climate change on groundwater recharge. Using dynamically downscaled data, it showed that groundwater recharge can decline by up to 24% by the mid-21st century.

Keywords: groundwater, surface water, Florida, retention pond, tide, sea level rise

Procedia PDF Downloads 165
450 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements to an architecture carries risk given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations, not during component failures or outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts are undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the affected functional availability of the system can be determined. 
A machine-learning, reinforcement-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric that rates their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and the resulting architectural recommendation, against the baseline requirements and against existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
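The core of the proposed degradation sweep can be sketched in miniature: distribute functions across nodes, knock nodes out, and record what fraction of functions remains reachable from a control node over the surviving links. The topology, function mapping, and failure sets below are invented for illustration; they are not the paper's simulation:

```python
# Toy functional-availability metric for an architecture under node failures.
from collections import deque

def reachable(nodes, links, start):
    """Nodes reachable from `start` over undirected links (BFS)."""
    adj = {n: set() for n in nodes}
    for a, b in links:
        if a in adj and b in adj:       # drop links touching failed nodes
            adj[a].add(b)
            adj[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

def functional_availability(nodes, links, functions, control, failed):
    """Fraction of functions whose host node is still reachable from control."""
    if control in failed:
        return 0.0
    up = [n for n in nodes if n not in failed]
    ok = reachable(up, links, control)
    return sum(host in ok for host in functions.values()) / len(functions)

if __name__ == "__main__":
    nodes = ["ctl", "edge1", "edge2", "cloud"]
    links = [("ctl", "edge1"), ("ctl", "edge2"), ("edge1", "cloud")]
    functions = {"sense": "edge1", "store": "cloud", "actuate": "edge2"}
    for failed in [set(), {"edge1"}]:   # sweep over attack scenarios
        print(functional_availability(nodes, links, functions, "ctl", failed))
```

Averaging this quantity over many sampled failure sets and attack intensities gives a single resilience score per candidate architecture, which is the kind of metric the abstract proposes for comparing candidates.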

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 82
449 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations

Authors: Till Gramberg

Abstract:

In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization," along with challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of machine and plant engineering PPM. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Utilizing generative AI technologies, companies can more quickly identify and respond to trends and market changes, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment. 
It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management.

Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering

Procedia PDF Downloads 55
448 Association between G2677T/A MDR1 Polymorphism with the Clinical Response to Disease Modifying Anti-Rheumatic Drugs in Rheumatoid Arthritis

Authors: Alan Ruiz-Padilla, Brando Villalobos-Villalobos, Yeniley Ruiz-Noa, Claudia Mendoza-Macías, Claudia Palafox-Sánchez, Miguel Marín-Rosales, Álvaro Cruz, Rubén Rangel-Salazar

Abstract:

Introduction: In patients with rheumatoid arthritis (RA), resistance or poor response to disease-modifying antirheumatic drugs (DMARDs) may reflect increased P-glycoprotein (P-gp) expression. P-gp may be important in mediating the efflux of DMARDs from the cell. In addition, P-gp is involved in the transport of the cytokines IL-1, IL-2, and IL-4 from activated normal lymphocytes to the surrounding extracellular matrix, thus influencing RA activity. The involvement of P-gp in the transmembrane transport of cytokines can therefore modulate the efficacy of DMARDs. It has been shown that the number of lymphocytes with P-gp activity is increased in patients with RA; P-gp expression could thus be related to RA activity and could be a predictor of poor response to therapy. Objective: To evaluate whether the G2677T/A MDR1 polymorphism is associated with differences in the rate of therapeutic response to disease-modifying antirheumatic agents in patients with RA. Material and Methods: A prospective cohort study was conducted. Fifty-seven patients with RA were included. They had active disease according to DAS-28 (score >3.2). We excluded patients receiving biological agents. All patients were followed for 6 months in order to identify the rate of therapeutic response according to the American College of Rheumatology (ACR) criteria. At baseline, peripheral blood samples were taken to identify the G2677T/A MDR1 polymorphism using allele-specific PCR. The fragment was identified by electrophoresis in polyacrylamide gels stained with ethidium bromide. For statistical analysis, the genotypic and allelic frequencies of the MDR1 gene polymorphism in responders and non-responders were determined. Chi-square tests, as well as relative risks with 95% confidence intervals (95% CI), were computed to identify differences in the likelihood of achieving therapeutic response. 
Results: RA patients had a mean age of 47.33 ± 12.52 years; 87.7% were women, with a mean DAS-28 score of 6.45 ± 1.12. At 6 months, the rate of therapeutic response was 68.7%. The observed genotype frequencies were: G/G 40%, T/T 32%, A/A 19%, G/T 7%, and A/A 2%. Patients with the G allele developed, at 6 months of treatment, a higher rate of therapeutic response as assessed by ACR20 compared to patients with other alleles (p=0.039). Conclusions: Patients with the G allele of the G2677T/A MDR1 polymorphism had a higher rate of therapeutic response to DMARDs at 6 months. These preliminary data support the need for a deeper evaluation of these and other genotypes as factors that may influence therapeutic response in RA.
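The relative risk with 95% CI described in the methods follows the standard log(RR) normal approximation for a 2x2 table, sketched below. The counts in the example are made up for illustration; they are not the study's data:

```python
# Relative risk of therapeutic response (carriers vs. non-carriers) with a
# 95% CI via the log(RR) normal approximation.
import math

def relative_risk(a, b, c, d):
    """RR and 95% CI for a 2x2 table:
    a = exposed responders,     b = exposed non-responders,
    c = non-exposed responders, d = non-exposed non-responders."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

if __name__ == "__main__":
    # Hypothetical counts: 20/25 G-allele carriers respond vs. 15/32 others.
    rr, lo, hi = relative_risk(20, 5, 15, 17)
    print(round(rr, 2), round(lo, 2), round(hi, 2))
```

A CI whose lower bound stays above 1 would indicate a statistically significant association, which is the kind of evidence the study reports for the G allele.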

Keywords: pharmacogenetics, MDR1, P-glycoprotein, therapeutic response, rheumatoid arthritis

Procedia PDF Downloads 190
447 Fake Importers Behavior in the Algerian City – The Case of the City of Eulma

Authors: Mohamed Gherbi

Abstract:

Informal trade has invaded Algerian cities, especially their peripheries. About 1,368 informal markets were registered in 2013, the most important of which are known as 'Doubaï markets'. They have appeared since the adoption of the market economy system in 1990, which permitted the entry of new actors: importers, but also fake ones. The majority of them were 'ex-Trabendistes' who chose to settle and invest in big and small cities of central and eastern Algeria, mainly Algiers, El Eulma, Aïn El Fekroun, Tadjnenent, and Aïn M’lila. This study focuses on the case of the city of El Eulma, which counts more than 1,000 importers (most of them fake). They have changed the image and architecture of some important streets of the city without respecting rules of urbanism, such as those included in the building permit. The case of the 'Doubaï' place in El Eulma illustrates this situation. This area is not covered by a Soil Occupation Plan (which governs the design of urban spaces), even though this plan covers other zones immediately surrounding it. The importers, helped by the wholesale and retail traders installed in the 'Doubaï' place, have converted spaces inside and outside residential buildings into warehouses and points of sale. They have occupied sidewalks to display their goods, imported predominantly from Southeast Asian countries. The resulting scenery partly resembles the bazaars of the Middle East and of Chinese cities like Yiwu. These features characterize the local atmosphere and give this part of the city its particularity. A tide of customers from different cities, and from outside Algeria, comes daily to visit this district. The surrounding zones have undergone the same change, following the model of the 'Doubaï' place. 
Consequently, vehicular traffic has ended up stifling an important part of the city, and the prices of land and real estate have reached exorbitant values, comparable to prices charged in Paris, due to rampant speculation that has reached alarming dimensions. Similarly, the rental of commercial premises has not escaped this logic. This paper explains the reasons for this change and the logic of the importers, as revealed through their actions in different spaces of the city.

Keywords: Doubaï place, design of urban spaces, fake importers, informal trade

Procedia PDF Downloads 399
446 European Hinterland and Foreland: Impact of Accessibility, Connectivity, Inter-Port Competition on Containerization

Authors: Dial Tassadit Rania, Figueiredo De Oliveira Gabriel

Abstract:

In this paper, we investigate the relationship between ports and their hinterland and foreland environments, and the competitive relationships among the ports themselves. These two environments are changing and evolving, introducing new challenges for commercial and economic development at the regional, national and international levels. With the rise of containerization, shipping costs and port handling costs have decreased considerably due to economies of scale; the volume of maritime trade has increased substantially and the markets served by ports have expanded. On this basis, overlapping hinterlands can give rise to competition between ports. Our main contribution compared to the existing literature is to build a set of hinterland, foreland and competition indicators. Using these indicators, we investigate the effect of hinterland accessibility, foreland connectivity and inter-port competition on the containerized traffic of European ports. For this, we use a panel database covering 2004 to 2014. Our hinterland indicators are two indicators of accessibility; they describe the market potential of a port and are calculated using information on population and wealth (GDP). We calculate population and wealth for different neighborhoods within distances from a port ranging from 100 to 1,000 km. For the foreland, we produce two indicators: port connectivity and the number of partners of each port. Finally, we compute two indicators of inter-port competition and a market concentration indicator (Hirschman-Herfindahl) for different neighborhood distances around the port. We then apply a fixed-effects model to test the relationships above and, again with a fixed-effects model, conduct a sensitivity analysis for each of these indicators to support the results obtained. 
The econometric results of the general model, given by regressing the accessibility indicators, the LSCI of port i, and the inter-port competition indicator on the containerized traffic of European ports, show a positive and significant effect for accessibility to wealth but not to population. The results are also positive and significant for the two indicators of connectivity and competition. One of the main results of this research is that port development, measured here by growth in containerized traffic, is strongly related to the development of the port's hinterland and foreland environments. In addition, it is market potential, given by the wealth of the hinterland, that has an impact on a port's containerized traffic; accessibility to a large population pool is not important for understanding the dynamics of containerized port traffic. Furthermore, in order to continue to develop, a port must penetrate its hinterland deeply, beyond 100 km around the port, and seek markets outside this perimeter. Port authorities could focus their marketing efforts on the immediate hinterland, which, as the results show, may not be captive, and thus engage new approaches to port governance to make it more attractive.
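The market concentration indicator mentioned above can be sketched in a few lines. The following is an illustration of the Hirschman-Herfindahl index (sum of squared market shares) computed over the container traffic of ports within a neighborhood distance; the traffic figures are invented for illustration and are not the paper's data.

```python
def herfindahl_index(traffic):
    """HHI = sum of squared market shares; ranges from 1/n
    (n equal competitors) up to 1.0 (monopoly)."""
    total = sum(traffic)
    return sum((t / total) ** 2 for t in traffic)

# Hypothetical container traffic (TEU) of four ports within one
# neighborhood distance of each other:
ports = [5_000_000, 3_000_000, 1_500_000, 500_000]
hhi = herfindahl_index(ports)  # shares 0.5, 0.3, 0.15, 0.05 -> 0.365
```

A higher HHI for a given neighborhood distance indicates a more concentrated (less competitive) local port market.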

Keywords: accessibility, connectivity, European containerization, European hinterland and foreland, inter-port competition

Procedia PDF Downloads 178
445 Assessment of Climate Change Impacts on the Hydrology of Upper Guder Catchment, Upper Blue Nile

Authors: Fikru Fentaw Abera

Abstract:

Climate change alters regional hydrologic conditions and results in a variety of impacts on water resource systems, and such hydrologic changes will affect almost every aspect of human well-being. The goal of this paper is to assess the impact of climate change on the hydrology of the Upper Guder catchment, located in northwest Ethiopia. GCM-derived scenarios (HadCM3 A2a and B2a SRES emission scenarios) were used for the climate projection. The statistical downscaling model (SDSM) was used to generate possible future local meteorological variables in the study area. The downscaled data were then used as input to the Soil and Water Assessment Tool (SWAT) model to simulate the corresponding future streamflow regime in the Upper Guder catchment of the Abay River Basin. A semi-distributed hydrological model, SWAT, was developed, and Generalized Likelihood Uncertainty Estimation (GLUE) was utilized for uncertainty analysis; GLUE is linked with SWAT in the calibration and uncertainty program known as SWAT-CUP. Three benchmark periods were simulated for this study: the 2020s, 2050s and 2080s. The time series generated by the HadCM3 GCM and SDSM indicate a significant increasing trend in maximum and minimum temperature and a slight increasing trend in precipitation under both the A2a and B2a emission scenarios at both the Gedo and Tikur Inch stations for all three benchmark periods. A hydrologic impact analysis was then made with the downscaled temperature and precipitation time series as input to the hydrological model SWAT for both the A2a and B2a emission scenarios. The model output shows that annual flow volume may increase by up to 35% under both emission scenarios in all three future benchmark periods, and all seasons show an increase in flow volume for both scenarios across all time horizons.
Potential evapotranspiration in the catchment will also increase annually, on average by 3-15% for the 2020s and by 7-25% for the 2050s and 2080s, under both the A2a and B2a emission scenarios.

Keywords: climate change, Guder sub-basin, GCM, SDSM, SWAT, SWAT-CUP, GLUE

Procedia PDF Downloads 343
444 Management of Caverno-Venous Leakage: A Series of 133 Patients with Symptoms, Hemodynamic Workup, and Results of Surgery

Authors: Allaire Eric, Hauet Pascal, Floresco Jean, Beley Sebastien, Sussman Helene, Virag Ronald

Abstract:

Background: Caverno-venous leakage (CVL) is a devastating, though barely known, disease: it is the first cause of major physical impairment in men under 25 and is responsible for 50% of resistance to phosphodiesterase-5 inhibitors (PDE5-I), affecting 30 to 40% of users of this medication class. In this condition, too-early blood drainage from the corpora cavernosa prevents penile rigidity and penetration during sexual intercourse. The role of conservative surgery in this disease remains controversial. Aim: To assess the complications and results of combined open surgery and embolization for CVL. Method: Between June 2016 and September 2021, 133 consecutive patients underwent surgery in our institution for CVL causing severe erectile dysfunction (ED) resistant to oral medical treatment. Procedures combined vein embolization and ligation with microsurgical techniques. We performed pre- and post-operative clinical (Erection Hardness Scale: EHS) and hemodynamic evaluation by duplex sonography in all patients. Before surgery, the CVL network was visualized by computed tomography cavernography. Penile EMG was performed in cases of diabetes or other suspected neurological conditions. All patients were optimized for hormonal status; data were prospectively recorded. Results: Clinical signs suggesting CVL were ED since before age 25, loss of erection when changing position, and penile rigidity varying according to position. The main complications were minor pulmonary embolism in 2 patients (one after airline travel, one with a heterozygous Factor V Leiden mutation), one infection, three hematomas requiring reoperation, and one decrease in glans sensitivity lasting for more than one year. Mean pre-operative pharmacologic EHS was 2.37+/-0.64; mean post-operative pharmacologic EHS was 3.21+/-0.60, p<0.0001 (paired t-test). The mean EHS variation was 0.87+/-0.74. After surgery, 81.5% of patients had a pharmacologic EHS equal to or over 3, allowing for intercourse with penetration.
Three patients (2.2%) experienced a lower post-operative EHS. The main cause of failure was leakage from the deep dorsal aspect of the corpora cavernosa. At 14 months of follow-up, 83.2% of patients had a clinical EHS equal to or over 3, allowing for sexual intercourse with penetration, one-third of them without any medication. Five patients received a penile implant after unsuccessful conservative surgery. Conclusion: Open surgery combined with embolization is an efficient approach to CVL causing severe erectile dysfunction.
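The pre/post comparison above relies on a paired t-test, whose statistic is the mean of the per-patient differences divided by its standard error. A minimal sketch, using invented EHS scores rather than the study's data:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic: mean per-subject difference divided by
    its standard error, stdev(d) / sqrt(n)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

# Illustrative pre- and post-operative EHS scores for 8 patients:
pre  = [2, 2, 3, 2, 3, 2, 2, 3]
post = [3, 3, 4, 3, 3, 3, 3, 4]
t = paired_t(pre, post)  # -> 7.0 for these invented scores
```

The resulting t value would then be compared against the t distribution with n-1 degrees of freedom to obtain the p-value reported in the abstract.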

Keywords: erectile dysfunction, cavernovenous leakage, surgery, embolization, treatment, result, complications, penile duplex sonography

Procedia PDF Downloads 130
443 Comparison of On-Site Stormwater Detention Real Performance and Theoretical Simulations

Authors: Pedro P. Drumond, Priscilla M. Moura, Marcia M. L. P. Coelho

Abstract:

The purpose of an On-site Stormwater Detention (OSD) system is to detain the additional stormwater runoff caused by impervious areas, in order to keep the peak flow at its pre-urbanization level. In recent decades, these systems have been built in many cities around the world, yet their real efficiency remains unknown due to a lack of research, especially with regard to monitoring their actual performance. Thus, this study compares water-level monitoring data from an OSD built in Belo Horizonte, Brazil, with the results of the theoretical methods usually adopted in OSD design. Two theoretical simulations were made: one using the Rational Method with the Modified Puls method, and another using the Soil Conservation Service (SCS) method with the Modified Puls method. The monitoring data were obtained with a water-level sensor installed inside the reservoir and connected to a data logger. OSD performance was compared for 48 rainfall events recorded from April 2015 to March 2017. The comparison of maximum water levels in the OSD showed that the results of the Rational/Puls and SCS/Puls simulations were, on average, 33% and 73% lower, respectively, than those monitored. The Rational/Puls results were significantly higher than the SCS/Puls results only in the more frequent events; for events with average recurrence intervals of 5, 10 and 200 years, the maximum water heights were similar in both simulations. The results also showed that the duration of the rainfall events was close to the duration of the monitored hydrographs, and that the rising and recession times of the hydrographs calculated with the Rational Method represented the monitored hydrographs better than those of the SCS Method. The comparison indicates that the real discharge coefficient could be higher than the value of 0.61 adopted in the Puls simulations. New research evaluating real OSD performance should be developed; in order to verify the peak-flow damping efficiency and the value of the discharge coefficient, it is necessary to monitor the inflow and outflow of an OSD in addition to the water level inside it.
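The reservoir-routing idea behind the simulations can be illustrated with a simplified explicit level-pool scheme: the tank stores the difference between inflow and orifice outflow at each time step. This is a stand-in for the Modified Puls storage-indication method, not the authors' model; the tank geometry, orifice size and hydrograph are invented, and the discharge coefficient 0.61 is the value cited in the abstract.

```python
import math

def route(inflow, dt, tank_area, orifice_area, cd=0.61, g=9.81):
    """Explicit level-pool routing of an inflow hydrograph (m3/s)
    through a vertical-walled OSD tank draining via an orifice,
    Q = Cd * a * sqrt(2 * g * H). dt in seconds, areas in m2."""
    h, out = 0.0, []
    for q_in in inflow:
        q_out = cd * orifice_area * math.sqrt(2 * g * h)
        out.append(q_out)
        # water balance: storage change = (inflow - outflow) * dt
        h = max(0.0, h + (q_in - q_out) * dt / tank_area)
    return out

# Hypothetical triangular inflow hydrograph, 60 s time steps:
inflow = [0.0, 0.05, 0.10, 0.15, 0.10, 0.05, 0.0, 0.0, 0.0, 0.0]
outflow = route(inflow, dt=60, tank_area=100.0, orifice_area=0.01)
```

The routed peak outflow is much lower than the inflow peak, which is precisely the damping effect whose real-world magnitude the monitoring campaign set out to verify.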

Keywords: best management practices, on-site stormwater detention, source control, urban drainage

Procedia PDF Downloads 169
442 The Effect of Annual Weather and Sowing Date on Different Genotype of Maize (Zea mays L.) in Germination and Yield

Authors: Ákos Tótin

Abstract:

In crop production, the most modern hybrids are available to us; therefore, yield and yield stability are determined by the agro-technology. The purpose of the experiment is to adapt modern agro-technology to the new types of hybrids. The long-term experiment was set up in 2015-2016 on chernozem soil in the Hajdúság region (eastern Hungary). The plots were sown at a density of 75 thousand plants ha-1. We examined several widely grown hybrids of Hungary. The studies conducted were: germination dynamics, growing dynamics, and the effect of annual weather on yield. We used three different sowing dates (early, average and late) and counted how many plants germinated during the germination process. In the experiment, we observed the germination dynamics of 6 hybrids in 4 replications; in each replication, we counted the germinated plants in an area 2 m long and 2 rows wide. Data are shown as the average of the 6 hybrids and 4 replications. Growing dynamics were measured from a plant height of 10 cm (4-6 leaves); we measured the height of 10 plants at two-week intervals. The yield was measured by a special plot harvester, the Sampo Rosenlew 2010, which weighed the harvested plot and also took a sample from it. We determined the water content of the samples for the water-release dynamics and then calculated the yield (t/ha) of each plot at 14% moisture content to compare them. We evaluated the data using Microsoft Excel 2015. The annual weather of each crop year determines maize germination dynamics because the amount of heat is decisive for the plants; in a cooler crop year, the weather prolongs germination. In the 2015 crop year, the weather was cold at the beginning, which prolonged germination of the first sowing, but the second and third sowings germinated faster. In the 2016 crop year, the weather was much more favorable for the plants, so the first sowing germinated faster than in the previous year. Afterwards, the weather cooled down; therefore, the second and third sowings germinated more slowly than in the previous year. Statistical analysis determined that there is a significant difference between the growing dynamics of the early and late sowing dates. In 2015, the first sowing date gave the highest yield, the average sowing time the second highest, and the late sowing date the lowest.
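The conversion of harvested yield to a common 14% moisture basis follows from conservation of dry matter: the dry fraction (100 - m)% of the fresh weight must equal 86% of the corrected weight. A short sketch with an invented plot value:

```python
def yield_at_14(fresh_t_ha, moisture_pct):
    """Convert fresh yield (t/ha) at a measured grain moisture (%)
    to the standard 14% moisture basis. Dry matter is conserved:
    y14 = fresh * (100 - m) / (100 - 14)."""
    return fresh_t_ha * (100.0 - moisture_pct) / 86.0

# Illustrative plot: 11.2 t/ha harvested at 22% grain moisture
y = yield_at_14(11.2, 22.0)  # about 10.16 t/ha at 14% moisture
```

Correcting every plot to the same moisture content is what makes the yields of different sowing dates and hybrids directly comparable.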

Keywords: germination, maize, sowing date, yield

Procedia PDF Downloads 215
441 Application of Geosynthetics for the Recovery of Located Road on Geological Failure

Authors: Rideci Farias, Haroldo Paranhos

Abstract:

The present work deals with the use of a drainage geocomposite as a deep-drainage element and a geogrid to reinforce the base of the embankment supporting the road pavement over geological faults on the TO-342 highway, between the cities of Miracema and Miranorte in the State of Tocantins, Brazil, which for many years was the main link between TO-010 and BR-153 beyond the city of Palmas, also in Tocantins. For this application, geotechnical and geological studies were carried out by means of SPT percussion drilling and rotary drilling to understand the problem, identifying the type of faults, the filling material, and the position of the water table. According to these studies, the defined route crosses a fault zone running longitudinally to the roadway, with strong breaking/fracturing, the presence of voids, intense alteration and advanced argillization of the rock, and with parts of the faults filled by organic, compressible soils leached from other horizons. Geotechnically, this geology presents a medium with a high hydraulic load and very low penetration resistance. For more than 20 years, the region showed constant excessive deformations in the upper layers of the pavement, which persisted after routine regularization, reconformation and re-compaction of the layers and application of the asphalt coating. The faults quickly propagated to the surface of the asphalt pavement, generating a longitudinal shear and forming steps (unevenness) of close to 40 cm, causing numerous accidents and discomfort to drivers, since the alignment lay on a horizontal curve. Several projects were presented to the region's highway department to solve the problem. Given the need for only partial closure of the roadway and the short execution time, the use of geosynthetics was proposed as the most adequate solution, taking into account the movement of the existing geological faults and the position of the water table in relation to the several layers of pavement and the faults. In order to avoid any flow of water into the body of the embankment and into the filling material of the faults, a drainage-curtain solution with a drainage geocomposite was executed at 4.0 meters depth, and, as a reinforcement element, a geogrid with a tensile strength of 200 kN/m was inserted at the base of the reconstituted embankment. Recent evaluations, 13 years after application of the solution, show the efficiency of the technique used, supported by the geotechnical studies carried out in the area.

Keywords: geosynthetics, geocomposite, geogrid, road, recovery, geological failure

Procedia PDF Downloads 156
440 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science that aims to create intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments in which the behavior of the AI system was determined in advance by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It must be examined how the products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach is more easily described by reference to the evolution of natural organisms, and, with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (under the world's most significant patent law regimes, namely those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, of the inventive step as such, and of the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept; the mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, where machine learning or an AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only the European patent law regimes but also the Chinese and Singaporean approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
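For readers unfamiliar with the genetic breeding models discussed, the following toy sketch shows the basic evolutionary loop (selection, crossover, mutation) on a trivial "one-max" fitness function. It is purely illustrative of the class of algorithms at issue, not any patented or patent-pending system.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60,
           mutation_rate=0.02, seed=1):
    """Minimal genetic-breeding loop: tournament selection,
    one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament: the fitter of two random parents
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            nxt.append([1 - g if rng.random() < mutation_rate else g
                        for g in child])            # bit-flip mutation
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)  # "one-max": fitness is simply the number of 1-bits
```

The legal question the paper raises maps directly onto this loop: no natural person authors the candidate solutions that emerge from repeated selection and variation, which is precisely what strains the inventor-must-be-a-person requirement.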

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 98
439 Encoding the Design of the Memorial Park and the Family Network as the Icon of 9/11 in Amy Waldman's the Submission

Authors: Masami Usui

Abstract:

After 9/11, the American literary scene was confronted with new perspectives that enabled both writers and readers to recognize hidden aspects of their political, economic, legal, social, and cultural phenomena. An argument arose over new and challenging multicultural aspects after 9/11, and this argument is represented by a tension over space related to 9/11. In Amy Waldman's The Submission (2011), the design of both the memorial park and the family networks plays a significant role in establishing progress toward understanding from multiple perspectives. The intriguing and controversial topic of racism is reflected in The Submission, where one young architect's blind entry to the competition for the Ground Zero memorial is nominated, yet he is confronted with strong objections and hostility as soon as he turns out to be a Muslim named Mohammad Khan. This 'Khan' issue, immediately enlarged into a controversial social issue on American soil, causes repeated acts of hostility toward Muslim women by ignorant citizens all over America. His idea for the park is to design a new concept that traces the cultural background of the open space. Against his will, his name is identified as the 'ingredient' of the networking of the resistant community with his supporters; on the other hand, post-9/11 hysteria and victimization are presented in such family associations as the Angry Family Members and the Grieving Family Members. These rapidly expanding networks, whether political or not, constructed through the internet, embody contemporary societal connection and representation. The contemporary quest for the significance of human relationships is recognized as a quest for global peace. Designing both the memorial park and the communication networks strengthens the process of facing shared conflicts and healing the survivors' trauma.
The tension between the idea and networking of the Garden for the memorial site and the collapse of Ground Zero signifies the double mission of the site: to establish a space to ease the wounded and to remember the catastrophe. Reading the design of these icons of 9/11 in The Submission means decoding the myth of globalization and its representations in this century.

Keywords: American literature, cultural studies, globalization, literature of catastrophe

Procedia PDF Downloads 512
438 Optimization of Fermentation Conditions for Extracellular Production of the Oncolytic Enzyme, L-Asparaginase, by New Subsp. Streptomyces Rochei Subsp. Chromatogenes NEAE-K Using Response Surface Methodology under Solid State Fermentation

Authors: Noura El-Ahmady El-Naggar

Abstract:

L-asparaginase is an important therapeutic enzyme used, in combination with other drugs, in the treatment of acute lymphoblastic leukemia in children. An L-asparaginase-producing actinomycete strain, NEAE-K, was isolated from a soil sample and identified, on the basis of morphological, cultural, physiological and biochemical properties together with its 16S rDNA sequence, as the new subspecies Streptomyces rochei subsp. chromatogenes NEAE-K; the sequencing product (1532 bp) was deposited in the GenBank database under accession number KJ200343. The study was conducted to screen the parameters affecting the production of L-asparaginase by Streptomyces rochei subsp. chromatogenes NEAE-K under solid state fermentation using a Plackett-Burman experimental design. Sixteen independent variables (incubation time, moisture content, inoculum size, temperature, pH, soybean meal + wheat bran, dextrose, fructose, L-asparagine, yeast extract, KNO3, K2HPO4, MgSO4.7H2O, NaCl, FeSO4.7H2O, and CaCl2) and three dummy variables were screened in a Plackett-Burman design of 20 trials. The most significant independent variables affecting enzyme production (dextrose, L-asparagine and K2HPO4) were further optimized by a central composite design. As a result, the following medium is optimal for the production of extracellular L-asparaginase by Streptomyces rochei subsp. chromatogenes NEAE-K under solid state fermentation: (g/L) soybean meal + wheat bran 15, dextrose 3, fructose 4, L-asparagine 8, yeast extract 2, KNO3 1, K2HPO4 2, MgSO4.7H2O 0.5, NaCl 0.1, FeSO4.7H2O 0.02, CaCl2 0.01; incubation time 7 days, moisture content 50%, inoculum size 3 mL, temperature 30°C, pH 8.5.
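The screening step above rests on estimating main effects from a two-level design: each factor's effect is the mean response at its high (+1) level minus the mean at its low (-1) level. The sketch below shows that computation on a small invented 2^3 factorial, not the study's 20-trial Plackett-Burman matrix or its data; the factor names follow the three variables the abstract reports as significant.

```python
def main_effect(levels, response):
    """Effect = mean response at the +1 level minus mean at the -1
    level, the estimator underlying Plackett-Burman screening."""
    hi = [y for x, y in zip(levels, response) if x == +1]
    lo = [y for x, y in zip(levels, response) if x == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Illustrative 2^3 full factorial in three factors:
dextrose   = [-1, +1, -1, +1, -1, +1, -1, +1]
asparagine = [-1, -1, +1, +1, -1, -1, +1, +1]
k2hpo4     = [-1, -1, -1, -1, +1, +1, +1, +1]
activity   = [20, 35, 28, 46, 22, 37, 30, 48]  # invented enzyme units

effects = {"dextrose": main_effect(dextrose, activity),
           "L-asparagine": main_effect(asparagine, activity),
           "K2HPO4": main_effect(k2hpo4, activity)}
```

Factors with the largest absolute effects (here dextrose, then L-asparagine) are the ones carried forward to response-surface optimization such as the central composite design.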

Keywords: streptomyces rochei subsp. chromatogenes neae-k, 16s rrna, identification, solid state fermentation, l-asparaginase production, plackett-burman design, central composite design

Procedia PDF Downloads 393
437 Geographic Information System-Based Map for Best Suitable Place for Cultivating Permanent Trees in South-Lebanon

Authors: Allaw Kamel, Al-Chami Leila

Abstract:

It is important to reduce human influence on natural resources by identifying appropriate land uses, and it is essential to carry out scientific land evaluation. This kind of analysis identifies the main factors of agricultural production and enables decision-makers to improve crop management in order to increase land capability. The key is to match the type and intensity of land use with its natural capability; in order to benefit from these areas and obtain good agricultural production, they must be fully organized and managed. Lebanon suffers from unorganized agricultural land use. We take south Lebanon as the study area, since it has the most fertile ground and a variety of crops. The study aims to identify and locate the most suitable areas to cultivate thirteen types of permanent trees: apples, avocados, stone fruits in coastal regions, stone fruits in mountain regions, bananas, citrus, loquats, figs, pistachios, mangoes, olives, pomegranates, and grapes. Several geographical factors are taken as criteria for selecting the best locations for cultivation: soil, rainfall, pH, temperature, and elevation are the main inputs used to create the final map. The input data for each factor are managed, visualized and analyzed using a Geographic Information System (GIS). GIS management tools are used to produce input maps identifying the suitable areas for each index, and the combination of the different index maps generates the final output map of the most suitable places for permanent-tree productivity. The output map is reclassified into three suitability classes: low, moderate, and high. The results show different locations suitable for different kinds of trees, and they reflect the importance of GIS in helping decision-makers find the most suitable location for every tree, obtaining higher productivity and a variety of crops.
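The overlay-and-reclassify step can be sketched per raster cell: each criterion is scored, the scores are combined with weights, and the total is binned into the three suitability classes. The criteria scoring functions, weights and thresholds below are invented for illustration and are not the study's calibration.

```python
def suitability(cell, weights, score_fns):
    """Weighted-overlay score for one raster cell, reclassified
    into low / moderate / high suitability."""
    s = sum(w * score_fns[k](cell[k]) for k, w in weights.items())
    return "high" if s >= 0.7 else "moderate" if s >= 0.4 else "low"

# Hypothetical scoring of each criterion to [0, 1], e.g. for olives:
score_fns = {
    "rain_mm": lambda v: min(v / 600.0, 1.0),       # enough rainfall
    "ph":      lambda v: 1.0 if 6.0 <= v <= 8.5 else 0.3,
    "elev_m":  lambda v: 1.0 if v <= 800 else 0.5,
}
weights = {"rain_mm": 0.4, "ph": 0.3, "elev_m": 0.3}

cell = {"rain_mm": 700, "ph": 7.2, "elev_m": 450}
cls = suitability(cell, weights, score_fns)  # -> "high" for this cell
```

Running this per cell over the rasterized study area and writing the class back to the grid yields the reclassified output map described in the abstract.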

Keywords: agricultural production, crop management, geographical factors, Geographic Information System, GIS, land capability, permanent trees, suitable location

Procedia PDF Downloads 129
436 Effects of Learner-Content Interaction Activities on the Context of Verbal Learning Outcomes in Interactive Courses

Authors: Alper Tolga Kumtepe, Erdem Erdogdu, M. Recep Okur, Eda Kaypak, Ozlem Kaya, Serap Ugur, Deniz Dincer, Hakan Yildirim

Abstract:

Interaction is one of the most important components of open and distance learning. According to Moore, who proposed one of the keystone frameworks on interaction types, there are three basic types of interaction: learner-teacher, learner-content, and learner-learner. Of these, learner-content interaction can without doubt be identified as the most fundamental, since all education is based on it. The efficacy, efficiency, and attractiveness of open and distance learning systems can be achieved through effective learner-content interaction. With the development of new technologies, interactive e-learning materials have come into common use as a resource in open and distance learning, alongside printed books. The intellectual engagement of learners with the content, that is, the course materials, may also affect their satisfaction with open and distance learning practices in general. Learner satisfaction holds an important place in open and distance learning, since it eventually contributes to the achievement of learning outcomes. By using learner-content interaction activities in course materials, Anadolu University, through its Open Education system, tries to involve learners in deep and meaningful learning practices. In particular, during the design and production of e-learning materials, identifying appropriate learner-content interaction activities within the context of learning outcomes is highly important. Considering the lack of studies adopting this approach, as well as its being a study on the use of e-learning materials in the Open Education system, this research holds considerable value in the open and distance learning literature.
In this respect, the present study aimed to investigate (a) which learner-content interaction activities included in interactive courses are the most effective for learners' achievement of verbal-information learning outcomes, and (b) to what extent distance learners are satisfied with these learner-content interaction activities. A quasi-experimental research design was adopted. The 120 participants were Anadolu University Open Education Faculty students living in Eskişehir. The students were divided randomly into 6 groups; 5 of these groups received different learner-content interaction activities as part of the experiment, while the other served as the control group. The data were collected mainly through two instruments, a pre-test and a post-test; in addition, learners' perceived learning was assessed with an item at the end of the program. The data collected from the pre-test and post-test were analyzed by ANOVA, and in light of the findings of this approximately 24-month study, suggestions for the further design of e-learning materials in the context of learner-content interaction activities will be provided at the conference. The current study is planned as an antecedent for subsequent studies that will examine the effects of the activities on other learning domains.
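The group comparison described above uses one-way ANOVA, whose F statistic is the between-group mean square divided by the within-group mean square. A minimal sketch with invented post-test scores for three small groups (not the study's six-group data):

```python
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square
    over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
              for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Invented post-test scores for three groups:
f = anova_f([[70, 72, 68], [80, 78, 82], [75, 74, 76]])  # -> 25.0
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that at least one interaction-activity group differs in mean achievement.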

Keywords: interaction, distance education, interactivity, online courses

Procedia PDF Downloads 176
435 The Language of Science in Higher Education: Related Topics and Discussions

Authors: Gurjeet Singh, Harinder Singh

Abstract:

In this paper, we present 'The Language of Science in Higher Education: Related Topics and Discussions'. Linguists have researched and written in depth on the role of language in science. On this basis, it is clear that language is not just a medium or vehicle for communicating knowledge and ideas, nor merely a system of signs for encoding knowledge and ideas. In the process of reading and writing, everyone thinks deeply and struggles to understand concepts and make sense of them, and linguistics plays an important role in grasping concepts. In the context of such linguistic diversity, there is no straightforward and simple answer to the question of which language should be the language of advanced science and technology. Many important topics related to this issue are as follows: involvement in practical or deep theoretical issues; languages for the study of science and other subjects; whether the language issues of science should be considered separately from the development of science, capitalism, colonial history, and the worldview of the common man; the democratization of science and technology education in India, which is possible only by providing the maximum amount of reading and resource material in regional languages; increasing the chances of understanding the subject through scientific research; and multilingualism instead of monolingualism. As far as deepening the understanding of the subject is concerned, we can shed light on it on the basis of two or three experiences. An attempt was made to publish the famous sociological journal Economic and Political Weekly in Hindi almost three decades ago. There were many obstacles in this work: original articles written in Hindi were not found, so the papers and articles of the English journal were translated into Hindi, and a journal called Sancha was brought out. Equally important are the democratization of knowledge and the deepening of the understanding of the subject.
However, the question arises that if higher education in science is offered in Hindi or other languages, it may become a problem to get a job. In fact, since independence, English has been dominant in almost every field except literature. There are historical reasons for this which cannot be reversed. As mentioned above, due to colonial rule, even before independence English was established as the language of communication, the language of power and status, the language of higher education, the language of administration, and the language of scholarly discourse. After independence, attempts to make Hindi or Hindustani the national language of India were unsuccessful. Given this history and the current reality, higher education should be multilingual, or at least bilingual. The scope of translation should also be increased for those who choose material for translation. Writing on science in regional languages and making the knowledge of various international languages available in Indian languages are just as important as ensuring that everyone has the opportunity to learn English.

Keywords: language, linguistics, literature, culture, ethnography, punjabi, gurmukhi, higher education

Procedia PDF Downloads 74
434 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis

Authors: Asif Arshid, Ying Huang, Denver Tolliver

Abstract:

The Finite Element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials, including steel, wood, rock and soil, each with its own varying level of heterogeneity and imperfection. The application of probabilistic methods to trackbed structural analysis that incorporate material and geometric variability remains deeply under-explored. The authors developed and validated a 3-dimensional FE-based numerical trackbed model, and in this study they investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their coefficients of variation (COV) while keeping their means constant; the variations are modeled with a Gaussian (normal) distribution. Two failure mechanisms in the subgrade are examined: progressive shear failure and excessive plastic deformation. Preliminary results of the risk-based probabilistic analysis for progressive shear failure revealed that variation in ballast depth is the most influential factor for the vertical stress at the top of the subgrade surface, whereas for excessive plastic deformation in the subgrade layer, variations in its own depth and Young's modulus proved most important, while the ballast properties remained almost indifferent. For both failure modes, it is also observed that the reliability index for subgrade failure increases with increasing COV of ballast depth and subgrade Young's modulus.
The findings of this work are of particular significance for studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of railroad trackbed and for evaluating the associated risk. They also provide an additional tool to supplement deterministic analysis procedures and decision making for railroad maintenance.
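The mechanics of a reliability index under varying COV can be illustrated with a minimal sketch. This is not the authors' FE model: the linear limit state g = R - S, the means, and the COV values below are hypothetical, chosen only to show how changing a COV while holding the mean constant moves β and the failure probability.

```python
import math
import random
from statistics import NormalDist

def beta_index(mu_r, cov_r, mu_s, cov_s):
    """First-order reliability index for the linear limit state g = R - S,
    with independent, normally distributed R (resistance) and S (load effect)."""
    sigma_r = cov_r * mu_r
    sigma_s = cov_s * mu_s
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability_mc(mu_r, cov_r, mu_s, cov_s, n=200_000, seed=1):
    """Monte Carlo estimate of P(g < 0) for the same limit state."""
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(mu_r, cov_r * mu_r) < rng.gauss(mu_s, cov_s * mu_s)
        for _ in range(n)
    )
    return failures / n

# Hypothetical subgrade numbers: resistance mean 100 kPa (COV 10%),
# load effect mean 60 kPa (COV 20%).
beta = beta_index(100.0, 0.10, 60.0, 0.20)
pf_exact = 1.0 - NormalDist().cdf(beta)  # closed-form failure probability
pf_mc = failure_probability_mc(100.0, 0.10, 60.0, 0.20)
```

Sweeping a single COV through this calculation, with everything else fixed, is the kind of sensitivity study the abstract describes, although the real analysis propagates the variability through a 3D FE model rather than a closed-form limit state.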

Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade

Procedia PDF Downloads 126
433 Synthetic Method of Contextual Knowledge Extraction

Authors: Olga Kononova, Sergey Lyapin

Abstract:

The global information society requires transparency and reliability of data, as well as the ability to manage information resources independently: in particular, to search, analyze, and evaluate information and thereby obtain new expertise. Moreover, satisfying society's information needs increases the efficiency of enterprise management and public administration. The study of structurally organized thematic and semantic contexts of different types, automatically extracted from unstructured data, is one of the important tasks for the application of information technologies in education, science, culture, governance, and business. The objectives of this study are the typologization of contextual knowledge and the selection or creation of effective tools for extracting and analyzing it. Explication of the various kinds and forms of contextual knowledge involves the development and use of full-text search information systems. For implementation, the authors use the services of the e-library 'Humanitariana', such as contextual search, different types of queries (paragraph-oriented and frequency-ranked), and automatic extraction of knowledge from scientific texts. The multifunctional e-library 'Humanitariana' is realized in an Internet architecture in a WWS configuration (Web browser / Web server / SQL server). An advantage of 'Humanitariana' is the possibility of combining the resources of several organizations: scholars and research groups may work in local-network mode or in distributed IT environments, with the ability to access resources on any participating organization's servers. The paper discusses some specific cases of contextual knowledge explication using the e-library services and focuses on the possibilities of new types of contextual knowledge. The experimental research base consists of scientific texts about 'e-government' and 'computer games'.
An analysis of trends in the subject-themed texts allowed the authors to propose a content-analysis methodology that combines full-text search with automatic construction of a 'terminogramma' and expert analysis of the selected contexts. A 'terminogramma' is a table containing a frequency-ranked list of words (nouns), together with columns giving each word's absolute frequency (count) and relative frequency of occurrence (in %). The analysis of the 'e-government' materials showed that the state takes a dominant position in the processes of electronic interaction between the authorities and society in modern Russia. The media attributed the main role in these processes to the government, which provides public services through specialized portals. Factor analysis revealed two factors statistically describing the terms used: human interaction (the user) and the state (government, process organizer); interaction management (public officer, process performer) and technology (infrastructure). Isolation of these factors will lead to changes in the model of electronic interaction between government and society. The study also identified the dominant social problems and the prevalence of different categories of subjects of computer gaming in scientific papers from 2005 to 2015. Several types of contextual knowledge are thus identified: micro context; macro context; dynamic context; thematic collection of queries (interactive contextual knowledge expanding the composition of e-library information resources); and multimodal context (functional integration of iconographic and full-text resources through a hybrid quasi-semantic search algorithm). Further studies can be pursued both by expanding the resource base on which they are held and by developing the appropriate tools.
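The 'terminogramma' construction can be sketched in a few lines. This is a minimal illustration, not the 'Humanitariana' implementation: a real version would restrict the list to nouns via morphological analysis, which this token-based sketch omits.

```python
import re
from collections import Counter

def terminogramma(text, top=10):
    """Frequency-ranked word list with absolute and relative frequency (%)."""
    words = re.findall(r"\w+", text.lower())
    total = len(words)
    return [
        (word, count, round(100.0 * count / total, 2))
        for word, count in Counter(words).most_common(top)
    ]

# Each row: (word, absolute frequency, relative frequency in %).
rows = terminogramma("government portal service government service government")
```

On the toy input above, 'government' heads the ranking with 3 occurrences (50% of all tokens); on real corpora the same table, restricted to nouns, is what the expert then inspects context by context.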

Keywords: contextual knowledge, contextual search, e-library services, frequency-ranked query, paragraph-oriented query, technologies of the contextual knowledge extraction

Procedia PDF Downloads 337
432 Evaluating the Effect of Climate Change and Land Use/Cover Change on Catchment Hydrology of Gumara Watershed, Upper Blue Nile Basin, Ethiopia

Authors: Gashaw Gismu Chakilu

Abstract:

Climate and land cover change are very important issues in the global context, as are their responses to environmental and socio-economic drivers. The dynamics of these two factors are currently affecting the environment, including watershed hydrology, in an unbalanced way. In this paper, the individual and combined impacts of climate change and land use/land cover change on hydrological processes were evaluated by applying the Soil and Water Assessment Tool (SWAT) model in the Gumara watershed, Upper Blue Nile basin, Ethiopia. Regional climate data (temperature and rainfall) for the past 40 years in the study area were prepared, and changes were detected using the Mann-Kendall trend test. The land use/land cover data were obtained from Landsat images and processed with ERDAS IMAGINE 2010 software. Three land use/land cover datasets (1973, 1986, and 2013) were prepared and used for the baseline, model calibration, and change study, respectively. The effects of these changes on the high flow and low flow of the catchment were also evaluated separately: the high flow was analyzed with an Annual Maximum (AM) model, and the low flow with a seven-day sustained low-flow model. Both temperature and rainfall showed increasing trends; the extent of the changes was then evaluated on a monthly basis using two decadal periods, with 1973-1982 taken as the baseline and 2004-2013 used for the change study. The efficiency of the model was determined by the Nash-Sutcliffe coefficient (NS) and Relative Volume error (RVe), whose values were 0.65 and 0.032 for calibration and 0.62 and 0.0051 for validation, respectively. The impact of climate change on the stream flow of the catchment was higher than that of land use/land cover change: the flow increased by 16.86% due to climate change and by 7.25% due to LULC change, while the combined effect accounted for a 22.13% flow increment.
The overall results of the study indicated that climate change is more responsible for high flow than low flow; conversely, land use/land cover change showed a more significant effect on low flow than on high flow of the catchment. From these results, we conclude that the hydrology of the catchment has been altered by the changes in climate and land cover of the study area.
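The two efficiency measures used to judge the SWAT calibration can be computed directly. The sketch below uses invented monthly flow values, not the Gumara data, and note that RVe sign conventions vary between studies.

```python
def nash_sutcliffe(observed, simulated):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit."""
    mean_obs = sum(observed) / len(observed)
    residual = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - residual / variance

def relative_volume_error(observed, simulated):
    """RVe = (total simulated - total observed) / total observed; 0.0 means volumes match."""
    return (sum(simulated) - sum(observed)) / sum(observed)

observed = [12.0, 30.0, 55.0, 41.0, 18.0]   # illustrative monthly flows (m^3/s)
simulated = [14.0, 28.0, 50.0, 43.0, 20.0]
ns = nash_sutcliffe(observed, simulated)
rve = relative_volume_error(observed, simulated)
```

An NS of 1 and an RVe of 0 would indicate a perfect simulation; the reported calibration values (NS = 0.65, RVe = 0.032) sit within commonly accepted ranges for monthly stream flow.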

Keywords: climate, LULC, SWAT, Ethiopia

Procedia PDF Downloads 365
431 Yield Level, Variability and Yield Gap of Maize (Zea mays L.) Under Variable Climate Conditions of the Semi-arid Central Rift Valley of Ethiopia

Authors: Fitih Ademe, Kibebew Kibret, Sheleme Beyene, Mezgebu Getnet, Gashaw Meteke

Abstract:

Soil moisture and nutrient availability are the two key edaphic factors that affect crop yields and are directly or indirectly affected by climate variability and change. The study examined climate-induced yield level, yield variability, and yield gap of maize over the 1981-2010 main growing seasons in the Central Rift Valley (CRV) of Ethiopia. The Pearson correlation test was employed to assess the relationship between climate variables and yield, and the coefficient of variation (CV) was used to analyze annual yield variability. The Decision Support System for Agrotechnology Transfer cropping system model (DSSAT-CSM) was used to simulate the growth and yield of maize for the study period. The results indicated that maize grain yield was strongly (P<0.01) and positively correlated with seasonal rainfall (r=0.67 at Melkassa and r=0.69 at Ziway) in the CRV, while daytime temperature affected grain yield negatively (r=-0.44, P<0.05) at Ziway during the simulation period. Variation in total seasonal rainfall at Melkassa and Ziway explained 44.9% and 48.5% of the variation in yield, respectively, under optimum nutrition. Following the variation in rainfall, higher yield variability (CV=23.5% at Melkassa and CV=25.3% at Ziway) was observed for the optimum-nutrient simulation than for the corresponding nutrient-limited simulation (CV=16% at Melkassa and 24.1% at Ziway). The observed farmers' yield was 72%, 52%, and 43% of the researcher-managed, water-limited, and potential yield of the crop, respectively, indicating a wide maize yield gap in the region. The study revealed that rainfed crop production in the CRV is prone to yield variability because of its high dependence on seasonal rainfall and nutrient level. Moreover, the high coefficient of variation in the yield gap over the 30-year period underscores the need for a dependable water supply at both locations.
Given the wide yield gap, especially during lower-rainfall years across the simulation period, the results signify the need for a more dependable supply of irrigation water and a potential shift to irrigated agriculture; adopting options that improve water availability and nutrient use efficiency would therefore be crucial for crop production in the area.
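The two summary statistics used in this analysis are straightforward to reproduce. The sketch below is illustrative only: the yield series is invented, not taken from the DSSAT runs, and the 3.6/5.0 t/ha pair merely mimics the kind of actual-versus-benchmark comparison behind the reported 72%.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def yield_achievement(actual, benchmark):
    """Actual yield as a percentage of a benchmark (e.g. potential) yield."""
    return 100.0 * actual / benchmark

simulated_yields = [3.1, 4.0, 2.2, 4.8, 3.5]   # illustrative t/ha across seasons
cv = coefficient_of_variation(simulated_yields)
share = yield_achievement(3.6, 5.0)  # hypothetical farmers' yield vs potential
```

The yield gap is then simply 100 minus this achieved share, so a 72% achievement corresponds to a 28% gap relative to the chosen benchmark.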

Keywords: climate variability, crop model, water availability, yield gap, yield variability

Procedia PDF Downloads 51
430 The Effects of Drought and Nitrogen on Soybean (Glycine max (L.) Merrill) Physiology and Yield

Authors: Oqba Basal, András Szabó

Abstract:

Legume crops are able to fix atmospheric nitrogen through a symbiotic relationship with specific bacteria, which allows mineral nitrogen fertilizer use to be reduced or even eliminated, resulting in more profit for farmers and less pollution of the environment. Soybean (Glycine max (L.) Merrill) is one of the most important legumes, with a high content of both protein and oil. However, it is recommended to combine the two nitrogen sources under stress conditions in order to overcome the negative effects of stress. Drought stress is one of the most important abiotic stresses that increasingly limit soybean yields, and a precise rate of mineral nitrogen under drought conditions has not been established, as it depends on many factors, soybean yield potential and soil nitrogen content among them. An experiment was conducted during the 2017 growing season in Debrecen, Hungary, to investigate the effects of nitrogen source on the physiology and yield of the soybean cultivar 'Boglár'. Three N-fertilizer rates, namely no N-fertilizer (0 N), 35 kg ha-1 (35 N), and 105 kg ha-1 (105 N), were applied under three irrigation regimes: severe drought stress (SD), moderate drought stress (MD), and a control with no drought stress (ND). Half of the seeds in each treatment were pre-inoculated with a Bradyrhizobium japonicum inoculant. The overall results showed significant differences associated with fertilization and irrigation, but not with inoculation. An increasing N rate was mostly accompanied by increased chlorophyll content and leaf area index, whereas it positively affected plant height only in the absence of drought. Plant height was lowest under severe drought, regardless of inoculation and N-fertilizer application and rate.
Inoculation increased the yield when there was no drought, and a low rate of N-fertilizer increased it further; however, the high rate of N-fertilizer decreased the yield to a level below even that of the inoculated control. The yield of non-inoculated plants, on the other hand, increased as the N-fertilizer rate increased. Under drought conditions, adding N-fertilizer increased the yield of the non-inoculated plants compared to their inoculated counterparts; moreover, the high rate of N-fertilizer resulted in the best yield. Regardless of inoculation, the mean yield across the three fertilization rates improved as the water amount increased. It was concluded that applying N-fertilizer to provide the nitrogen needed by soybean plants is very important when the N2-fixation process is absent. Moreover, adding a relatively high rate of N-fertilizer is very important under severe drought stress to alleviate the negative effects of drought. Further research to recommend the best N-fertilizer rate for inoculated soybean under drought stress conditions should be conducted.

Keywords: drought stress, inoculation, N-fertilizer, soybean physiology, yield

Procedia PDF Downloads 133
429 Human Interaction Skills and Employability in Courses with Internships: Report of a Decade of Success in Information Technology

Authors: Filomena Lopes, Miguel Magalhaes, Carla Santos Pereira, Natercia Durao, Cristina Costa-Lobo

Abstract:

The option to implement curricular internships with undergraduate students is a pedagogical choice with good results as perceived by academic staff, employers, and graduates in general, and in IT (Information Technology) in particular. This type of exercise has never been so relevant, as one tries to give meaning to the future in a landscape of rapid and deep change; one example is the potentially disruptive impact on jobs of advances in robotics, artificial intelligence, and 3-D printing, which is the focus of fierce debate. It is in this context that more and more students and employers engage in the pursuit of career-promoting answers and business development when making their training and hiring decisions. Three decades of experience and research in the computer science and information systems technologies degrees at Portucalense University, a Portuguese private university, have provided strong evidence of the advantages of such internships. The development of Human Interaction Skills, as well as the attractiveness of these experiences for students, are topics assumed as core in the conception and management of the activities implemented in these study cycles. The objective of this paper is to gather evidence of the Human Interaction Skills developed and valued within curricular internship experiences and of their contribution to the employability of IT students. Data collection was based on questionnaires administered to internship supervisors and to students who completed internships in these undergraduate courses over the last decade.
The trainee supervisor, responsible for monitoring the performance of IT students during traineeship activities, evaluates the following Human Interaction Skills: motivation and interest in the activities developed, interpersonal relationships, cooperation in company activities, assiduity, ease of knowledge apprehension, compliance with norms, insertion in the work environment, productivity, initiative, ability to take responsibility, creativity in proposing solutions, and self-confidence. The results show that these undergraduate courses promote the development of Human Interaction Skills and that these students, once they finish their degree, are able to begin remunerated work, mainly by invitation of the institutions in which they performed their curricular internships. The findings of the present study help widen the analysis of the effectiveness of internships, in terms of both future research and actions regarding the transition from Higher Education pathways to the labour market.

Keywords: human interaction skills, employability, internships, information technology, higher education

Procedia PDF Downloads 272
428 Development and Characterization of Novel Topical Formulation Containing Niacinamide

Authors: Sevdenur Onger, Ali Asram Sagiroglu

Abstract:

Hyperpigmentation is a cosmetically unappealing skin problem caused by an overabundance of melanin in the skin. Its pathophysiology involves melanocytes being exposed to paracrine melanogenic stimuli, which can upregulate melanogenesis-related enzymes (such as tyrosinase) and trigger melanosome formation. Tyrosinase is biochemically linked to the development of melanosomes; therefore, decreasing tyrosinase activity to reduce melanosomes has become the main target of hyperpigmentation treatment. Niacinamide (NA) is a natural compound found in a variety of plants that is used as a skin-whitening ingredient in cosmetic formulations. NA decreases melanogenesis in the skin by inhibiting melanosome transfer from melanocytes to the overlying keratinocytes. Furthermore, NA protects the skin from reactive oxygen species and supports the skin barrier, reducing moisture loss by increasing ceramide and fatty acid synthesis. However, it is very difficult for hydrophilic compounds such as NA to penetrate deep into the skin, and because of the nicotinic acid it contains, NA can be an irritant. As a result, we have concentrated on strategies to increase NA skin permeability while avoiding its irritating effects. Since nanotechnology can affect drug penetration behavior by controlling release and increasing the period of permanence on the skin, it can be a useful technique in the development of whitening formulations. Liposomes have become increasingly popular in the cosmetics industry in recent years due to benefits such as their lack of toxicity, high penetration ability into living skin layers, ability to increase skin moisture by forming a thin layer on the skin surface, and suitability for large-scale production. Therefore, liposomes containing NA were developed for this study.
Different formulations were prepared by varying the amounts of phospholipid and cholesterol and were examined in terms of particle size, polydispersity index (PDI), and pH. The pH values of the produced formulations were found to be compatible with the pH of the skin, particle sizes were smaller than 250 nm, and the particles were of homogeneous size within each formulation (PDI < 0.30). Despite the important advantages of liposomal systems, their viscosity and stability are low for topical use; for these reasons, liposomal cream formulations were prepared in this study for easy topical application. As a result, liposomal cream formulations containing NA have been successfully prepared and characterized. Following the in-vitro release and ex-vivo diffusion studies to be conducted in the continuation of the study, it is planned to test the formulation that gives the best result on volunteers, after obtaining the approval of the ethics committee.

Keywords: delivery systems, hyperpigmentation, liposome, niacinamide

Procedia PDF Downloads 98