Search results for: steepest slope segment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1076

206 Urban Imperviousness and Its Impact on Storm Water Drainage Systems

Authors: Ratul Das, Udit Narayan Das

Abstract:

Surface imperviousness in urban areas brings significant changes to storm water drainage systems, and recent studies reveal that the impervious surfaces that pass storm water runoff directly to drainage systems through storm water collection systems, called directly connected impervious area (DCIA), are a more effective parameter than total impervious area (TIA) for the computation of surface runoff. In the present study, the extents of DCIA and TIA were computed for a small sub-urban area of Agartala, the capital of the state of Tripura. Total impervious surfaces covering the study area were identified on the existing storm water drainage map from the land use map of the study area in association with field assessments. DCIA assessed through field survey was also compared to DCIA computed by empirical relationships provided by other investigators. Two methods were adopted for the assessment of DCIA in the study area. In the first, the study area was partitioned into four drainage sub-zones based on average basin slope and the layout of the existing storm water drainage systems. In the second method, the entire study area was divided into small grids, each grid or parcel comprising a 20 m × 20 m area. Total impervious surfaces were delineated from the land use map in association with on-site assessments for efficient determination of DCIA within each sub-area and grid. There was a wide variation in the percent connectivity of TIA across the sub-drainage zones and grids. In the present study, total impervious area comprises 36.23% of the study area, of which 21.85% of the total study area is connected to storm water collection systems. Total pervious area (TPA) and other surfaces comprise 53.20% and 10.56% of the total area, respectively. TIA recorded by field assessment (36.23%) was considerably higher than that calculated from the available land use map (22%). From the analysis of the recorded data, the average percentage of connectivity (% DCIA with respect to TIA) is 60.31%. The analysis also reveals that the observed DCIA lies below the line of optimal impervious surface connectivity for a sub-urban area provided by other investigators, which indicates a probable reason for the waterlogging conditions in many parts of the study area during the monsoon period.
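As a rough sketch of the grid-based bookkeeping described in the abstract, the following Python snippet aggregates per-parcel impervious and directly connected fractions into TIA, DCIA, and percent connectivity; the parcel values and variable names are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch: aggregating grid-parcel measurements into TIA, DCIA and
# percent connectivity, following the grid method described in the abstract.
# The parcel records are made-up values; only the 20 m x 20 m cell size comes
# from the abstract.
CELL_AREA_M2 = 20 * 20  # each grid parcel is 20 m x 20 m

# (impervious_fraction, directly_connected_fraction) per parcel
parcels = [
    (0.80, 0.55),
    (0.45, 0.20),
    (0.10, 0.00),
    (0.60, 0.50),
]

total_area = CELL_AREA_M2 * len(parcels)
tia = sum(f_imp * CELL_AREA_M2 for f_imp, _ in parcels)      # total impervious area
dcia = sum(f_dc * CELL_AREA_M2 for _, f_dc in parcels)       # directly connected impervious area

print(f"TIA  = {100 * tia / total_area:.2f} % of study area")
print(f"DCIA = {100 * dcia / total_area:.2f} % of study area")
print(f"Connectivity (DCIA / TIA) = {100 * dcia / tia:.2f} %")
```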

Keywords: Drainage, imperviousness, runoff, storm water.

Procedia PDF Downloads 347
205 Artificial Neural Networks and Hidden Markov Model in Landslide Prediction

Authors: C. S. Subhashini, H. L. Premaratne

Abstract:

Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in significant loss of life, material damage, and distress. It is necessary to explore solutions for preparedness and mitigation to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of Artificial Neural Networks and Hidden Markov Models in landslide prediction and the possibility of applying these techniques to predict landslides in a prominent geographical area of Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the factors influencing landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps along with historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), were extracted from the landslide database. These factors are used to estimate the likelihood of landslide occurrence with an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. The models, with the landslide-related factors as inputs, are trained to predict three classes: 'landslide occurs', 'landslide does not occur', and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data. Finally, the two models were compared with regard to prediction accuracy, false acceptance rate, and false rejection rate. This research indicates that the Artificial Neural Network could be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model.
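A minimal sketch of the neural-network side of such a comparison, assuming a scikit-learn multilayer perceptron and synthetic factor data in place of the authors' landslide database, is given below.

```python
# Minimal sketch (not the authors' implementation): a small feed-forward
# neural network classifying synthetic "landslide factor" vectors into the
# three classes named in the abstract. Feature values are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_factors = 300, 12            # 12 stand-in factors (rainfall, slope, aspect, ...)
X = rng.random((n_samples, n_factors))
# Synthetic labels: 0 = does not occur, 1 = likely to occur, 2 = occurs
y = np.digitize(X[:, :3].sum(axis=1), bins=[1.2, 1.8])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```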

Keywords: landslides, influencing factors, neural network model, hidden markov model

Procedia PDF Downloads 383
204 New Coating Materials Based on Mixtures of Shellac and Pectin for Pharmaceutical Products

Authors: M. Kumpugdee-Vollrath, M. Tabatabaeifar, M. Helmis

Abstract:

Shellac is a natural polyester resin secreted by insects. Pectins are natural, non-toxic, water-soluble polysaccharides extracted from the peels of citrus fruits or the leftovers of apples. Both polymers are approved for use in the pharmaceutical industry and as food additives. SSB Aquagold® is an aqueous solution of shellac and can be used in a coating process as an enteric or controlled drug release polymer. In this study, tablets containing 10 mg methylene blue as a model drug were prepared with a rotary press. These tablets were coated with mixtures of shellac and one of four different pectin types (CU 201, CU 501, CU 701, and CU 020), mostly in a 2:1 ratio, or with pure shellac, in a small-scale fluidized bed apparatus. A stable, simple, and reproducible three-stage coating process was successfully developed. The drug contents of the coated tablets were determined using a UV-VIS spectrophotometer. The surface and the film thickness were characterized with scanning electron microscopy (SEM) and light microscopy. Release studies were performed in a dissolution apparatus with a basket. Most of the formulations were enteric coated. The dissolution profiles showed a delayed or sustained release with a lag time of at least 4 h. Dissolution profiles of tablets coated with pure shellac had a very long lag time, ranging from 13 to 17.5 h, and the slopes were quite high. The duration of the lag time and the slope of the dissolution profiles could be adjusted by adding the proper type of pectin to the shellac formulation and by varying the coating amount. In order to apply a coating formulation as a colon delivery system, the prepared film should be resistant against gastric fluid for at least 2 h and against intestinal fluid for 4-6 h. The required delay time was attained with most of the shellac-pectin polymer mixtures. The release profiles were fitted with a modified form of the Korsmeyer-Peppas equation and with the Hixson-Crowell model. A correlation coefficient (R²) > 0.99 was obtained with the Korsmeyer-Peppas equation.
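A hedged sketch of the kinetic fitting mentioned above: the standard Korsmeyer-Peppas power law (the authors use a modified form, which may differ, e.g. by including a lag time) fitted to an illustrative release profile with SciPy.

```python
# Sketch: fitting the standard Korsmeyer-Peppas power law  M_t/M_inf = k * t^n
# to a synthetic dissolution profile. Data points below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

t_hours = np.array([0.5, 1, 2, 4, 6, 8, 12])                      # sampling times
released = np.array([0.05, 0.09, 0.16, 0.28, 0.39, 0.48, 0.63])   # fraction released

params, _ = curve_fit(korsmeyer_peppas, t_hours, released, p0=[0.1, 0.5])
k, n = params
pred = korsmeyer_peppas(t_hours, k, n)
ss_res = np.sum((released - pred) ** 2)
ss_tot = np.sum((released - released.mean()) ** 2)
print(f"k = {k:.3f}, n = {n:.3f}, R^2 = {1 - ss_res / ss_tot:.4f}")
```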

Keywords: shellac, pectin, coating, fluidized bed, release, colon delivery system, kinetic, SEM, methylene blue

Procedia PDF Downloads 406
203 The Effectiveness of Energy-related Tax in Curbing Transport-related Carbon Emissions: The Role of Green Finance and Technology in OECD Economies

Authors: Hassan Taimoor, Piotr Krajewski, Piotr Gabrielzcak

Abstract:

As the largest source of energy-related emissions, the transportation sector accounts for more than half of global oil demand and a large share of total energy consumption, making it a crucial factor in tackling climate change and environmental degradation. The present study empirically tests the effectiveness of energy-related taxes (TXEN) in curbing transport-related carbon emissions (CO2TRANSP) in Organization for Economic Cooperation and Development (OECD) economies over the period 1990-2020. Moreover, green finance (GF), technology (TECH), and gross domestic product (GDP) have been added as explanatory factors that might affect CO2TRANSP emissions. The study employs the Method of Moments Quantile Regression (MMQR), an advanced econometric technique, to observe variations along each quantile. Based on the results of the preliminary tests, we confirm the presence of cross-sectional dependence and slope heterogeneity, while the panel unit root tests report a mixed order of integration of the variables. The findings reveal that a rise in income level increases CO2TRANSP, confirming the first stage of the Environmental Kuznets hypothesis. Surprisingly, the present TXEN policies of OECD member states are not mature enough to tackle CO2TRANSP emissions. However, the findings confirm that GF and TECH drive the reduction in CO2TRANSP. The outcomes of Bootstrap Quantile Regression (BSQR) further validate and support the MMQR findings. Based on these results, the current TXEN policies appear too moderate, and an incremental, progressive rise in TXEN may help the transition toward a cleaner and more sustainable transportation sector in the study region.
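For illustration only, a plain quantile regression can stand in for the MMQR estimator named above (MMQR itself is not available in standard Python libraries); the variables and data below are synthetic placeholders.

```python
# Sketch: quantile regression of transport CO2 on income, energy tax, green
# finance and technology proxies, evaluated at several quantiles. This is a
# simplified stand-in for MMQR; all series are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
gdp, txen, gf, tech = (rng.random(n) for _ in range(4))
co2transp = 0.8 * gdp - 0.2 * txen - 0.3 * gf - 0.3 * tech + 0.1 * rng.standard_normal(n)

X = sm.add_constant(np.column_stack([gdp, txen, gf, tech]))
for q in (0.25, 0.50, 0.75):
    res = sm.QuantReg(co2transp, X).fit(q=q)
    print(f"q={q:.2f}  coefficients:", np.round(res.params, 3))
```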

Keywords: transport-related CO2 emissions, energy-related tax, green finance, technological development, OECD member states

Procedia PDF Downloads 75
202 Defining the Customers' Color Preference for the Apparel Industry in Terms of Chromaticity Coordinates

Authors: Banu Hatice Gürcüm, Pınar Arslan, Mahmut Yalçın

Abstract:

Fashion designers create many dresses, suits, shoes, and other clothing items and accessories, which are purchased every year by consumers. Fashion trends, design sketches, and accessories affect apparel goods, but colors provide the finishing touches to an outfit. In all fields of apparel (men's, women's, and children's wear, including casual wear, suits, sportswear, formal wear, outerwear, maternity, and intimate apparel), color sells. Thus, specialization in color is a basic concern in apparel each season. The perception of color is the key to sales for every sector of the textile business. The mechanism of color perception, cognition in the brain, and color emotion are subjects that scientists have been investigating for many years. The parameters of color may not correspond to visual scales, since human emotions induced by color are completely subjective. Nevertheless, with very few exceptions, manufacturers identify their top-selling colors for each season through seasonal sales reports. This paper examines sensory and instrumental methods for quantifying fabric color and investigates the relationship between fabric color and sales numbers. The five top-selling colors for each season from 10 leading apparel companies in the same segment are taken, with the compilation based on the companies' sales over 5 to 10 years. The research's main concern is the correlation between the magnitude of seasonal color sales figures and the CIE chromaticity coordinates. The colors are chosen from the globally accepted Pantone Textile Color System, and the three-dimensional measurement system CIE L*a*b* (CIELAB) is used, with L* representing the degree of lightness of a color, a* the degree of color ranging from magenta to green, and b* the degree of color ranging from blue to yellow. The objective of this paper is to demonstrate the feasibility of relating color preference to a laboratory instrument yielding measurements in the CIELAB system. Our approach is to obtain a total of one hundred reference fabrics to be measured on a laboratory spectrophotometer calibrated to the CIELAB color system. Relationships between the CIE tristimulus values (X, Y, Z) and CIELAB (L*, a*, b*) are examined and reported herein.
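The standard CIE XYZ to CIELAB conversion referred to above can be sketched as follows, assuming the D65/2° reference white; the input tristimulus values are illustrative, and the study's instrument calibration may differ.

```python
# Sketch of the standard CIE XYZ -> CIELAB conversion (D65/2-degree white).
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    def f(t):
        delta = 6 / 29
        return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

    Xn, Yn, Zn = white
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return L, a, b

# Example: a hypothetical fabric measurement
L, a, b = xyz_to_lab(41.24, 21.26, 1.93)
print(f"L* = {L:.2f}, a* = {a:.2f}, b* = {b:.2f}")
```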

Keywords: CIELAB, CIE tristimulus, color preference, fashion

Procedia PDF Downloads 334
201 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under the Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with the sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
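A simplified sketch of the frequency ratio and weighted linear combination steps described above, using made-up class counts, toy rasters, and assumed weights rather than the Greene County data:

```python
# Sketch (illustrative only): frequency-ratio class weights for one factor and
# a weighted linear combination of two reclassified factor rasters.
import numpy as np

def frequency_ratio(class_cells, class_events):
    """FR per class = (% of events in class) / (% of area in class)."""
    class_cells = np.asarray(class_cells, dtype=float)
    class_events = np.asarray(class_events, dtype=float)
    area_pct = class_cells / class_cells.sum()
    event_pct = class_events / class_events.sum()
    return event_pct / area_pct

# Example: a slope factor with three classes (gentle, moderate, steep)
fr_slope = frequency_ratio(class_cells=[5000, 3000, 2000],
                           class_events=[10, 25, 40])
print("FR per slope class:", np.round(fr_slope, 2))

# Weighted linear combination of two factor rasters (3 x 3 toy grids)
slope_raster = np.array([[0, 1, 2], [1, 2, 2], [0, 0, 1]])        # class indices
lithology_fr = np.array([[0.5, 0.5, 1.8], [1.8, 0.5, 1.8], [0.5, 1.8, 1.8]])
weights = {"slope": 0.6, "lithology": 0.4}                        # e.g. AHP-derived

ssi = weights["slope"] * fr_slope[slope_raster] + weights["lithology"] * lithology_fr
print("Sinkhole susceptibility index:\n", np.round(ssi, 2))
```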

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 73
200 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Because landslides can seriously harm both the environment and society, frequency ratio (FR) and analytical hierarchy process (AHP) methods based on past landslide failure points are commonly developed to map landslide susceptibility. However, it is still difficult to select the most efficient method and to correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at 12.5 m spatial resolution. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consists of areas with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western part of Amhara Saint Town, the northern part, and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from this study to make informed decisions and establish policies. It also suggests that different places should take different safeguards to reduce or prevent serious damage from landslide events.
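A minimal sketch of how the five classifier families named above could be trained and scored with F1 and AUC, using synthetic data in place of the fourteen landslide conditioning factors:

```python
# Sketch (not the authors' code): evaluating RF, SVM, LR, ANN and NB with
# F1-score and ROC-AUC on a synthetic binary landslide/no-landslide dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    print(f"{name}: F1={f1_score(y_te, model.predict(X_te)):.2f}, "
          f"AUC={roc_auc_score(y_te, prob):.2f}")
```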

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 77
199 Children of Quarantine: A Post COVID-19 Mental Health Dilemma

Authors: Salman Abdul Majeed, Vidur Solanki, Ruqiya Shama Tareen

Abstract:

BACKGROUND: The COVID-19 pandemic has affected the way of living as we have known it for all strata of society. While the disease containment measures imposed by governmental agencies have been instrumental in controlling the spread of the virus, they have had profound collateral impacts on all populations. However, the disruption caused in the lives of one segment of the population has been far more damaging than most others: the emotional wellbeing of our child and adolescent (C&A) populations. This impact was even more pronounced in children who already suffered from neurodevelopmental or psychiatric disorders. In particular, school closures have not only led to profound social isolation but also to negative impacts on normal developmental opportunities and interruptions in mental health services obtained through school systems. It is too soon to understand the full impact of quarantine, isolation, the stress of social detachment, and fear of the pandemic, but we have already started to see the devastating impact on C&A. This review intends to shed light on the current understanding of the psychiatric wellbeing of C&A during the COVID-19 pandemic. METHOD: A literature search utilizing the key words COVID-19 and children, quarantine and children, social isolation, loneliness, pandemic stress and children, mental health of children, and disease containment measures was carried out. Over 200 articles were identified, out of which 81 articles were included in this review. RESULTS: The disruption caused by COVID-19 in the lives of C&A is much more damaging, and its impact is far reaching. Emergency department visits by C&A for possible suicide attempts jumped by 22.3% in 2020 and 39.1% during 2021. One study utilizing T1-weighted structural images computed the thickness of cortical and subcortical structures, including the amygdala, hippocampus, and nucleus accumbens. The peri-COVID group showed reduced cortical and subcortical thickness and more advanced brain aging compared to pre-pandemic studies. CONCLUSION: Mental health resources for C&A remain underfunded, neglected, and inaccessible to the population that needs them most. Children with ongoing mental health disorders were impacted the worst, along with those with predisposed biopsychosocial risk factors.

Keywords: COVID-19 and children, quarantine and children, social isolation, loneliness, pandemic stress and children, disease containment measures, mental health of children

Procedia PDF Downloads 74
198 Appraisal of Different Levels of Soybean Meal in Diets on Growth, Digestive Enzyme Activity, Antioxidation and Gut Histology of Tilapia (Oreochromis niloticus)

Authors: Zakir Hossain, Arzu Pervin, Halima Jahan, Rabeya Akter, Abdel Omri

Abstract:

Replacement of fish meal with soybean meal is an effective way to relieve the pressure on fish meal, as the supply of this feed ingredient is dwindling and is certainly not sustainable in the long term at present levels of use in commercial feeds. This study was designed to determine the effect of fish meal (FM) replacement with soybean meal (SBM) in the diet on growth, digestive enzyme activity, antioxidation, and gut histomorphology of tilapia (Oreochromis niloticus). Five diets were formulated: SBM0 contained 100% FM, while in the other diets FM was substituted with graded levels of SBM to replace 25% (SBM25), 50% (SBM50), 75% (SBM75), and 100% (SBM100) of the FM. Juvenile tilapia with an initial weight and length of 6.60±0.13 g and 5.42±0.17 cm were randomly divided into five treatment groups of 40 individuals each and fed to visual satiation for 90 days. Body weight gain and specific growth rate were significantly higher in fish fed the diets containing FM than in fish fed SBM100. Fish of similar weight (74.34±5.41 g) fed the diets SBM50, SBM75, and SBM100, which contained higher levels of SBM, showed significantly longer intestines compared to SBM0. Villus heights of the stomach and intestine were significantly greater in fish fed the diets SBM0, SBM25, and SBM50 compared to SBM100. Muscular thickness changed inversely with increasing villus height. Protease activity increased significantly in the stomach and the anterior and posterior intestine of fish fed SBM0 and SBM25 compared to SBM100. In the anterior and posterior segments of the intestine, significantly higher lipase activity was observed in fish fed the diets SBM0 and SBM25 compared to diet SBM100. In the stomach, amylase activity was also significantly greater for SBM0 than for SBM100. The antioxidant enzymes of the liver, including catalase and superoxide dismutase, were significantly (P < 0.05) higher in O. niloticus fed SBM100 compared to those fed SBM0. These results suggest that the replacement of FM with SBM up to 75% could be possible, considering the growth performance, gut health, and activities of digestive and antioxidant enzymes in O. niloticus.
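For reference, the growth indices typically reported in such feeding trials can be computed as in the following sketch; the weights and duration shown are illustrative, not the trial's recorded per-treatment data.

```python
# Sketch: standard growth indices for a feeding trial. Example values only.
import math

def weight_gain_pct(w_initial, w_final):
    return 100.0 * (w_final - w_initial) / w_initial

def specific_growth_rate(w_initial, w_final, days):
    """SGR (% per day) = 100 * (ln(Wf) - ln(Wi)) / days."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

w_i, w_f, days = 6.60, 74.34, 90   # illustrative initial/final weights over 90 days
print(f"Weight gain: {weight_gain_pct(w_i, w_f):.1f} %")
print(f"SGR: {specific_growth_rate(w_i, w_f, days):.2f} %/day")
```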

Keywords: soybean meal, fish meal, digestive enzymes, anti-oxidant enzymes

Procedia PDF Downloads 170
197 Numerical Simulation of Encased Composite Column Bases Subjected to Cyclic Loading

Authors: Eman Ismail, Adnan Masri

Abstract:

Energy dissipation in ductile moment frames occurs mainly through plastic hinge rotations in their members (beams and columns). Generally, plastic hinge locations are pre-determined and limited to the beam ends, where columns are designed to remain elastic in order to avoid premature instability (also known as story mechanisms), with the exception of column bases, where a base is 'fixed' in order to provide higher stiffness and stability and to form a plastic hinge. Plastic hinging at steel column bases in ductile moment frames using conventional base connection details is accompanied by several complications (thicker and heavily stiffened connections, larger embedment depths, a thicker foundation to accommodate anchor rod embedment, etc.). An encased composite base connection is proposed in which a segment of the column, from the base up to a certain point along its height, is encased in reinforced concrete, with headed shear studs welded to the column flanges to connect the column to the concrete encasement. When the connection is flexurally loaded, stresses are transferred to the reinforced concrete encasement through the headed shear studs and thereby transferred to the foundation by reinforced concrete mechanics, while axial column forces are transferred through the base-plate assembly. Horizontal base reactions are expected to be transferred by direct bearing of the outer and inner faces of the flanges; however, investigation of this mechanism is not within the scope of this research. The inelastic and cyclic behavior of the connection is investigated by subjecting it to reversed cyclic loading, and rotational ductility is observed for yielding mechanisms in which yielding occurs as flexural yielding in the beam-column, shear yielding in the headed studs, and flexural yielding of the reinforced concrete encasement. The findings of this research show that the connection is capable of achieving satisfactory levels of ductility in certain conditions, given proper detailing and proportioning of elements.

Keywords: seismic design, plastic mechanisms, steel structure, moment frame, composite construction

Procedia PDF Downloads 125
196 Seamounts and Submarine Landslides: Study Case of Island Arcs Area in North of Sulawesi

Authors: Muhammad Arif Rahman, Gamma Abdul Jabbar, Enggar Handra Pangestu, Alfi Syahrin Qadri, Iryan Anugrah Putra, Rizqi Ramadhandi

Abstract:

Indonesia lies above three major tectonic plates: the Indo-Australian plate, the Eurasian plate, and the Pacific plate. Interactions between these plates result in high tectonic and volcanic activity, which translates into a high risk of geological hazards in adjacent areas; one such area is north of Sulawesi's islands. This raises problems for infrastructure, both in mitigating risks to existing infrastructure and in planning various future infrastructure. One piece of infrastructure that is essential for telecommunications is the submarine fiber optic cable, which acts as a backbone of telecommunication but is exposed to geological hazards. Damaged fiber optic cables can cause serious problems, leading to signal loss, negative impacts on people's social and economic conditions, and degraded performance of various governmental services. Submarine cables face challenges from geological hazards such as seamount activity. Previous studies show that, as of 2023, five seamounts have been identified north of Sulawesi. Seamounts can damage cables and trigger processes that put submarine cables at risk, one example being submarine landslides. The main focuses of this study are to identify possible new seamounts and submarine landslide paths in the area north of the Sulawesi islands, in order to help minimize the risks posed by these hazards to both existing and planned submarine cables. Using bathymetry data, this study conducts slope analysis and uses distinctive morphological features to interpret possible seamounts. Valleys between seamounts are then mapped to determine where sediments might flow in the case of a landslide and, finally, how this could affect submarine cables in the area.

Keywords: bathymetry, geological hazard, mitigation, seamount, submarine cable, submarine landslide, volcanic activity

Procedia PDF Downloads 65
195 Maximizing Giant Prawn Resource Utilization in Banjar Regency, Indonesia: A CPUE and MSY Analysis

Authors: Ahmadi, Iriansyah, Raihana Yahman

Abstract:

The giant freshwater prawn (Macrobrachium rosenbergii de Man, 1879) is a valuable species for fisheries and aquaculture, especially in Southeast Asia, including Indonesia, due to its high market demand and potential for export. The growing demand for prawns is straining the sustainability of the Banjar Regency fishery. To ensure the long-term sustainability and economic viability of prawn fishing in this region, it is imperative to implement evidence-based management practices. This requires comprehensive data on the catch per unit effort (CPUE), the maximum sustainable yield (MSY), and the current rate of prawn resource exploitation. This study analyzed five years of prawn catch data (2019-2023) obtained from the South Kalimantan Marine and Fisheries Services. Fishing gears (e.g., hook and line and cast net) were first standardized with the Fishing Power Index, and effort and MSY were then calculated. The intercept (a) and slope (b) of the regression curve were used to estimate the catch at maximum sustainable yield (CMSY) and the optimal fishing effort (Fopt) within the framework of the surplus production model. The estimated rates of resource utilization were then compared to the criteria of the National Commission of Marine Fish Stock Assessment. The findings show that the CPUE value peaked in 2019 at 33.48 kg/trip, while the lowest value was observed in 2022 at 5.12 kg/trip. The CMSY value was estimated to be 17,396 kg/year, corresponding to an Fopt level of 1,636 trips/year. The highest utilization rate was 56.90%, recorded in 2020, while the lowest rate was observed in 2021 at 46.16%. The annual utilization rates were classified as 'medium', suggesting that increasing fishing effort by 45% could potentially maximize prawn catches at an optimum level. These findings provide a baseline for sustainable fisheries management in the region.
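A sketch of the Schaefer-type surplus production calculation outlined above: regress CPUE on standardized effort, then derive CMSY and the optimal effort from the intercept and slope. The effort and CPUE values below are placeholders, not the South Kalimantan data.

```python
# Sketch: surplus production (Schaefer) estimates from a CPUE-vs-effort regression.
import numpy as np

effort = np.array([500, 800, 1200, 1800, 2500])      # standardized trips/year (illustrative)
cpue = np.array([33.5, 28.0, 21.0, 13.0, 5.1])       # kg per trip (illustrative)

b, a = np.polyfit(effort, cpue, 1)                   # CPUE = a + b * E, with b < 0
e_opt = -a / (2 * b)                                 # effort giving maximum yield
cmsy = -a ** 2 / (4 * b)                             # maximum sustainable yield

print(f"a = {a:.2f}, b = {b:.5f}")
print(f"Optimal effort Fopt = {e_opt:.0f} trips/year")
print(f"CMSY = {cmsy:.0f} kg/year")
```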

Keywords: giant prawns, CPUE, fishing power index, sustainable potential, utilization rate

Procedia PDF Downloads 15
194 The Relationship between Body Positioning and Badminton Smash Quality

Authors: Gongbing Shan, Shiming Li, Zhao Zhang, Bingjun Wan

Abstract:

Badminton originated in ancient civilizations in Europe and Asia more than 2000 years ago. Presently, it is played almost everywhere, with an estimated 220 million people playing badminton regularly, ranging from professionals to recreational players, and it is the second most played sport in the world after soccer. In Asia, the popularity of badminton and the involvement of people surpass soccer. Unfortunately, scientific research on badminton skills is hardly proportional to badminton's popularity: a search of the literature has shown that the body of biomechanical investigations is relatively small. One of the dominant skills in badminton is the forehand overhead smash, which accounts for about one fifth of attacks during games. Empirical evidence shows that one has to adjust the body position in relation to the incoming shuttlecock to produce a powerful and accurate smash. Therefore, positioning is a fundamental aspect influencing smash quality, yet there is a dearth of studies on this fundamental aspect. The goals of this study were to determine the influence of positioning and training experience on smash quality in order to discover information that could help in learning and acquiring the skill. Using a 10-camera 3D motion capture system (VICON MX, 200 frames/s) and a 15-segment full-body biomechanical model, 14 skilled and 15 novice players were measured and analyzed. Results revealed that body positioning has a direct influence on the quality of a smash, especially on the shuttlecock release angle and the clearance height (passing over the net) of offensive players. The results also suggest that, for training proper positioning, one could take a self-selected comfortable position towards a statically hung shuttlecock and then step one foot back, a practical reference marker for learning. This perceptual marker could be applied in guiding the learning and training of beginners. As one gains experience through repetitive training, improved limb coordination would increase smash quality further. The researchers hope that the findings will benefit practitioners in developing effective training programs for beginners.

Keywords: 3D motion analysis, biomechanical modeling, shuttlecock release speed, shuttlecock release angle, clearance height

Procedia PDF Downloads 497
193 Quantitative and Qualitative Analysis of Randomized Controlled Trials in Physiotherapy from India

Authors: K. Hariohm, V. Prakash, J. Saravana Kumar

Abstract:

Introduction and Rationale: The increased scope of physiotherapy (PT) practice has also contributed to research in the field of PT. It is essential to determine the output and quality of clinical trials from India, since this may reflect the scientific growth of the profession. These trends can be taken as a baseline to measure our performance and can also be used as a guideline for future trials. Objective: To quantify and qualitatively analyze RCTs from India for the period from 2000 to May 2013, and to classify the data for the information process. Methods: Studies were searched in the Medline database using the key terms "India", "Indian", and "Physiotherapy". Only clinical trials with PT authors were included. Trials outside the scope of PT practice and trials on animals were excluded. Retrieved valid articles were analyzed for year of publication, type of participants, area of study, PEDro score, outcome measure domains (impairment, activity, participation), 'a priori' sample size calculation, region, and explanation of the intervention. Result: 45 valid articles were retrieved for the period from 2000 to May 2013. The majority of articles studied symptomatic participants (81%). The most frequently studied conditions were low back pain (n=7) and diabetes (n=4). The PEDro scores had a mode of 5, an upper limit of 8, and a lower limit of 4. 97.2% of studies measured the outcome at the impairment level, 34% at the activity level, and 27.8% at the participation level. 29.7% of studies performed an 'a priori' sample size calculation. The correlation between year trend and PEDro score was found to be non-significant (p>.05). Individual PEDro item analysis showed randomization (100%), concealment (33%), baseline comparability (76%), blinding of subject, therapist, and assessor (9.1%, 0%, 10%), follow-up (89%), intention-to-treat analysis (15%), between-group statistics (100%), and measures of variance (88%). Conclusion: The trend shows an upward slope in the number of RCTs published from India, which is a good indicator. The qualitative analysis showed some gaps in clinical trial design, which can be expected to be addressed by future researchers.

Keywords: RCT, PEDro, physical therapy, rehabilitation

Procedia PDF Downloads 339
192 Effect of Non-Regulated pH on the Dynamics of Dark Fermentative Biohydrogen Production with Suspended and Immobilized Cell Culture

Authors: Joelle Penniston, E. B. Gueguim-Kana

Abstract:

Biohydrogen has been identified as a promising alternative to the use of non-renewable fossil reserves, owing to its sustainability and non-polluting nature. pH is considered a key parameter in fermentative biohydrogen production processes, due to its effect on hydrogenase activity, metabolic activity, and substrate hydrolysis. The present study assesses the influence of regulating pH on dark fermentative biohydrogen production. Four experimental hydrogen production schemes were evaluated. Two were implemented using suspended cells, under pH-regulated growth conditions (Sus_R) and under non-regulated pH (Sus_N). The other two regimes consisted of alginate-immobilized cells under pH-regulated growth conditions (Imm_R) and under non-regulated pH conditions (Imm_N). All experiments were carried out at 37.5°C with glucose as the sole carbon source. Sus_R showed a lag time of 5 hours, a peak hydrogen fraction of 36%, and a glucose degradation of 37%, compared to Sus_N, which showed a peak hydrogen fraction of 44% and complete glucose degradation. Both suspended culture systems showed a higher peak biohydrogen fraction than the immobilized cell systems. Imm_R experiments showed a lag phase of 8 hours and a peak biohydrogen fraction of 35%, while Imm_N showed a lag phase of 5 hours and a peak biohydrogen fraction of 22%. Complete (100%) glucose degradation was observed in both the pH-regulated and non-regulated processes. This study showed that biohydrogen production in batch mode with suspended cells in a non-regulated pH environment results in partial degradation of the substrate, with a lower yield. This scheme has been the culture mode of choice for most reported studies in biohydrogen research. The relatively lower slope of the pH trend in the non-regulated experiment with immobilized cells (Imm_N), compared to Sus_N, revealed that immobilized systems have a better buffering capacity than suspended systems, which allows for the extended production of biohydrogen even under non-regulated pH conditions. However, alginate-immobilized cultures in flask systems showed some drawbacks associated with the high rate of gas production, which leads to increased buoyancy of the immobilization beads. This ultimately impedes the release of gas out of the flask.

Keywords: biohydrogen, sustainability, suspended, immobilized

Procedia PDF Downloads 340
191 Satellite Photogrammetry for DEM Generation Using Stereo Pair and Automatic Extraction of Terrain Parameters

Authors: Tridipa Biswas, Kamal Pandey

Abstract:

A Digital Elevation Model (DEM) is a simple representation of a surface in three-dimensional space, with elevation as the third dimension along with the X and Y horizontal (planimetric) coordinates in a rectangular coordinate system. DEMs have wide applications in various fields such as disaster management, hydrology and watershed management, geomorphology, urban development, map creation, and resource management. Cartosat-1, or IRS P5 (Indian Remote Sensing Satellite), is a state-of-the-art remote sensing satellite built by ISRO and launched on May 5, 2005, which is mainly intended for cartographic applications. Cartosat-1 is equipped with two panchromatic cameras capable of simultaneously acquiring images with 2.5 m spatial resolution. One camera looks +26 degrees forward while the other looks -5 degrees backward to acquire stereoscopic imagery with a base-to-height ratio of 0.62. The time difference between the acquisition of the stereo pair images is approximately 52 seconds. The high-resolution stereo data have great potential to produce high-quality DEMs, and the high-resolution Cartosat-1 stereo data are expected to have a significant impact on topographic mapping and watershed applications. The objective of the present study is the generation of a high-resolution DEM, its quality evaluation in different elevation strata, the generation of an ortho-rectified image, and the associated accuracy assessment based on Ground Control Points (GCPs), using CARTOSAT-1 data for the Aglar watershed (Tehri-Garhwal and Dehradun districts, Uttarakhand, India). The present study reveals that the generated DEMs (10 m and 30 m) derived from the CARTOSAT-1 stereo pair are more accurate than the existing DEMs (ASTER and CARTO DEM), and that the terrain parameters derived from them, such as slope, aspect, drainage, and watershed boundaries, also show better accuracy than the terrain parameters derived from the other two (ASTER and CARTO) DEMs.
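As a small illustration of the terrain parameter extraction mentioned above, slope and aspect can be derived from a DEM grid with finite differences; the toy elevation array, cell size, and aspect convention below are assumptions rather than the Aglar watershed data.

```python
# Sketch: slope and aspect from a DEM grid using NumPy gradients.
import numpy as np

dem = np.array([[100, 102, 105, 109],
                [101, 104, 108, 113],
                [103, 107, 112, 118],
                [106, 111, 117, 124]], dtype=float)
cell_size = 10.0  # metres (illustrative)

dz_dy, dz_dx = np.gradient(dem, cell_size)                  # per-axis elevation gradients
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
# Aspect conventions vary between GIS packages; this is one common choice (0 = north)
aspect_deg = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360

print("Slope (degrees):\n", np.round(slope_deg, 1))
print("Aspect (degrees):\n", np.round(aspect_deg, 1))
```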

Keywords: ASTER-DEM, CARTO-DEM, CARTOSAT-1, digital elevation model (DEM), ortho-rectified image, photogrammetry, RPC, stereo pair, terrain parameters

Procedia PDF Downloads 306
190 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible to manage for human beings, and especially for doctors. Artificial intelligence refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are the following. In ultrasound: automated segmentation of cardiac chambers across five common views, with consequent quantification of chamber volumes and mass, estimation of ejection fraction, and determination of longitudinal strain through speckle tracking; determination of the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identification of myocardial infarction; distinguishing between athlete's heart and hypertrophic cardiomyopathy, as well as between restrictive cardiomyopathy and constrictive pericarditis; and prediction of all-cause mortality. In CT: reduction of radiation doses; calculation of the calcium score; diagnosis of coronary artery disease (CAD); prediction of all-cause 5-year mortality; and prediction of major cardiovascular events in patients with suspected CAD. In MRI: segmentation of cardiac structures and infarct tissue; calculation of cardiac mass and function parameters; and distinguishing between patients with myocardial infarction and control subjects, which could potentially reduce costs since it would preclude the need for gadolinium-enhanced CMR; as well as prediction of 4-year survival in patients with pulmonary hypertension. In nuclear imaging: classification of normal and abnormal myocardium in CAD; detection of locations with abnormal myocardium; and prediction of cardiac death; furthermore, ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI emerges as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 77
189 On the Survival of Individuals with Type 2 Diabetes Mellitus in the United Kingdom: A Retrospective Case-Control Study

Authors: Njabulo Ncube, Elena Kulinskaya, Nicholas Steel, Dmitry Pshezhetskiy

Abstract:

Life expectancy in the United Kingdom (UK) has been nearly constant since 2010, particularly for individuals aged 65 years and older. This trend has also been noted in several other countries. This slowdown in the increase of life expectancy was concurrent with an increase in the number of deaths caused by non-communicable diseases. Of particular concern is the worldwide exponential increase in the number of diabetes-related deaths. Previous studies have reported increased mortality hazards among diabetics compared to non-diabetics, and differing effects of antidiabetic drugs on mortality hazards. This study aimed to estimate the all-cause mortality hazards and related life expectancies among type 2 diabetes (T2DM) patients in the UK using the time-variant Gompertz-Cox model with frailty. The study also aimed to understand the major causes of the change in life expectancy growth in the last decade. A total of 221,182 individuals (30.8% T2DM, 57.6% males) aged 50 years and above, born between 1930 and 1960, inclusive, and diagnosed between 2000 and 2016, were selected from The Health Improvement Network (THIN) database of UK primary care data and followed up to 31 December 2016. About 13.4% of participants died during the follow-up period. The overall all-cause mortality hazard ratio of T2DM compared to non-diabetic controls was 1.467 (1.381-1.558) and 1.38 (1.307-1.457) when diagnosed at 50 to 59 years and at 60 to 74 years of age, respectively. The estimated life expectancies among T2DM individuals without further comorbidities diagnosed at the age of 60 years were 2.43 (1930-1939 birth cohort), 2.53 (1940-1949 birth cohort), and 3.28 (1950-1960 birth cohort) years less than those of non-diabetic controls. Moreover, the 1950-1960 birth cohort had a steeper hazard function compared to the 1940-1949 birth cohort for both T2DM and non-diabetic individuals. In conclusion, mortality hazards for people with T2DM continue to be higher than for non-diabetics. The steeper mortality hazard slope for the 1950-1960 birth cohort might indicate a sub-population contributing to the slowdown in the growth of life expectancy.
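A simplified sketch of the Gompertz proportional-hazards arithmetic underlying this kind of analysis (the study's time-variant Gompertz-Cox model with frailty is considerably richer): the baseline parameters below are illustrative, and only the hazard ratio is taken from the abstract.

```python
# Sketch: Gompertz baseline hazard h(t) = b * exp(theta * t) and the survival
# implied by scaling the baseline with a proportional hazard ratio.
import numpy as np

def gompertz_survival(age, b, theta, age0=50.0):
    """S(t) for h(t) = b * exp(theta * t), with t = age - age0."""
    t = age - age0
    return np.exp(-(b / theta) * (np.exp(theta * t) - 1.0))

b0, theta = 0.002, 0.10          # illustrative baseline hazard parameters at age 50
hr_t2dm = 1.467                  # reported hazard ratio, diagnosis at 50-59 years

ages = np.arange(50, 91, 10)
s_control = gompertz_survival(ages, b0, theta)
s_t2dm = gompertz_survival(ages, b0 * hr_t2dm, theta)   # proportional hazards: scale b

for age, sc, sd in zip(ages, s_control, s_t2dm):
    print(f"age {age}: S_control = {sc:.3f}, S_T2DM = {sd:.3f}")
```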

Keywords: T2DM, Gompertz-Cox model with frailty, all-cause mortality, life expectancy

Procedia PDF Downloads 118
188 Hydro-Meteorological Vulnerability and Planning in Urban Area: The Case of Yaoundé City in Cameroon

Authors: Ouabo Emmanuel Romaric, Amougou Armathe

Abstract:

Background and aim: The study of the impacts of floods and landslides at a small scale, specifically in the urban areas of developing countries, is carried out to provide tools for actors to better manage risks in such areas, which are now being affected by climate change. The main objective of this study is to assess the hydrometeorological vulnerabilities associated with flooding and urban landslides and to propose adaptation measures. Methods: Climatic data were analyzed by calculating indices of climate change over the period 1960-2012. Analyses of field data to determine the causes, the level of risk, and its consequences for the study area were carried out using SPSS 18 software. Cartographic analysis and GIS were used to refine the work spatially. Then, spatial and terrain analyses were carried out to determine the morphology of the terrain in relation to floods and landslides and their diffusion across the area. Results: The interannual changes in precipitation highlighted surplus years (21), deficit years (24), and normal years (7). The Barakat method brings out an evolution of precipitation by jerks and jumps. Floods and landslides are correlated with high precipitation during surplus and normal years. Field data analyses show that the population is aware of the risks (78%), with 74% of them exposed, but their adaptation capacity is very low (51%). Floods are the main risk. The soils are classified as ferralitic (80%), hydromorphic (15%), and raw mineral (5%). Slope variations (5% to 15%) on small hills and deep valleys, combined with unplanned construction, favor floods and landslides during heavy precipitation. Mismanagement of waste blocks the free flow of rivers and accentuates floods. Conclusion: The vulnerability of the population to hydrometeorological risks in Yaoundé VI is the combination of variations in parameters such as precipitation and temperature due to climate change and poor planning of construction in urban areas. Because of the lack of channels for water to circulate, the saturation of soils, the increase in heavy precipitation, and the mismanagement of waste, the result is floods and landslides, which cause extensive damage to goods and people.

Keywords: climate change, floods, hydrometeorological, vulnerability

Procedia PDF Downloads 465
187 Effect of Forests and Forest Cover Change on Rainfall in the Central Rift Valley of Ethiopia

Authors: Alemayehu Muluneh, Saskia Keesstra, Leo Stroosnijder, Woldeamlak Bewket, Ashenafi Burka

Abstract:

There is some scientific evidence, and a belief held by many, that forests attract rain and that deforestation contributes to a decline in rainfall. However, there is still a lack of concrete scientific evidence on the role of forests in rainfall amounts. In this paper, we investigate forest-rainfall relationships in the environmental hot spot area of the Central Rift Valley (CRV) of Ethiopia. Specifically, we evaluate long-term (1970-2009) rainfall variability and its relationship with historical forest cover, as well as the relationship between existing forest cover, topographical variables, and rainfall distribution. The study used 16 long-term and 15 short-term rainfall stations. The Mann-Kendall test and bivariate and multiple regression models were used. The results show that forest and woodland cover declined continuously over the 40-year period (1970-2009), but annual rainfall on the rift valley floor increased by 6.42 mm/year. On the escarpment and highlands, however, annual rainfall decreased by 2.48 mm/year. The increase in annual rainfall on the rift valley floor is partly attributable to increased evaporation, as a result of increasing temperatures, from the four existing lakes on the rift valley floor. Although annual rainfall is decreasing on the escarpment and highlands, there was no significant correlation between this rainfall decrease and the decline in forest and woodland cover, and rainfall variability in the region was not explained by forest cover. Hence, the decrease in annual rainfall on the escarpment and highlands is likely related to the global warming of the atmosphere and of the surface waters of the Indian Ocean. The spatial variability of the number of rainy days, from systematically observed two-year rainfall data (2012-2013), was significantly (R = -0.63) explained by forest cover (distance from forest). However, forest cover was not a significant variable (R = -0.40) in explaining annual rainfall amount. Generally, past deforestation and existing forest cover showed very little effect on long-term and short-term rainfall distribution, but a significant effect on the number of rainy days in the CRV of Ethiopia.
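A basic, non-seasonal Mann-Kendall trend test of the kind cited above can be sketched as follows; the rainfall series is a synthetic placeholder rather than station data, and ties are ignored for brevity.

```python
# Sketch: Mann-Kendall trend test (S statistic, normal approximation, no tie correction).
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S: sum of signs of all pairwise (later - earlier) differences
    s = np.sum([np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n)])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))          # two-sided p-value
    return s, z, p

rainfall = [812, 798, 830, 845, 801, 860, 875, 890, 850, 902]  # mm/year, synthetic
s, z, p = mann_kendall(rainfall)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}")
```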

Keywords: elevation, forest cover, rainfall, slope

Procedia PDF Downloads 545
186 Effects of Nutrient Source and Drying Methods on Physical and Phytochemical Criteria of Pot Marigold (Calendula offiCinalis L.) Flowers

Authors: Leila Tabrizi, Farnaz Dezhaboun

Abstract:

In order to study the effect of plant nutrient source and different drying methods on the physical and phytochemical characteristics of pot marigold (Calendula officinalis L., Asteraceae) flowers, a factorial experiment was conducted based on a completely randomized design with three replications in the Research Laboratory of the University of Tehran in 2010. Different nutrient sources (vermicompost, municipal waste compost, cattle manure, mushroom compost, and control), which were applied in a field experiment for flower production, and different drying methods, including microwave (300, 600, and 900 W), oven (60, 70, and 80°C), and natural-shade drying at room temperature, were tested. Criteria such as drying kinetics, antioxidant activity, total flavonoid content, total phenolic compounds, and total carotenoids of the flowers were evaluated. Results indicated that organic inputs as a nutrient source for the flowers had no significant effect on the quality criteria of pot marigold except for total flavonoid content, while the drying methods significantly affected the phytochemical criteria. Microwave drying at 300, 600, and 900 W resulted in the highest total flavonoid content, total phenolic compounds, and antioxidant activity, respectively, while oven drying gave the lowest values for the phytochemical criteria. Also, the interaction effect of nutrient source and drying method significantly affected antioxidant activity, with the highest antioxidant activity obtained from the combination of vermicompost and 900 W microwave drying. In addition, the application of vermicompost combined with oven drying at 60°C gave the lowest antioxidant activity. Based on the drying trends, microwave drying showed a faster drying rate than oven and natural-shade drying; with increasing microwave power and oven temperature, flower drying time decreased, while the slope of the moisture content reduction curve became steeper.

Keywords: drying kinetic, medicinal plant, organic fertilizer, phytochemical criteria

Procedia PDF Downloads 334
185 Flood Hazard Assessment and Land Cover Dynamics of the Orai Khola Watershed, Bardiya, Nepal

Authors: Loonibha Manandhar, Rajendra Bhandari, Kumud Raj Kafle

Abstract:

Nepal's Terai region is a part of the Ganges river basin, which is one of the most disaster-prone areas of the world, with recurrent monsoon flooding causing millions in damage and the death and displacement of hundreds of people and households every year. The vulnerability of human settlements to natural disasters such as floods is increasing, and mapping changes in land use practices and hydro-geological parameters is essential in developing resilient communities and strong disaster management policies. The objective of this study was to develop a flood hazard zonation map of the Orai Khola watershed and to map the decadal land use/land cover dynamics of the watershed. The watershed area was delineated using the SRTM DEM, and LANDSAT images were classified into five land use classes (forest, grassland, sediment and bare land, settlement area and cropland, and water body) using pixel-based semi-automated supervised maximum likelihood classification. Decadal changes in each class were then quantified using spatial modelling. Flood hazard mapping was performed by assigning weights to the factors slope, rainfall distribution, distance from the river, and land use/land cover, on the basis of their estimated influence on flood hazard, and performing weighted overlay analysis to identify areas that are highly vulnerable. The forest and grassland coverage increased by 11.53 km² (3.8%) and 1.43 km² (0.47%) from 1996 to 2016. The sediment and bare land areas decreased by 12.45 km² (4.12%) from 1996 to 2016, whereas settlement and cropland areas showed a consistent increase to 14.22 km² (4.7%). Waterbody coverage also increased to 0.3 km² (0.09%) from 1996 to 2016. 1.27% (3.65 km²) of the total watershed area was categorized as a very low hazard zone, 20.94% (60.31 km²) as a low hazard zone, 37.59% (108.3 km²) as a moderate hazard zone, and 29.25% (84.27 km²) as a high hazard zone, while 31 villages, comprising 10.95% (31.55 km²), were categorized as a very high hazard zone.

Keywords: flood hazard, land use/land cover, Orai river, supervised maximum likelihood classification, weighted overlay analysis

Procedia PDF Downloads 351
184 Critical Parameters of a Square-Well Fluid

Authors: Hamza Javar Magnier, Leslie V. Woodcock

Abstract:

We report extensive molecular dynamics (MD) computational investigations into the thermodynamic description of supercritical properties for a model fluid that is the simplest realistic representation of atoms or molecules. The pair potential is a hard-sphere repulsion of diameter σ with a very short attraction of length λσ. When λ = 1.005, the range is so short that the model atoms are referred to as "adhesive spheres". Molecular dimers, trimers, etc., up to large clusters, or droplets, of many adhesive-sphere atoms are unambiguously defined. This then defines percolation transitions at the molecular level that bound the existence of gas and liquid phases at supercritical temperatures and that define the existence of a supercritical mesophase. Both liquid and gas phases are seen to terminate at the loci of percolation transitions and, below a second characteristic temperature (Tc2), are separated by the supercritical mesophase. An analysis of the distribution of clusters in the gas, meso-, and liquid phases confirms the colloidal nature of this mesophase. The general phase behaviour is compared both with experimental properties of the water-steam supercritical region and with the formally exact cluster theory of Mayer and Mayer. Both are found to be consistent with the present findings that, in this system, the supercritical mesophase narrows in density with increasing T > Tc and terminates at a higher Tc2 at a confluence of the primary percolation loci. The expanded plot of the MD data points in the mesophase for 7 critical and supercritical isotherms highlights this narrowing in density of the linear-slope region of the mesophase as temperature is increased above the critical temperature. This linearity in the mesophase implies the existence of a linear combination rule between gas and liquid, which is an extension of the lever rule in the subcritical region and can be used to obtain critical parameters without resorting to experimental data in the two-phase region. Using this combination rule, the calculated critical parameters Tc = 0.2007 and Pc = 0.0278 are found to agree with the values found by Largo and coworkers. The properties of this supercritical mesophase are shown to be consistent with an alternative description of the phenomenon of critical opalescence seen in the supercritical region of both molecular and colloidal-protein supercritical fluids.

Keywords: critical opalescence, supercritical, square-well, percolation transition, critical parameters

Procedia PDF Downloads 520
183 Load-Deflecting Characteristics of a Fabricated Orthodontic Wire with 50.6Ni 49.4Ti Alloy Composition

Authors: Aphinan Phukaoluan, Surachai Dechkunakorn, Niwat Anuwongnukroh, Anak Khantachawana, Pongpan Kaewtathip, Julathep Kajornchaiyakul, Peerapong Tua-Ngam

Abstract:

Aims: The objectives of this study were to determine the load-deflection characteristics of a fabricated orthodontic wire with an alloy composition of 50.6% (atomic weight) Ni and 49.4% (atomic weight) Ti and to compare the results with Ormco, a commercially available pre-formed NiTi orthodontic archwire. Materials and Methods: Ingots with the atomic weight ratio 50.6 Ni : 49.4 Ti were used in this study. Three specimens were cut to wire dimensions of 0.016 inch x 0.022 inch. For comparison, a commercially available pre-formed NiTi archwire, Ormco, with dimensions of 0.016 inch x 0.022 inch was used. Three-point bending tests were performed at 36±1 °C using a Universal Testing Machine on the newly fabricated and commercial archwires to assess the characteristics of the load-deflection curve under loading and unloading forces. The loading and unloading features at the deflection points 0.25, 0.50, 0.75, 1.0, 1.25, and 1.5 mm were compared. Descriptive statistics were used to evaluate each variable, and an independent t-test at p < 0.05 was used to analyze the mean differences between the two groups. Results: The load-deflection curve of the 50.6Ni: 49.4Ti wires exhibited the characteristic features of superelasticity. The loading and unloading slopes of the Ormco NiTi archwire curve were more parallel than those of the newly fabricated NiTi wires. The average deflection force of the 50.6Ni: 49.4Ti wire was 304.98 g for loading and 208.08 g for unloading. Similarly, the values were 358.02 g for loading and 253.98 g for unloading for the Ormco NiTi archwire. The interval difference forces between the deflection points were in the range 20.40-121.38 g and 36.72-92.82 g for the loading and unloading curves of the 50.6Ni: 49.4Ti wire, respectively, and 4.08-157.08 g and 14.28-90.78 g for the loading and unloading curves of the commercial wire, respectively. The average deflection force of the 50.6Ni: 49.4Ti wire was less than that of the Ormco NiTi archwire, which could have been due to variations in the wire dimensions. Although a greater force was required at each deflection point of loading and unloading for the 50.6Ni: 49.4Ti wire compared to the Ormco NiTi archwire, the values were still within acceptable limits for clinical use in orthodontic treatment. Conclusion: The 50.6Ni: 49.4Ti wires presented the characteristics of a superelastic orthodontic wire. The loading and unloading forces were also suitable for orthodontic tooth movement. These results serve as a suitable foundation for further studies in the development of new orthodontic NiTi archwires.

Keywords: 50.6 Ni 49.4 Ti alloy wire, load deflection curve, loading and unloading force, orthodontic

Procedia PDF Downloads 300
182 Determination of Potential Agricultural Lands Using Landsat 8 OLI Images and GIS: Case Study of Gokceada (Imroz) Turkey

Authors: Rahmi Kafadar, Levent Genc

Abstract:

The present study aimed to determine potential agricultural lands (PALs) on Gokceada (Imroz) Island of Canakkale province, Turkey. Seven-band Landsat 8 OLI images acquired on July 12 and August 13, 2013, and their 14-band combination image were used to identify the current Land Use Land Cover (LULC) status. Principal Component Analysis (PCA) was applied to the three Landsat datasets in order to reduce the correlation between the bands. A total of six original and PCA images were classified using a supervised classification method to obtain LULC maps including six main classes (“Forest”, “Agriculture”, “Water Surface”, “Residential Area-Bare Soil”, “Reforestation” and “Other”). Accuracy assessment was performed by checking the accuracy of 120 randomized points for each LULC map. The best overall accuracy and Kappa statistic values (90.83% and 0.8791, respectively) were found for the PCA images generated from the 14-band combined image, called 3-B/JA. A Digital Elevation Model (DEM) with 15 m spatial resolution (ASTER) was used to account for topographical characteristics. Soil properties were obtained by digitizing the 1:25,000-scale soil maps of the Rural Services Directorate General. Potential Agricultural Lands (PALs) were determined using Geographic Information Systems (GIS). The procedure was applied considering that the “Other” class of the LULC map may be used for agricultural purposes in the future. Overlay analysis was conducted using Slope (S), Land Use Capability Class (LUCC), Other Soil Properties (OSP), and Land Use Capability Sub-Class (SUBC) properties. A total of 901.62 ha within the “Other” class (15,798.2 ha) of the LULC map was determined as PALs. These lands were ranked as “Very Suitable”, “Suitable”, “Moderate Suitable”, and “Low Suitable”. It was determined that 8.03 ha were classified as “Very Suitable”, 18.59 ha as “Suitable”, and 11.44 ha as “Moderate Suitable” for PALs. In addition, 756.56 ha were found to be “Low Suitable”. The results obtained from this preliminary study can serve as a basis for further studies.
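
For illustration only (not the authors' workflow), the Python sketch below shows the two generic computations described above: a principal component transform of a stacked multi-band image and an accuracy assessment (overall accuracy and Kappa) from randomly sampled check points. All arrays are random placeholders for the real Landsat rasters and reference labels.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
rows, cols, bands = 100, 100, 14          # e.g. a 14-band combined image
stack = rng.random((rows, cols, bands))   # placeholder for the band stack

# PCA: flatten pixels to (n_samples, n_bands) and keep the leading components.
pixels = stack.reshape(-1, bands)
pcs = PCA(n_components=3).fit_transform(pixels).reshape(rows, cols, 3)

# Accuracy assessment on 120 random check points (labels are placeholders).
reference = rng.integers(0, 6, 120)       # 6 LULC classes
classified = reference.copy()
classified[rng.choice(120, size=11, replace=False)] = rng.integers(0, 6, 11)
print(f"overall accuracy = {accuracy_score(reference, classified):.4f}")
print(f"kappa            = {cohen_kappa_score(reference, classified):.4f}")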

Keywords: digital elevation model (DEM), geographic information systems (GIS), Gokceada (Imroz), Landsat 8 OLI-TIRS, land use land cover (LULC)

Procedia PDF Downloads 352
181 Optimization of Water Pipeline Routes Using a GIS-Based Multi-Criteria Decision Analysis and a Geometric Search Algorithm

Authors: Leon Mortari

Abstract:

The Metropolitan East region of Rio de Janeiro state, Brazil, faces historic water scarcity. Among the alternatives studied to solve this situation, the possibility of adduction of the water available in the Lagoa de Juturnaíba reservoir to supply the region's municipalities stands out. The routing of a linear engineering project must be decided through an evaluation of different aspects, such as altitude, slope, proximity to roads, distance from watercourses, land use and occupation, and physical and chemical features of the soil. This work aims to apply a multi-criteria model that combines geoprocessing techniques, decision-making, and a geometric search algorithm to optimize a hypothetical adductor system in the scenario of expanding the water supply system that serves this region, known as Imunana-Laranjal, using the Lagoa de Juturnaíba as the source. This study proposes the construction of a spatial database related to the evaluation criteria presented, the treatment and rasterization of these data, and the standardization and reclassification of this information on a Geographic Information System (GIS) platform. The methodology involves the integrated analysis of these criteria, with their relative importance defined by weights derived from expert consultations and the Analytic Hierarchy Process (AHP) method. Three approaches are defined for weighting the criteria by AHP: the first treats all criteria as equally important, the second considers weighting based on a pairwise comparison matrix, and the third establishes a hierarchy based on the priority of the criteria. For each approach, a distinct group of weightings is defined. In the next step, map algebra tools are used to overlay the layers and generate cost surfaces that indicate the resistance to the passage of the adductor route, using the three groups of weightings. The Dijkstra algorithm, a geometric search algorithm, is then applied to these cost surfaces to find an optimized path within the geographical space, aiming to minimize resources, time, investment, maintenance, and environmental and social impacts.
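
A minimal sketch of the two computational steps described above, assuming generic Python tooling rather than the project's GIS implementation: criterion weights are derived from an AHP pairwise comparison matrix via its principal eigenvector, a weighted overlay produces a cost surface, and Dijkstra's algorithm finds the least accumulated cost across the raster. The 3x3 matrix and the small grid are invented placeholders.

import heapq
import numpy as np

def ahp_weights(pairwise):
    """Normalised principal eigenvector of a reciprocal pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def dijkstra_grid(cost, start, goal):
    """Least accumulated cost from start to goal over a 2-D cost raster (4-neighbour moves)."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    return np.inf

# Example: weight three standardised criterion rasters, then route a least-cost path.
weights = ahp_weights(np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]))
layers = np.random.default_rng(1).random((3, 4, 4))      # placeholder criterion rasters
cost_surface = np.tensordot(weights, layers, axes=1)     # weighted overlay
print("weights:", np.round(weights, 3))
print("least cost:", round(dijkstra_grid(cost_surface, (0, 0), (3, 3)), 3))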

Keywords: geometric search algorithm, GIS, pipeline, route optimization, spatial multi-criteria analysis model

Procedia PDF Downloads 30
180 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect with water. An ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where there are many tributaries and meanders, often adjacent to each other. One challenge is to split the photon segments along a beam so as to accurately partition them and extract only the true representative water height for each individual water body. As far as we can establish, there is no automated procedure to make this distinction. Earlier studies have relied on human intervention or river masks. Both approaches are unsatisfactory solutions where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called “auto-segmentation”. The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESAT-2 ATL13 altimetry to rivers compared to previously reported studies. Therefore, the findings of our study will make a significant contribution towards the retrieval of hydraulic parameters, such as the water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
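
A simplified, hypothetical sketch of the auto-segmentation idea (not the authors' code): along-track points are split into separate water-body segments wherever the spacing exceeds a gap threshold, and a mean WSE is computed per segment after a basic 1.5×IQR outlier screen. The threshold and the synthetic data below are illustrative assumptions only.

import numpy as np

def split_by_gaps(along_track_m, gap_threshold_m=120.0):
    """Return index groups separated by along-track gaps larger than the threshold."""
    order = np.argsort(along_track_m)
    breaks = np.where(np.diff(along_track_m[order]) > gap_threshold_m)[0] + 1
    return np.split(order, breaks)

def mean_wse_iqr(heights_m):
    """Mean water surface elevation after discarding points outside 1.5*IQR."""
    q1, q3 = np.percentile(heights_m, [25, 75])
    iqr = q3 - q1
    keep = (heights_m >= q1 - 1.5 * iqr) & (heights_m <= q3 + 1.5 * iqr)
    return heights_m[keep].mean()

# Synthetic example: two river crossings separated by ~500 m along the track.
rng = np.random.default_rng(7)
x = np.concatenate([np.linspace(0, 80, 40), np.linspace(600, 700, 50)])
h = np.concatenate([rng.normal(3.2, 0.05, 40), rng.normal(5.8, 0.05, 50)])
h[10] += 2.0                                  # one outlier (e.g. a non-water return)
for group in split_by_gaps(x):
    print(f"segment of {group.size} points, mean WSE = {mean_wse_iqr(h[group]):.2f} m")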

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 60
179 Realistic Modeling of the Preclinical Small Animal Using Commercial Software

Authors: Su Chul Han, Seungwoo Park

Abstract:

With the increasing incidence of cancer, the technology and modalities of radiotherapy have advanced, and the importance of preclinical models in cancer research is growing. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect observed in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose, using commercial software. The small animal phantom was modeled from the 4D digital mouse whole-body (Moby) phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files with Matlab; the two-dimensional CT images were then converted to a three-dimensional image, which made it possible to segment and crop the CT images in sagittal, coronal, and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line value, thresholding was carried out to make a mask connecting all regions within the same threshold range. Using this thresholding method, we segmented the images into three parts (bone, body tissue, and lung); to separate neighboring pixels between the lung and body tissue, we used the region growing function of the Mimics software. We acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation with a smoothing factor of 0.4 and 5 iterations. The edge mode was selected to perform triangle reduction, with parameters of tolerance (0.1 mm), edge angle (15 degrees), and number of iterations (5). The processed 3D object file was converted to an STL file for output on a 3D printer. We modified the 3D small animal file using 3-Matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. We thereby acquired a 3D object of a realistic small animal phantom. The width of the small animal phantom was 2.631 cm, the thickness was 2.361 cm, and the length was 10.817 cm. The Mimics software provided efficient 3D object generation and convenient conversion to STL files. The development of a small preclinical animal phantom would increase the reliability of absorbed-dose verification in small animals for preclinical studies.
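
The threshold-and-separate step described above can be illustrated with a generic Python sketch (this is not the Mimics workflow; the HU-like thresholds, the seed location, and the synthetic volume are assumptions, and a seeded connected-component pass stands in for region growing).

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
ct = rng.normal(40, 10, (60, 64, 64))                          # fake soft-tissue background (HU-like)
ct[20:40, 20:44, 10:30] = rng.normal(-750, 30, (20, 24, 20))   # fake lung air pocket
ct[:, :, 60:] = rng.normal(900, 50, (60, 64, 4))               # fake bone slab

bone_mask = ct > 300
lung_candidates = ct < -400
tissue_mask = ~bone_mask & ~lung_candidates

# "Region growing" stand-in: label connected low-density voxels and keep the
# component containing a seed voxel placed inside the lung.
labels, _ = ndimage.label(lung_candidates)
seed = (30, 30, 20)
lung_mask = labels == labels[seed]

for name, mask in (("bone", bone_mask), ("lung", lung_mask), ("tissue", tissue_mask)):
    print(f"{name}: {int(mask.sum())} voxels")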

Keywords: mimics, preclinical small animal, segmentation, 3D printer

Procedia PDF Downloads 365
178 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers

Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang

Abstract:

Strawberry powdery mildew (PM) is a serious disease that has a significant impact on strawberry production. Field scouting is still the main way to find PM disease; it is not only labor intensive but also makes it almost impossible to monitor disease severity. To reduce the losses caused by PM disease and achieve faster automatic detection, this paper proposes an approach for detecting the disease based on image texture, classified with support vector machines (SVMs) and k-nearest neighbors (kNNs). The methodology is based on image processing and is composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction, and classification. Two strawberry fields were used in this study. Images of healthy leaves and leaves infected with PM (Sphaerotheca macularis) were acquired under artificial cloud lighting conditions. Colour thresholding was utilized to segment all images before textural analysis. The colour co-occurrence matrix (CCM) was introduced for the extraction of textural features. Forty textural features related to physiological parameters of the leaves were extracted from the CCMs of National Television System Committee (NTSC) luminance and hue, saturation, and intensity (HSI) images. The normalized feature data were utilized for training and validation of the developed classifiers. The classifiers were evaluated with internal, external, and cross-validation, and the best classifier was selected based on performance and accuracy. Experimental results showed that the SVM classifier achieved 98.33%, 85.33%, 87.33%, 93.33%, and 95.0% accuracy on internal, external-I, external-II, 4-fold cross-, and 5-fold cross-validation, respectively, whereas the kNN classifier yielded 90.0%, 72.00%, 74.66%, 89.33%, and 90.3% classification accuracy, respectively. The SVM classified PM disease with the highest overall accuracy of 91.86% and a processing time of 1.1211 seconds. Overall, the results indicate that the proposed approach, with an SVM classifier, can significantly support accurate and automatic identification and recognition of strawberry PM disease.
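
A minimal, hypothetical sketch of a texture-plus-SVM pipeline of the kind described above (not the authors' forty-feature CCM implementation): grey-level co-occurrence features are extracted per image patch with scikit-image and fed to an SVM. The patches and labels are random placeholders for segmented leaf images.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def glcm_features(patch_u8):
    """Contrast, correlation, energy, and homogeneity from a 0-255 grey patch."""
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "correlation", "energy", "homogeneity")])

rng = np.random.default_rng(5)
patches = rng.integers(0, 256, (60, 32, 32), dtype=np.uint8)   # stand-in leaf patches
labels = rng.integers(0, 2, 60)                                # 0 = healthy, 1 = PM

X = np.array([glcm_features(p) for p in patches])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25,
                                                    random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print(f"hold-out accuracy: {clf.score(X_test, y_test):.2f}")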

Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors

Procedia PDF Downloads 120
177 Effects of Exercise in the Cold on Glycolipid Metabolism and Insulin Sensitivity in Obese Rats

Authors: Chaoge Wang, Xiquan Weng, Yan Meng, Wentao Lin

Abstract:

Objective: Cold exposure and exercise serve as two physiological stimuli to glycolipid metabolism and insulin sensitivity. So far, it remains to be elucidated whether exercise plus cold exposure can produce an additive effect on promoting glycolipid metabolism and insulin sensitivity. Methods: Sixty-four SD rats were subjected to a high-fat, high-sugar diet for 9 weeks to successfully establish an obesity model. They were randomly divided into 8 groups: normal control group (NC), normal exercise group (NE), continuous cold control group (CC), continuous cold exercise group (CE), acute cold control group (AC), acute cold exercise group (AE), intermittent cold control group (IC), and intermittent cold exercise group (IE). For continuous cold exposure, the rats stayed in a cold environment all day; for acute cold exposure, the rats were exposed to cold for only 4 h before the end of the experiment; for intermittent cold exposure, the rats were exposed to cold for 4 h per day. The treadmill running protocol was as follows: speed 25 m/min, slope 0°, 30 min per bout, with a 10 min interval between two bouts, twice every two days, lasting for 5 weeks. Sampling was conducted at the end of the 5th week. Blood lipids, free fatty acids, blood glucose (FBG), and serum insulin (FINS) were examined, and the insulin resistance index (HOMA-IR = FBG (mmol/L) × FINS (mIU/L) / 22.5) was calculated. SPSS 22.0 was used for statistical analysis of the experimental results, and ANOVA was performed between groups (p < 0.05 was considered significant). Results: (1) Compared with the NC group, FBG was significantly lower in the NE, CE, AC, AE, and IE groups (p < 0.05), FINS was significantly lower in the AE group (p < 0.05), and HOMA-IR was significantly lower in the NE, CE, AC, AE, and IE groups (p < 0.05). Compared with the NE group, FBG was significantly lower in the CE, AE, and IE groups (p < 0.05), and FINS and HOMA-IR were significantly lower in the AE group (p < 0.05). (2) Compared with the NC group, CHO, TG, LDL-C, and FFA were significantly lower in the CE and IE groups (p < 0.05), and HDL-C was significantly higher in the NE, CC, CE, AE, and IE groups (p < 0.05). Compared with the NE group, HDL-C was significantly higher in the CE and IE groups (p < 0.05). Conclusions: Sedentary exposure to, or exercise in, acute cold is of limited value for the treatment of type 2 diabetes, as it produced only a one-off increase in the body's insulin sensitivity. Exercise in continuous and intermittent cold can effectively decrease FBG, TC, TG, LDL-C, and FFA levels and increase the HDL-C level and insulin sensitivity in obese rats. These results are relevant to the prevention and treatment of type 2 diabetes.
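
The HOMA-IR formula quoted above is simple arithmetic; as a hypothetical worked example in Python (the values are illustrative, not study data):

def homa_ir(fbg_mmol_l, fins_miu_l):
    """HOMA-IR = fasting glucose (mmol/L) x fasting insulin (mIU/L) / 22.5."""
    return fbg_mmol_l * fins_miu_l / 22.5

print(round(homa_ir(7.8, 14.0), 2))   # prints 4.85 for a hypothetical obese rat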

Keywords: cold, exercise, insulin sensitivity, obesity

Procedia PDF Downloads 142