Search results for: fully spatial signal processing
6622 Estimation of Soil Erosion Potential in Herat Province, Afghanistan
Authors: M. E. Razipoor, T. Masunaga, K. Sato, M. S. Saboory
Abstract:
Estimation of soil erosion is economically and environmentally important in Herat, Afghanistan. Soil degradation has negative impacts on the lives of Herat residents: decreased soil fertility, destroyed soil structure, and consequently soil sealing and crusting. Water and wind are the main erosive agents causing soil erosion in Herat. Furthermore, scarce vegetation cover, exacerbated by socioeconomic constraints, and steep slopes accelerate soil erosion. To sustain soil productivity and reduce the impact of soil erosion on human life, and thereby to sustain agricultural production and safeguard the environment, the magnitude and extent of soil erosion must be quantified in a spatial domain. Thus, this study estimates soil loss potential and its spatial distribution in Herat, Afghanistan by applying RUSLE in a GIS environment. The rainfall erosivity factor (R) ranged from 125 to 612 MJ mm ha⁻¹ h⁻¹ year⁻¹. The soil erodibility factor (K) varied from 0.036 to 0.073 Mg h MJ⁻¹ mm⁻¹. Slope length and steepness factor (LS) values were between 0.03 and 31.4. The vegetation cover factor (C), derived from NDVI analysis of Landsat-8 OLI scenes, ranged from 0.03 to 1. The support practice factor (P) was assigned a value of 1, since there are no significant mitigation practices in the study area. The soil erosion potential map was the product of these factors. The mean soil erosion rate of Herat Province was 29 Mg ha⁻¹ year⁻¹, ranging from 0.024 Mg ha⁻¹ year⁻¹ in flat areas with dense vegetation cover to 778 Mg ha⁻¹ year⁻¹ on steep slopes with high rainfall but the least vegetation cover. Based on the land cover map of Afghanistan, areas with soil loss rates higher than the soil loss tolerance (8 Mg ha⁻¹ year⁻¹) occupy 98% of forests, 81% of rangelands, 64% of barren lands, 60% of rainfed lands, 28% of urban areas, and 18% of irrigated lands.
Keywords: Afghanistan, erosion, GIS, Herat, RUSLE
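The erosion potential map described above is the cell-by-cell product of the RUSLE factors, A = R · K · LS · C · P. A minimal sketch of that multiplication, using illustrative values taken from within the ranges reported in the abstract (the actual per-cell factor grids are not given here):

```python
def rusle_soil_loss(R, K, LS, C, P=1.0):
    """Annual soil loss A (Mg ha^-1 yr^-1) as the product of the RUSLE factors."""
    return R * K * LS * C * P

# Illustrative values from within the reported ranges, not the study's data.
A = rusle_soil_loss(R=400.0, K=0.05, LS=10.0, C=0.5)
```

In a GIS workflow the same multiplication is applied raster-wise, and cells with A above the 8 Mg ha⁻¹ year⁻¹ tolerance are flagged.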
Procedia PDF Downloads 434
6621 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
SPICE-based simulators are robust and widely used for the simulation of electronic circuits: their algorithms support linear and non-linear lumped components, and they can handle an expressive number of encapsulated elements. Despite the great potential of SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is chosen according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that the stability of the complete FDTD procedure depends on factors other than the stability of the Yee algorithm alone, because an FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation over ultra-wide frequency ranges.
The models of the resistive source, resistor, capacitor, inductor, and diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. The aim is to find an ideal cell size so that the FDTD analysis agrees as closely as possible with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the models are implemented computationally in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The model is validated by comparing the results obtained with the FDTD method, through the electric field values and the currents in the components, against analytical results using circuit parameters.
Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis
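The Courant criterion mentioned above ties the maximum stable time step to the cell dimensions and the wave propagation velocity, dt ≤ 1/(v·√(1/Δx² + 1/Δy² + 1/Δz²)). A minimal sketch for a 3-D Yee grid (the 1 mm cell size below is illustrative, not the paper's discretization):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def courant_max_dt(dx, dy, dz, v=C0):
    """Maximum stable time step for the 3-D Yee leapfrog scheme:
    dt <= 1 / (v * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    return 1.0 / (v * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# Uniform 1 mm cells (illustrative only):
dt_max = courant_max_dt(1e-3, 1e-3, 1e-3)
```

Shrinking the cells to better resolve a lumped component therefore forces a proportionally smaller time step, which is exactly the trade-off the parametric analysis explores.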
Procedia PDF Downloads 153
6620 Land Subsidence Monitoring in Semarang and Demak Coastal Area Using Persistent Scatterer Interferometric Synthetic Aperture Radar
Authors: Reyhan Azeriansyah, Yudo Prasetyo, Bambang Darmo Yuwono
Abstract:
Land subsidence is one of the problems affecting the coastal areas of Java Island, among them the Semarang and Demak areas in the northern region of Central Java. The impact of sea erosion, rising sea levels, a vulnerable soil structure, and economic development activities means that land subsidence occurs frequently in both areas. To determine how much land subsidence has occurred in the region, monitoring must be carried out, for example with remote sensing methods such as PS-InSAR. PS-InSAR is a remote sensing technique that extends the DInSAR method to monitor ground surface movement, allowing users to perform regular measurements and monitoring of fixed objects on the surface of the earth. PS-InSAR processing is done using the Stanford Method for Persistent Scatterers (StaMPS). Like other recent analysis techniques, Persistent Scatterer (PS) InSAR addresses both the decorrelation and atmospheric problems of conventional InSAR. StaMPS identifies and extracts the deformation signal even in the absence of bright scatterers. StaMPS is also applicable in areas undergoing non-steady deformation, with no prior knowledge of the variations in deformation rate. In addition, the method can cover a large area, so the subsidence estimates can span all coastal areas of Semarang and Demak. From the PS-InSAR method, the yearly subsidence affecting the Semarang and Demak region can be determined. The PS-InSAR results are also compared with GPS monitoring data to determine the difference in subsidence estimated by the two methods. By utilizing remote sensing methods such as PS-InSAR, it is hoped that land subsidence can be monitored efficiently, assisting other survey methods such as GPS surveys, and that the results can be used in policy determination for the affected coastal areas of Semarang and Demak.
Keywords: coastal area, Demak, land subsidence, PS-InSAR, Semarang, StaMPS
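At its core, PS-InSAR converts unwrapped interferometric phase change at each persistent scatterer into line-of-sight (LOS) displacement, d = −λ·Δφ/(4π). A hedged sketch of that conversion (the ~5.6 cm C-band wavelength and the sign convention are assumptions; the abstract does not state which sensor was used):

```python
import math

def los_displacement(delta_phase_rad, wavelength_m):
    """Line-of-sight displacement from unwrapped interferometric phase.
    Sign convention (an assumption): positive phase means motion away
    from the sensor, so subsidence comes out negative."""
    return -wavelength_m / (4.0 * math.pi) * delta_phase_rad

# One full fringe (2*pi) at an assumed C-band wavelength of 5.6 cm:
d = los_displacement(2.0 * math.pi, 0.056)
```

StaMPS performs this step per scatterer after estimating and removing the atmospheric and orbital phase contributions; the result is then projected to vertical for comparison with GPS.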
Procedia PDF Downloads 266
6619 Using Contingency Valuation Approaches to Assess Community Benefits through the Use of Great Zimbabwe World Heritage Site as a Tourism Attraction
Authors: Nyasha Agnes Gurira, Patrick Ngulube
Abstract:
Heritage as an asset can be used to achieve cultural and socio-economic development through its careful use as a tourist attraction. Cultural heritage sites, especially those listed as World Heritage Sites, generate substantial revenue through their use as tourist attractions. According to Article 5(a) of the World Heritage Convention, World Heritage Sites (WHS) must serve a function in the life of their communities. This is further stressed by the International Council on Monuments and Sites (ICOMOS) charter on cultural heritage tourism, which recognizes the positive effects of tourism on cultural heritage and underlines that domestic and international tourism is among the foremost vehicles for cultural exchange; conservation should thus provide responsible and well-managed opportunities for local communities. The inclusion of communities in the World Heritage agenda identifies them as the owners of the heritage and partners in the management planning process. This reiterates the need to empower communities and enable them to participate in decisions that relate to the use of their heritage, moving away from the ideal of viewing communities as mere beneficiaries of the heritage resource. It recognizes community ownership rights to cultural heritage, an element enshrined in Zimbabwe's national constitution. Using contingent valuation approaches, by assessing the willingness to pay of visitors at the site, the research determined the tourism use value of Great Zimbabwe WHS. It assessed the extent to which the communities at Great Zimbabwe WHS have been developed through the tourism use of the site. Findings show that under the current management mechanism, communities are regarded as stakeholders in the management of the WHS, but their ownership and property rights are not fully recognized; they receive only indirect benefits from the tourism use of the WHS.
This paper calls for a shift in management approach where community ownership rights are fully recognized and more inclusive approaches are adopted to ensure that the goal of sustainable development is achieved. Pro-poor benefits of tourism are key to enhancing the livelihoods of communities and can only be achieved if their rights are recognized and respected.
Keywords: communities, cultural heritage tourism, development, property ownership rights, pro-poor benefits, sustainability, world heritage site
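The contingent valuation step described above rests on scaling the mean stated willingness to pay (WTP) up to the visitor population to obtain an aggregate tourism use value. A minimal sketch with entirely hypothetical numbers (the abstract reports no WTP figures or visitor counts):

```python
def aggregate_use_value(wtp_responses, annual_visitors):
    """Scale the mean stated willingness to pay to the annual visitor
    population; returns (mean WTP, aggregate annual use value)."""
    mean_wtp = sum(wtp_responses) / len(wtp_responses)
    return mean_wtp, mean_wtp * annual_visitors

# Hypothetical survey responses (currency units per visit) and visitor count.
mean_wtp, total_value = aggregate_use_value([5, 10, 8, 12, 5], 50_000)
```

The aggregate figure is what can then be weighed against the indirect benefits communities currently receive.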
Procedia PDF Downloads 258
6618 Heritage Landmark of Penang: Segara Ninda, a Mix of Culture
Authors: Normah Sulaiman, Yong Zhi Kang, Nor Hayati Hussain, Abdul Rehman Khalid
Abstract:
Segara Ninda was owned by Din Ku Meh, the governor of the province of Satul, a Malay man who played a major role liaising with Thailand. This mansion is part of the legacy he left behind, among other properties in George Town, Penang. The island's strategic geographical location placed it on important trade routes between Europe, the Middle East, India, and China in the past. For this reason, various architectural styles were introduced in Penang; the Late Straits Eclectic style is one form of colonial architecture widely spread among the vernacular shophouses of George Town. Segara Ninda is located among a mixture of nouveau-riche, historical, and heritage sites on one of the most important streets, Penang Road, which dates back to the late 18th century. This paper examines the Straits Eclectic style that Segara Ninda embodies. Acknowledging the mixture of colonial architecture in George Town, we argue that the mansion faces challenging conservation issues. This is reflected by analysing the spatial layout, the quality of visual elements, and the activity within the mansion through interviews with its occupants. The focus is on understanding building form, features, and functions, with respect to the architectural spaces and their use. The methodology applied promotes an understanding of the mix of cultures that the mansion holds through documentation, observation, and measuring exercises, offering a positional interpretation of that cultural mix. This conservation effort will further contribute exposure to the public and recognition in society, as its character is now lacking in the existing built environment.
Keywords: eclectic, heritage, spatial organization, culture
Procedia PDF Downloads 180
6617 Modelling Spatial Dynamics of Terrorism
Authors: André Python
Abstract:
To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. To increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models able to capture the complex spatial dynamics of terrorism at a local scale. Despite empirical research at country level that has confirmed theories explaining the diffusion of terrorism across space and time, scholars have failed to assess diffusion theories at a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocated data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised into Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through integrated nested Laplace approximation, a recent fitting approach that computes fast and accurate estimates of posterior marginals.
Hence, for each location in the world, the model provides the probability of encountering a lethal terrorist attack together with measures of volatility, which indicate the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic, and demographic variables, the results of the model suggest that the diffusion of lethal terrorism is jointly driven by contagious and non-contagious factors operating at a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling
Procedia PDF Downloads 351
6616 Design and Development of a Platform for Analyzing Spatio-Temporal Data from Wireless Sensor Networks
Authors: Walid Fantazi
Abstract:
The development of sensor technology (microelectromechanical systems (MEMS), wireless communications, embedded systems, distributed processing, and wireless sensor applications) has contributed to a broad range of WSN applications capable of collecting large amounts of spatiotemporal data in real time. These systems require real-time data processing to manage storage and query the data as they arrive. To cover these needs, this paper proposes a snapshot spatiotemporal data model based on object-oriented concepts. The model reduces storage requirements and data redundancy, which makes it easier to execute spatiotemporal queries and shortens analysis time. Further, to ensure the robustness of the system and eliminate congestion in main memory, a spatiotemporal in-RAM indexing technique called Captree* is proposed. On top of this, an RIA (Rich Internet Application)-based SOA application architecture is offered, which allows remote monitoring and control.
Keywords: WSN, indexing data, SOA, RIA, geographic information system
Procedia PDF Downloads 253
6615 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical-morphological features of built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperature (LST). The information provided by ground weather stations is key for assessing local air temperature; however, their spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such approaches may not effectively reflect the real climatic conditions at an interpolated point, and quantifying local UHI for extensive areas based on weather station observations alone is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST, with data from Landsat, ASTER, and MODIS used extensively. Indeed, LST has an indirect but significant influence on air temperature. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach for NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps.
First, a GWR model estimates NSAT at low resolution by combining air temperature from the discrete observations of weather stations (dependent variable) with LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite at 1 km spatial resolution are employed, with two time periods considered according to the satellite revisit schedule: 10:30 am and 9:30 pm. Afterward, the results are downscaled to 30 m spatial resolution by fitting a GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the albedo derived from Landsat multispectral imagery and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 m. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 m), have been validated by cross-validation using indicators such as R², Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). All the indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, has been employed to test the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
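The RMSE and MAE used to validate both GWR models are computed directly from paired observed and predicted temperatures. A minimal sketch with hypothetical station values (not the study's data):

```python
import math

def rmse(obs, pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error between observed and predicted values."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical station observations vs. model estimates (degrees C).
obs = [24.1, 25.3, 23.8, 26.0]
pred = [24.6, 24.9, 24.2, 25.5]
```

Because RMSE squares the errors, it penalises large misses more than MAE; comparing the two indicates whether a model's errors are evenly spread or dominated by a few outliers.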
Procedia PDF Downloads 194
6614 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems
Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra
Abstract:
Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing, such that minor differences in defining the onset and end of the sprint can yield different FVP metric outcomes. Furthermore, team sports require rapid analysis and feedback of results from multiple athletes, so developing standardized and automated methods to improve the speed, efficiency, and reliability of this process is warranted. Thus, the purpose of this study was to compare different methods of sprint end detection for the development of FVP profiles from 10 Hz GPS/GNSS data, using goodness-of-fit and inter-trial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2x40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10 Hz Catapult unit (Vector S7, Catapult Innovations) inserted in a vest pouch between the scapulae. All data were analyzed following common procedures. Variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration constant τ, in addition to relative horizontal force (F₀), velocity at zero force (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time after peak velocity drops by 0.4 m/s; 3. the time after peak velocity drops by 0.6 m/s; and 4. the time when the distance integrated from the GPS/GNSS signal reaches 40 m.
Goodness-of-fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed using intraclass correlation coefficients (ICC). For goodness of fit, the end detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the 0.4 and 0.6 m/s velocity-decay methods, while the 40 m end had the highest RSS values. For inter-trial reliability, the end-of-sprint detection techniques defined as the time at (method 1) or shortly after (methods 2 and 3) when MSS was achieved had very large to near-perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should implement end-of-sprint detection either when peak velocity is reached or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics. However, more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into usual training, which shows promise for sprint monitoring in the field with this technology.
Keywords: automated, biomechanics, team-sports, sprint
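The MSS and τ parameters above come from fitting the mono-exponential sprint model v(t) = MSS·(1 − e^(−t/τ)) to the velocity trace, with RSS measuring the fit. A hedged sketch of the model, the RSS, and the standard per-kg metric relations (air resistance is neglected here, and this is a generic formulation, not necessarily the authors' exact pipeline):

```python
import math

def sprint_velocity(t, mss, tau):
    """Mono-exponential sprint model: v(t) = MSS * (1 - exp(-t / tau))."""
    return mss * (1.0 - math.exp(-t / tau))

def model_rss(times, velocities, mss, tau):
    """Residual sum of squares between observed and modelled velocities."""
    return sum((v - sprint_velocity(t, mss, tau)) ** 2
               for t, v in zip(times, velocities))

def fvp_metrics(mss, tau):
    """Relative (per-kg) FVP metrics from MSS and tau, neglecting air drag."""
    f0 = mss / tau        # relative horizontal force at t = 0, N/kg
    v0 = mss              # theoretical maximal velocity, m/s
    pmax = f0 * v0 / 4.0  # apex of the linear force-velocity relation, W/kg
    return f0, v0, pmax
```

Ending the window at peak velocity matters because points sampled after MSS no longer follow the mono-exponential rise, inflating RSS without adding information about F₀ or τ.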
Procedia PDF Downloads 119
6613 Advantages of Computer Navigation in Knee Arthroplasty
Authors: Mohammad Ali Al Qatawneh, Bespalchuk Pavel Ivanovich
Abstract:
Computer navigation has been introduced in total knee arthroplasty to improve the accuracy of the procedure. It improves the accuracy of bone resection in the coronal and sagittal planes, normalizes the rotational alignment of the femoral component, and allows full assessment and balancing of soft tissue deformation in the coronal plane. This work is devoted to the advantages of using computer navigation in total knee arthroplasty in 62 patients (11 men and 51 women) suffering from gonarthrosis, aged 51 to 83 years, operated on with a computer navigation system and followed up for up to 3 years after surgery. During the examination, the deformity variant was determined and radiometric parameters of the knee joints were measured using the Knee Society Score (KSS), Functional Knee Society Score (FKSS), and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scales. Functional stress tests were also performed to assess the stability of the knee joint in the frontal plane, along with functional indicators of the range of motion. After surgery, improvement was observed on all scales: WOMAC values decreased 5.90-fold, with a median value of 11 points (p < 0.001); KSS increased 3.91-fold, reaching 86 points (p < 0.001); and FKSS increased 2.08-fold, reaching 94 points (p < 0.001). After TKA, deviation of the limb axis of more than 3 degrees was observed in 4 patients (6.5%), and frontal instability of the knee joint in just 2 cases (3.2%). The incidence of sagittal instability of the knee joint after the operation was 9.6%. The range of motion increased 1.25-fold, averaging 125 degrees (p < 0.001).
Computer navigation increases the accuracy of the spatial orientation of the endoprosthesis components in all planes, keeps the variability of the limb axis within ±3°, and helps achieve the best results of surgical interventions. It can be used for most routine cases, yielding excellent and good outcomes in 100% of cases according to the WOMAC scale. For diaphyseal deformities of the femur and/or tibia, as well as for obstruction of their medullary canals, computer navigation is the method of choice. Its use prevents flexion contracture and hyperextension of the knee joint during the distal femoral cut. The navigation system achieves high-precision implantation of the endoprosthesis; in addition, it achieves an adequate balance of the ligaments, which contributes to the stability of the joint, reduces pain, and allows a good functional result of the treatment.
Keywords: knee joint, arthroplasty, computer navigation, advantages
Procedia PDF Downloads 90
6612 Design of Ka-Band Satellite Links in Indonesia
Authors: Zulfajri Basri Hasanuddin
Abstract:
There is an increasing demand for broadband services in Indonesia. One answer is the use of the Ka-band, which offers advantages such as wider bandwidth, higher transmission speeds, and smaller ground antennas. However, rain attenuation is the primary factor in signal degradation at the Ka-band. In this paper, the author determines whether Ka-band frequencies can be implemented in Indonesia, a country with high rainfall intensity.
Keywords: Ka-band, link budget, link availability, BER, Eb/No, C/N
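A Ka-band link budget of the kind the keywords list (Eb/No, link availability) combines EIRP, receiver G/T, free-space path loss, rain attenuation, and the data rate. A minimal sketch with illustrative GEO downlink numbers (all values are assumptions, not the paper's design):

```python
import math

C_LIGHT = 299_792_458.0
BOLTZMANN_DB = -228.6  # 10*log10(Boltzmann constant), dBW/(K*Hz)

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C_LIGHT)

def eb_no_db(eirp_dbw, g_over_t_db, distance_m, freq_hz, bitrate_bps,
             rain_atten_db=0.0):
    """Downlink Eb/No in dB; rain attenuation is the key Ka-band margin term."""
    return (eirp_dbw + g_over_t_db - fspl_db(distance_m, freq_hz)
            - rain_atten_db - BOLTZMANN_DB - 10.0 * math.log10(bitrate_bps))

# Assumed example: 20 GHz downlink, ~38,000 km slant range, 55 dBW EIRP,
# 15 dB/K terminal G/T, 10 Mbps, 10 dB rain fade.
rainy_eb_no = eb_no_db(55.0, 15.0, 3.8e7, 20e9, 1e7, rain_atten_db=10.0)
```

Link availability in a high-rainfall region like Indonesia is then set by how many dB of rain fade the Eb/No margin can absorb before the required BER is lost.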
Procedia PDF Downloads 422
6611 Studies of Carbohydrate, Antioxidant, Nutrient and Genomic DNA Characterization of Fresh Olive Treated with Alkaline and Acidic Solvent: An Innovation
Authors: A. B. M. S. Hossain, A. Abdelgadir, N. A. Ibrahim
Abstract:
Freshly ripened olives cannot be consumed immediately after harvest because of their excessive bitterness, caused by polyphenols that act as antioxidants; processing is needed to make the fruit edible. Here, a laboratory processing technique was used to make the fruit edible, employing acid (vinegar, 5% acetic acid) and an alkaline solvent (NaOH). Based on the treatments and their outcomes, innovative data have been obtained. The experiment investigated the biochemical content, nutritional characteristics, and DNA of olive fruit treated with alkaline and acidic solvents. The treatments were: control (no water), water control, 10% sodium hydroxide (NaOH), vinegar (5% acetic acid), vinegar + NaOH, and vinegar + NaOH + hot water. Results showed that inverted sugar and glucose contents were higher in the vinegar- and NaOH-treated olives than in the other treatments. Fructose content was highest in the vinegar + NaOH-treated fruit. The nutrient contents NO₃, K, Ca, and Na were higher in treated fruit than in the control, and K was the most abundant of the nutrients measured across all treatments. The most acidic condition (lowest pH, sourest taste) was found in treated fruit. DNA yield was higher in the water control than in acid- and alkaline-treated olives, and the DNA band was wider in the water-control olives than in those treated with NaOH, vinegar, vinegar + NaOH, or vinegar + NaOH + hot water. Finally, the results suggest that vinegar + NaOH treatment was the best option for homemade processing of fresh olives after harvest for edible purposes.
Keywords: olive, vinegar, sugars, DNA band, bioprocess biotechnology
Procedia PDF Downloads 185
6610 Mitigating Urban Flooding through Spatial Planning Interventions: A Case of Bhopal City
Authors: Rama Umesh Pandey, Jyoti Yadav
Abstract:
Flooding is one of the water-related disasters that causes extensive destruction in urban areas. Developing countries are at a higher risk of such damage, and more than half of global flooding events take place in Asian countries, including India. Urban flooding is more a human-induced disaster than a natural one: it is highly influenced by anthropogenic factors, besides meteorological and hydrological causes. Unplanned urbanization and poor management of cities amplify the impact manifold and cause huge losses of life and property in urban areas. It is an irony that urban areas face water scarcity in summer and flooding during the monsoon. This paper attempts to highlight the factors responsible for flooding in a city, especially from an urban planning perspective, and to suggest mitigation measures through spatial planning interventions. The analysis was done in two stages: first, assessing the impacts of previous flooding events, and second, analyzing the factors responsible for flooding at the macro and micro levels. Bhopal, a city in central India with a population of nearly two million, was selected for the study. The city experiences flooding during heavy monsoon rains. The factors responsible for urban flooding were identified through a literature review as well as case studies from cities across the world and India, and were analyzed for both macro- and micro-level influences. For the macro level, the previous flooding events that caused major destruction were analyzed and the most affected areas of Bhopal city were identified. Since the identified area falls within the catchment of a drain, the catchment area was delineated for the study. The factors analyzed were: the rainfall pattern, to calculate the return period using Weibull's formula; imperviousness, through mapping in ArcGIS; and runoff discharge, using the Rational method.
The catchment was divided into micro-watersheds, and the micro-watershed with the largest impervious surface was selected to analyze the coverage and performance of physical infrastructure: storm water management, the sewerage system, and solid waste management practices. The area was further analyzed to assess the extent of violation of building byelaws and development control regulations, and encroachment onto the natural water streams. The analysis revealed the main issues to be: lack of a sewerage system; inadequate storm water drains; inefficient solid waste management in the study area; violation of building byelaws by extending structures either onto the drain or onto the road; and encroachment by slum dwellers along or onto the drain, reducing its width and capacity. Other factors include faulty culvert design resulting in a backwater effect, and roads built at a higher level than the plinths of houses, which causes submersion of their ground floors. The study recommends spatial planning interventions for mitigating urban flooding and strategies for managing excess rainwater during the monsoon season. Recommendations are also made for efficient land use management to mitigate waterlogging in areas vulnerable to flooding.
Keywords: mitigating strategies, spatial planning interventions, urban flooding, violation of development control regulations
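The two macro-level computations named above can be sketched briefly: the return period via Weibull's plotting-position formula, T = (n + 1)/m, and the peak runoff via the Rational method, Q = C·i·A/360 (i in mm/h, A in ha, Q in m³/s). All numbers below are illustrative, not the Bhopal data:

```python
def weibull_return_periods(annual_maxima):
    """Weibull plotting position: T = (n + 1) / m, where m is the rank of
    the event when annual maxima are sorted largest first."""
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(x, (n + 1) / (m + 1)) for m, x in enumerate(ranked)]

def rational_peak_discharge(c, intensity_mm_per_h, area_ha):
    """Rational method, Q = C * i * A / 360: i in mm/h, A in ha, Q in m^3/s."""
    return c * intensity_mm_per_h * area_ha / 360.0

# Illustrative annual maximum rainfalls (mm) and a largely paved catchment.
periods = weibull_return_periods([120, 80, 100])
q_peak = rational_peak_discharge(0.8, 90.0, 100.0)
```

Comparing q_peak with the carrying capacity of the existing storm water drains is what exposes the undersized sections flagged in the study.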
6609 Internal Combustion Engine Fuel Composition Detection by Analysing Vibration Signals Using ANFIS Network
Authors: M. N. Khajavi, S. Nasiri, E. Farokhi, M. R. Bavir
Abstract:
Alcohol fuels are renewable, have low pollution, and have high octane numbers; therefore, they are important as fuels in internal combustion engines. Detecting the percentage of these alcohol fuels blended with gasoline is a complicated, time-consuming, and expensive process. Nowadays, these analyses are done in equipped laboratories based on international standards. The aim of this research is to determine the fuel blend percentage from vibration analysis of engine block signals; by doing so, considerable savings in time and cost can be achieved. Five fuels were prepared: pure gasoline (G) as the base fuel and combinations of this fuel with different percentages of ethanol and methanol. For example, a volumetric combination of pure gasoline with 10 percent ethanol is called E10. By this convention, M10 (10% methanol plus 90% pure gasoline), E30 (30% ethanol plus 70% pure gasoline), and M30 (30% methanol plus 70% pure gasoline) were prepared. To simulate real working conditions, the vehicle was mounted on a chassis dynamometer and run at 1900 rpm under a 30 kW load. To measure the engine block vibration, a three-axis accelerometer was mounted between cylinders 2 and 3. After acquisition of the vibration signal, eight time-domain features of these signals were used as inputs to an Adaptive Neuro-Fuzzy Inference System (ANFIS). The designed ANFIS was trained to classify these five fuels. The results show suitable classification ability of the designed ANFIS network, with 96.3 percent correct classification.
Keywords: internal combustion engine, vibration signal, fuel composition, classification, ANFIS
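The feature-extraction step described above can be illustrated with a minimal sketch. The abstract does not list which eight time-domain features were used, so the set below (mean, standard deviation, RMS, peak, peak-to-peak, crest factor, skewness, kurtosis) is an assumption, and the ANFIS classifier itself is left out.

```python
import numpy as np

def time_domain_features(signal):
    """Eight common time-domain features of a vibration signal.
    Which eight the paper used is not stated; this set is an assumption."""
    x = np.asarray(signal, dtype=float)
    mean = x.mean()
    std = x.std()
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    p2p = x.max() - x.min()
    crest = peak / rms
    skew = np.mean(((x - mean) / std) ** 3)
    kurt = np.mean(((x - mean) / std) ** 4)
    return np.array([mean, std, rms, peak, p2p, crest, skew, kurt])

# Synthetic trace at the crankshaft frequency of 1900 rpm (~31.67 Hz),
# illustrative only, sampled at 2 kHz for one second:
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
trace = np.sin(2 * np.pi * 31.67 * t)
feats = time_domain_features(trace)
```

In the study, a vector like `feats` (one per accelerometer axis and operating point) would form the input to the fuzzy inference system.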
6608 Spatial Variability of Heavy Metals in Sediments of Two Streams of the Olifants River System, South Africa
Authors: Abraham Addo-Bediako, Sophy Nukeri, Tebatso Mmako
Abstract:
Many freshwater ecosystems have been subjected to prolonged and cumulative pollution as a result of human activities such as mining, agriculture, industry, and human settlements in their catchments. The objective of this study was to investigate the spatial variability of heavy metal pollution of sediments, and the possible sources of pollutants, in two streams of the Olifants River System, South Africa. Stream sediments were collected and analysed for arsenic (As), cadmium (Cd), chromium (Cr), copper (Cu), lead (Pb), nickel (Ni), and zinc (Zn) concentrations using inductively coupled plasma mass spectrometry (ICP-MS). In both rivers, As, Cd, Cu, Pb, and Zn fell within the concentration ranges recommended by CCME and ANZECC, while the concentrations of Cr and Ni exceeded the standards; the results indicated that Cr and Ni in the sediments originated from human activities and not from the natural geological background. The index of geo-accumulation (Igeo) was used to assess the degree of pollution. The geo-accumulation index evaluation showed that Cr and Ni were present in the sediments of the rivers at moderately to extremely polluted levels, while As, Cd, Cu, Pb, and Zn existed at unpolluted to moderately polluted levels. Generally, heavy metal concentrations increased along the gradient of the rivers. The high concentrations of Cr and Ni in both rivers are of great concern, as these two rivers were previously classified as supplying the Olifants River with water of good quality. There is, therefore, a critical need to monitor heavy metal concentrations and distributions, as well as a comprehensive plan to prevent health risks, especially for those communities still reliant on untreated water from the rivers, as sediment pollution may pose a risk of secondary water pollution under sediment disturbance and/or changes in the geochemistry of sediments.
Keywords: geo-accumulation index, heavy metals, sediment pollution, water quality
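The geo-accumulation index used above has a standard closed form, Igeo = log2(Cn / (1.5 Bn)), with Mueller's usual seven-class pollution scale. A minimal sketch follows; the concentration and background values are illustrative, not measurements from the study.

```python
import math

def geoaccumulation_index(c_measured, b_background):
    """Igeo = log2(Cn / (1.5 * Bn)); the factor 1.5 compensates for
    natural variation in background values."""
    return math.log2(c_measured / (1.5 * b_background))

def igeo_class(igeo):
    """Mueller's seven-class scale, from class 0 (unpolluted, Igeo <= 0)
    to class 6 (extremely polluted, Igeo > 5)."""
    for cls, upper in enumerate([0, 1, 2, 3, 4, 5]):
        if igeo <= upper:
            return cls
    return 6

# Illustrative Cr values (mg/kg), not measurements from the study:
igeo_cr = geoaccumulation_index(c_measured=540.0, b_background=90.0)
```

Here a measured concentration four times 1.5x the background gives Igeo = 2, the upper bound of the "moderately polluted" class.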
6607 Application of Regularized Spatio-Temporal Models to the Analysis of Remote Sensing Data
Authors: Salihah Alghamdi, Surajit Ray
Abstract:
Space-time data can be observed over irregularly shaped manifolds, which might have complex boundaries or interior gaps. Most existing methods do not consider the shape of the data, and as a result, it is difficult to model irregularly shaped data while accommodating the complex domain. We used a method that can deal with space-time data distributed over non-planar regions. The method is based on partial differential equations and finite element analysis. The model can be estimated using a penalized least squares approach with a regularization term that controls over-fitting. The model is regularized using two roughness penalties, which treat the spatial and temporal regularities separately. The integrated square of the second derivative of the basis function is used as the temporal penalty, while the spatial penalty consists of the integrated square of the Laplace operator, integrated exclusively over the domain of interest, which is determined using the finite element technique. In this paper, we applied a spatio-temporal regression model with partial differential equation regularization (ST-PDE) to analyze remote sensing data measuring the greenness of vegetation, as captured by the enhanced vegetation index (EVI). The EVI data consist of measurements taking values between -1 and 1, reflecting the level of greenness of a region over a period of time. We applied the ST-PDE approach to an irregularly shaped region of the EVI data. The approach efficiently accommodates irregularly shaped regions, taking the complex boundaries into account rather than smoothing across them. Furthermore, the approach succeeds in capturing the temporal variation in the data.
Keywords: irregularly shaped domain, partial differential equations, finite element analysis, complex boundary
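The penalized least squares idea behind the temporal roughness penalty can be illustrated in one dimension: a discrete second-difference penalty stands in for the integrated squared second derivative. This is a toy analogue under those assumptions, not the paper's finite element ST-PDE estimator.

```python
import numpy as np

def penalized_smooth(y, lam):
    """Discrete 1-D analogue of the temporal roughness penalty:
    minimize ||y - f||^2 + lam * ||D2 f||^2, where D2 takes second
    differences. Closed form: f = (I + lam * D2'D2)^-1 y."""
    n = len(y)
    d2 = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second-difference matrix
    a = np.eye(n) + lam * d2.T @ d2
    return np.linalg.solve(a, y)

# Smooth a noisy sine: illustrative data, not EVI measurements.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50)
f_hat = penalized_smooth(y, lam=10.0)
```

Larger `lam` trades fidelity to the observations for smoothness; the paper's estimator makes the same trade separately in space (Laplacian penalty over the finite element domain) and time.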
6606 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques
Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang
Abstract:
Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-weighted imaging (T1WI) is often used to identify fat, calcification, and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these scanning sequences for fetal imaging have not yet been clearly demonstrated. This study aimed to compare three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations; the image qualities and influencing factors among the three techniques were explored. A 1.5 T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited for fetal MRI examinations with SSFSE, FIESTA, and FSPGR scanning. Multi-oriented and multi-slice images were acquired and then interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA provide good image quality among the three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it suffers severely from maternal and fetal movements. The scan times for the three sequences were 25 s (T2W-SSFSE), 20 s (FIESTA), and 18 s (FSPGR).
In conclusion, all three rapid MR scanning sequences can produce images with high contrast and high spatial resolution. The scan time can be shortened by incorporating parallel imaging techniques, so that motion artifacts caused by fetal movements can be reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis.
Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE
6605 Spatio-Temporal Analysis of Drought in Cholistan Region, Pakistan: An Application of Standardized Precipitation Index
Authors: Qurratulain Safdar
Abstract:
Drought is a temporary aberration, in contrast to aridity, which is a permanent feature of climate. It occurs in virtually all types of climatic regions, from high- to low-rainfall areas. Due to the wide latitudinal extent of Pakistan, there is seasonal and annual variability in rainfall; the south-central part of the country is arid and hyper-arid. This study focuses on the spatio-temporal analysis of droughts in the arid and hyper-arid Cholistan region using the standardized precipitation index (SPI) approach, assessing the extent of drought recurrence and the region's temporal vulnerability to drought. Initially, the paper describes the geographic setup of the study area along with a brief description of the drought conditions that prevail in Pakistan. The study also provides a scientific foundation, with a literature review and theoretical framework in line with the selected parameters and indicators. Data were collected from both primary and secondary sources; rainfall and temperature data were obtained from the Pakistan Meteorological Department. By applying a geostatistical approach, the SPI was calculated for the study region, and the spatio-temporal variability of drought and its severity were explored. The result was an in-depth spatial analysis of drought conditions in the Cholistan area. In parallel, drought-prone areas with seasonal variation were identified using Kriging spatial interpolation techniques in a GIS environment. The study revealed temporal variation in drought occurrence both in the time series and in the SPI values. Finally, a strategic plan was suggested to minimize the impacts of drought.
Keywords: Cholistan desert, climate anomalies, meteorological droughts, standardized precipitation index
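A minimal sketch of the SPI computation described above: fit a gamma distribution to the precipitation series and map each cumulative probability to a standard normal quantile, with the usual mixed-distribution handling of zero-rainfall months. The rainfall series below is illustrative, not a Cholistan record.

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized Precipitation Index: fit a gamma distribution to the
    positive precipitation values and map cumulative probabilities to
    standard normal quantiles. Zero-rainfall months get the probability
    mass q of the mixed distribution."""
    precip = np.asarray(precip, dtype=float)
    nonzero = precip[precip > 0]
    q = 1.0 - nonzero.size / precip.size          # probability of zero rainfall
    shape, loc, scale = stats.gamma.fit(nonzero, floc=0)
    cdf = q + (1.0 - q) * stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

# Illustrative monthly rainfall series (mm):
rain = np.array([5.0, 0.0, 12.0, 30.0, 2.0, 8.0, 45.0, 0.0, 15.0, 22.0, 3.0, 60.0])
spi_values = spi(rain)
```

Negative SPI values mark drier-than-normal months; in the study, mapping SPI over the rain-gauge network and interpolating (e.g., by Kriging) gives the spatial drought picture.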
6604 Special Education in the South African Context: A Bio-Ecological Perspective
Authors: Suegnet Smit
Abstract:
Prior to 1994, special education in South Africa was marginalized and fragmented. Moving away from a Medical model approach to special education, the Government after 1994 promoted an Inclusive approach as a means to transform education in general, and special education in particular. This transformation, however, is moving at too slow a pace for learners with barriers to learning and development to benefit fully from their education. The goal of the Department of Basic Education is to minimize, remove, and prevent barriers to learning and development in the educational setting by attending to the unique needs of the individual learner. However, the implementation of Inclusive education is problematic, and general education remains poor. This paper highlights the historical development of special education in South Africa, underpinned by a bio-ecological perspective. Problematic areas within the systemic levels of the education system are highlighted in order to indicate how the interactive processes within those levels affect special needs learners on the personal dimension of the bio-ecological approach. As part of the methodology, a thorough document analysis was conducted of a large body of research literature, including academic articles, reports, policies, and policy reviews. Through qualitative analysis, data were grouped and categorized according to the bio-ecological model systems, revealing various successes and challenges within the education system. The challenges inhibit change, growth, and development for children who experience barriers to learning. From these findings, it is established that special education in South Africa has been, and still is, on a bumpy road. Sadly, the transformation envisaged by implementing Inclusive education is still a dream, not yet fully realized.
Special education seems to be stuck at what is, and the education system has not moved forward significantly enough to reach what special education should and could be. The gap between the vision of Inclusive quality education for all and the current reality is still too wide. Problems encountered at all levels of the education system cause a funnel effect downward to learners with special educational needs, with negative effects on the development of these learners.
Keywords: bio-ecological perspective, education systems, inclusive education, special education
6603 Design and Synthesis of Fully Benzoxazine-Based Porous Organic Polymer Through Sonogashira Coupling Reaction for CO₂ Capture and Energy Storage Application
Authors: Mohsin Ejaz, Shiao-Wei Kuo
Abstract:
The growing production and exploitation of fossil fuels have confronted human society with serious environmental issues. As a result, it is critical to design efficient and eco-friendly energy production and storage techniques. Porous organic polymers (POPs) are multi-dimensional porous network materials developed through the formation of covalent bonds between organic building blocks with distinct geometries and topologies. POPs have tunable porosities and high surface areas, making them good candidates for effective electrode materials in energy storage applications. Herein, we prepared a fully benzoxazine-based porous organic polymer (TPA–DHTP–BZ POP) through Sonogashira coupling of dihydroxyterephthalaldehyde (DHTP)- and triphenylamine (TPA)-containing benzoxazine (BZ) monomers. First, both BZ monomers (TPA-BZ-Br and DHTP-BZ-Ea) were synthesized in three steps: Schiff base formation, reduction, and Mannich condensation. The TPA–DHTP–BZ POP was then prepared through the Sonogashira coupling reaction of the brominated monomer (TPA-BZ-Br) and the ethynyl monomer (DHTP-BZ-Ea). Fourier transform infrared (FTIR) and solid-state nuclear magnetic resonance (NMR) spectroscopy confirmed the successful synthesis of the monomers as well as the POP. The porosity of the TPA–DHTP–BZ POP was investigated by N₂ adsorption, which showed a Brunauer–Emmett–Teller (BET) surface area of 196 m² g−¹, a pore size of 2.13 nm, and a pore volume of 0.54 cm³ g−¹. The TPA–DHTP–BZ POP underwent thermal ring-opening polymerization, resulting in a poly(TPA–DHTP–BZ) POP with strong inter- and intramolecular hydrogen bonds formed by phenolic groups and Mannich bridges, thereby enhancing CO₂ capture and supercapacitive performance. The poly(TPA–DHTP–BZ) POP demonstrated a remarkable CO₂ capture of 3.28 mmol g−¹ and a specific capacitance of 67 F g−¹ at 0.5 A g−¹.
Thus, poly(TPA–DHTP–BZ) POP could potentially be used for energy storage and CO₂ capture applications.
Keywords: porous organic polymer, benzoxazine, Sonogashira coupling, CO₂, supercapacitor
6602 Computational Fluid Dynamics Modeling of Flow Properties Fluctuations in Slug-Churn Flow through Pipe Elbow
Authors: Nkemjika Chinenye-Kanu, Mamdud Hossain, Ghazi Droubi
Abstract:
Prediction of multiphase flow-induced forces, void fraction, and pressure is crucial at both the design and operating stages of practical energy and process pipe systems. In this study, transient numerical simulations of upward slug-churn flow through a vertical 90-degree elbow have been conducted. The volume of fluid (VOF) method was used to model the two-phase flow, while the k-epsilon Reynolds-Averaged Navier-Stokes (RANS) equations were used to model turbulence. The simulation results were validated against experiments: the void fraction signal, peak frequency, and maximum magnitude of void fraction fluctuation of the slug-churn flow validation cases compared well with the experimental results. The x- and y-direction force fluctuation signals at the elbow control volume were obtained by carrying out force balance calculations using time-domain signals of flow properties extracted directly from the control volume in the numerical simulation. The computed force signal also compared well with experiment for the slug and churn flow validation cases. Hence, the present numerical simulation technique was able to predict the behaviour of the one-way flow-induced forces and void fraction fluctuations.
Keywords: computational fluid dynamics, flow induced vibration, slug-churn flow, void fraction and force fluctuation
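As a reference point for the force-balance step, the steady single-phase momentum balance for a 90-degree elbow can be sketched as follows; the transient two-phase forces in the study fluctuate around such a baseline. Fluid properties and pressures below are illustrative, and gravity and friction are neglected.

```python
import math

def elbow_force_on_fluid(rho, q, d, p1, p2):
    """Steady control-volume momentum balance for a 90-degree elbow with
    inlet along +x and outlet along +y. Returns the (fx, fy) force the
    elbow exerts on the fluid; the anchoring force is equal and opposite.
    Gravity and wall friction are neglected in this sketch."""
    a = math.pi * d ** 2 / 4.0        # pipe flow area, m^2
    v = q / a                          # mean velocity, m/s
    m_dot = rho * q                    # mass flow rate, kg/s
    # x-momentum: p1*a + fx = m_dot * (0 - v)
    fx = -m_dot * v - p1 * a
    # y-momentum: -p2*a + fy = m_dot * (v - 0)
    fy = m_dot * v + p2 * a
    return fx, fy

# Illustrative water flow: 10 L/s in a 50 mm pipe at 2 bar absolute:
fx, fy = elbow_force_on_fluid(rho=1000.0, q=0.01, d=0.05, p1=2.0e5, p2=2.0e5)
```

In the two-phase simulation, the density, velocity, and pressure in this balance vary in time as slugs pass through the control volume, producing the fluctuating force signal.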
6601 A Survey and Analysis on Inflammatory Pain Detection and Standard Protocol Selection Using Medical Infrared Thermography from Image Processing View Point
Authors: Mrinal Kanti Bhowmik, Shawli Bardhan Jr., Debotosh Bhattacharjee
Abstract:
Human skin, being at a temperature above absolute zero, emits infrared radiation related to body temperature. Differences in infrared radiation from the skin surface reflect abnormalities present in the human body, and detecting and forecasting temperature variation of the skin surface is the main objective of using Medical Infrared Thermography (MIT) as a diagnostic tool for pain detection. MIT is a non-invasive imaging technique that records and monitors the temperature distribution of the body by receiving the infrared radiation emitted from the skin and representing it as a thermogram. The intensity of the thermogram measures inflammation at the skin surface related to pain in the human body. Analysis of thermograms provides automated anomaly detection associated with suspicious pain regions by following several image processing steps. The paper presents a rigorous study-based survey of the processing and analysis of thermograms, based on previous works published in the area of infrared thermal imaging for detecting inflammatory pain diseases such as arthritis, spondylosis, and shoulder impingement. The study also explores the performance analysis of thermogram processing, along with thermogram acquisition protocols, thermography camera specifications, and the types of pain detected by thermography, in a summarized tabular format that provides a clear structural view of past work. The major contribution of the paper is a new thermogram acquisition standard for inflammatory pain detection in the human body to enhance the performance rate. The FLIR T650sc infrared camera, with high sensitivity and resolution, is adopted to increase the accuracy of thermogram acquisition and analysis.
The survey of previous research highlights that intensity-distribution-based comparison of comparable, symmetric regions of interest, together with their statistical analysis, yields adequate results in identifying and detecting physiological disorders related to inflammatory diseases.
Keywords: acquisition protocol, inflammatory pain detection, medical infrared thermography (MIT), statistical analysis
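The symmetric-ROI comparison highlighted above can be sketched minimally: extract intensity statistics from two mirrored regions of a thermogram and flag a large mean difference. The synthetic image and box coordinates below are illustrative only.

```python
import numpy as np

def roi_asymmetry(thermogram, left_box, right_box):
    """Compare intensity statistics of two symmetric regions of interest
    (ROIs) in a thermogram; a large mean difference flags a suspicious
    inflammation site. Boxes are (row0, row1, col0, col1) index ranges."""
    def roi_stats(box):
        r0, r1, c0, c1 = box
        roi = thermogram[r0:r1, c0:c1]
        return roi.mean(), roi.std()
    left = roi_stats(left_box)
    right = roi_stats(right_box)
    return abs(left[0] - right[0]), left, right

# Synthetic 32x32 "thermogram" (degrees C) with a warm patch on the left only:
img = np.full((32, 32), 30.0)
img[8:16, 4:12] += 2.5                     # simulated inflamed region
delta, left, right = roi_asymmetry(img, (8, 16, 4, 12), (8, 16, 20, 28))
```

A thresholded `delta` (a clinically motivated cutoff would be needed) is the simplest form of the anomaly detection the surveyed works build on.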
6600 Suggestion of Methodology to Detect Building Damage Level Collectively with Flood Depth Utilizing Geographic Information System at Flood Disaster in Japan
Authors: Munenari Inoguchi, Keiko Tamura
Abstract:
In 2019, Japan suffered earthquake, typhoon, and flood disasters. In particular, 38 of 47 prefectures were affected by Typhoon #1919, which occurred in October 2019. This disaster left 99 people dead, three missing, and 484 injured. Furthermore, 3,081 buildings totally collapsed and 24,998 were half-collapsed. Once a disaster occurs, local responders have to inspect the damage level of each building themselves in order to certify building damage for survivors, who need the certificates to start their life reconstruction process. In this disaster, the total number of buildings to be inspected was very high. In response, the Cabinet Office of Japan approved an efficient way to detect building damage levels, namely collective detection. However, it provided only a guideline, and local responders had to establish a concrete and reliable method by themselves. Against this background, we set out to establish an effective and efficient methodology for collectively detecting building damage levels from flood depth. Since flood depth depends on land elevation, we decided to utilize GIS (Geographic Information System) to analyze elevation spatially, focusing on spatial interpolation tools of the kind usually used to survey groundwater levels. In establishing the methodology, we considered four key points: 1) how to satisfy the conditions defined in the guideline approved by the Cabinet Office for detecting building damage levels; 2) how to satisfy survivors with the resulting building damage levels; 3) how to maintain equity and fairness, because the detection of building damage levels is executed by a public institution; and 4) how to reduce time and human-resource costs, because responders do not have enough of either for disaster response. We then proposed a five-step methodology for collectively detecting building damage levels from flood depth utilizing GIS.
First is to obtain the boundary of the flooded area. Second is to collect actual flood depths as samples over the flooded area. Third is to execute spatial interpolation analysis with the sampled flood depths to detect the two-dimensional flood depth extent. Fourth is to divide the area into blocks by four categories of flood depth (non-flooded, over the floor to 100 cm, 100 cm to 180 cm, and over 180 cm), following road lines so as to gain acceptance from survivors. Fifth is to assign a flood depth level to each building. In Koriyama city, Fukushima prefecture, we proposed the methodology of collective detection of building damage levels described above, and local responders decided to adopt it for Typhoon #1919 in 2019. We and the local responders then collectively detected building damage levels for over 1,000 buildings. We have received good feedback that the methodology is simple and reduces time and human-resource costs.
Keywords: building damage inspection, flood, geographic information system, spatial interpolation
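Steps three and four above can be sketched with inverse-distance weighting; the paper uses a GIS interpolation tool, so IDW is one common stand-in, and the sample points below are illustrative. The four depth categories are taken from step four.

```python
import math

def idw_interpolate(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate of flood depth at (x, y) from
    sampled (xi, yi, depth) points. IDW stands in here for the GIS
    interpolation tool used in the study."""
    num = den = 0.0
    for xi, yi, di in samples:
        dist = math.hypot(x - xi, y - yi)
        if dist == 0.0:
            return di                     # exact hit on a sample point
        w = dist ** -power
        num += w * di
        den += w
    return num / den

def depth_category(depth_cm):
    """The four flood-depth blocks defined in step four."""
    if depth_cm <= 0:
        return "non-flooded"
    if depth_cm <= 100:
        return "over the floor to 100 cm"
    if depth_cm <= 180:
        return "100 cm to 180 cm"
    return "over 180 cm"

# Illustrative sample points (x, y, depth in cm):
pts = [(0, 0, 50.0), (10, 0, 150.0), (0, 10, 150.0), (10, 10, 250.0)]
d = idw_interpolate(pts, 5.0, 5.0)
```

Interpolating over a grid of (x, y) points and then categorizing block by block along road lines reproduces the shape of steps three and four.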
6599 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers
Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang
Abstract:
Strawberry powdery mildew (PM) is a serious disease that has a significant impact on strawberry production. Field scouting is still the main way to find PM disease; it is not only labor-intensive but also makes it almost impossible to monitor disease severity. To reduce the losses caused by PM disease and achieve faster automatic detection, this paper proposes an approach based on image texture, classified with support vector machines (SVMs) and k-nearest neighbors (kNNs). The methodology is based on image processing and is composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction, and classification. Two strawberry fields were used in this study. Images of healthy leaves and leaves infected with PM (Sphaerotheca macularis) disease were acquired under artificial cloud lighting conditions. Colour thresholding was utilized to segment all images before textural analysis. The colour co-occurrence matrix (CCM) was introduced for the extraction of textural features. Forty textural features, related to physiological parameters of the leaves, were extracted from CCMs of National Television System Committee (NTSC) luminance and hue, saturation, and intensity (HSI) images. The normalized feature data were utilized for training and validation with the developed classifiers, which were evaluated using internal, external, and cross-validation; the best classifier was selected based on performance and accuracy. Experimental results showed that the SVM classifier achieved 98.33%, 85.33%, 87.33%, 93.33%, and 95.0% accuracy on internal, external-I, external-II, 4-fold cross, and 5-fold cross-validation, respectively, whereas kNN achieved 90.0%, 72.00%, 74.66%, 89.33%, and 90.3%, respectively.
This study demonstrated that SVMs classified PM disease with the highest overall accuracy of 91.86% and a processing time of 1.1211 seconds. Overall, the results indicate that the proposed approach can significantly support accurate and automatic identification and recognition of strawberry PM disease with the SVM classifier.
Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors
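The co-occurrence-matrix feature extraction at the core of the method can be sketched with a single-channel, numpy-only matrix and three classic texture features; the study's forty features over NTSC/HSI channels are not reproduced here, so this is an illustrative reduction.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized
    to a joint probability table. The paper builds colour co-occurrence
    matrices per channel; this sketch uses one quantized channel."""
    h, w = image.shape
    m = np.zeros((levels, levels))
    for r in range(h - dy):
        for c in range(w - dx):
            m[image[r, c], image[r + dy, c + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Three classic Haralick-style features from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# A uniform patch has zero contrast and maximal energy; noise does not:
flat = np.zeros((16, 16), dtype=int)
noisy = np.random.default_rng(1).integers(0, 8, size=(16, 16))
c_flat, e_flat, _ = texture_features(glcm(flat))
c_noisy, e_noisy, _ = texture_features(glcm(noisy))
```

Stacking such features per channel, normalizing, and feeding them to an SVM or kNN mirrors the pipeline the abstract describes.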
6598 A Comparative Study of Cognitive Functions in Relapsing-Remitting Multiple Sclerosis Patients, Secondary-Progressive Multiple Sclerosis Patients and Normal People
Authors: Alireza Pirkhaefi
Abstract:
Background: Multiple sclerosis (MS) is one of the most common diseases of the central nervous system (brain and spinal cord). Given the importance of cognitive disorders in patients with multiple sclerosis, the present study compared cognitive functions (working memory, attention and concentration, and visual-spatial perception) in patients with relapsing-remitting multiple sclerosis (RRMS) and secondary progressive multiple sclerosis (SPMS). Method: The present study was a retrospective study conducted with an ex post facto method. The sample consisted of 60 patients with multiple sclerosis (30 relapsing-remitting and 30 secondary progressive), recruited by convenience sampling from a community of supported MS patients in Tehran; 30 normal persons were also selected as a comparison group. The Montreal Cognitive Assessment (MoCA) was used to assess cognitive functions, and data were analyzed using multivariate analysis of variance. Results: The results showed significant differences in cognitive functioning among patients with RRMS, patients with SPMS, and normal individuals. There were no significant differences in working memory between the RRMS and SPMS groups, while significant differences were seen between each of the two patient groups and normal individuals. The results also showed significant differences in attention and concentration and in visual-spatial perception among the three groups. Conclusions: The results showed differences between the cognitive functions of RRMS and SPMS patients, such that RRMS patients perform better than SPMS patients. These results can play a critical role in improving cognitive functions, reducing the factors causing disability due to cognitive impairment, and, especially, the overall health of society.
Keywords: multiple sclerosis, cognitive function, secondary-progressive, normal subjects
6597 Analysis of Underground Logistics Transportation Technology and Planning Research: Based on Xiong'an New Area, China
Authors: Xia Luo, Cheng Zeng
Abstract:
Promoted by the Central Committee of the Communist Party of China and the State Council in 2017, Xiong'an New Area is the third crucial new area established in China after Shenzhen and Shanghai. The significance of its construction lies in mitigating Beijing's non-capital functions and exploring a new mode of optimized development in densely populated and economically intensive areas. To this end, developing underground logistics can take over goods distribution in the capital, relieve road transport pressure in the Beijing-Tianjin-Hebei Urban Agglomeration, and adjust and optimize its urban layout and spatial structure. First, the construction planning of Xiong'an New Area and underground logistics development are summarized, with emphasis on the state of development abroad and the development trends and bottlenecks of underground logistics in China. The paper then explores the technicality, feasibility, and necessity of four modes of transportation: pneumatic capsule pipeline (PCP) technology, CargoCap technology, cable-hauled mules, and automated guided vehicles (AGVs). The technical parameters and characteristics of these modes were presented to relevant experts and scholars. By establishing an indicator system and carrying out a questionnaire survey with the Delphi method, the final suggestion is obtained: China should develop logistics vehicles similar to CargoCap, adopting a rail-based, driverless mode. Based on China's temporal and spatial logistics demand and the geographical pattern of Xiong'an New Area, the construction scale, technical parameters, node locations, and other vital parameters of the underground logistics system are planned. In this way, we hope to speed up the new area's construction and the logistics industry's innovation.
Keywords: the Xiong'an new area, underground logistics, contrastive analysis, CargoCap, logistics planning
6596 Detection and Classification of Rubber Tree Leaf Diseases Using Machine Learning
Authors: Kavyadevi N., Kaviya G., Gowsalya P., Janani M., Mohanraj S.
Abstract:
Hevea brasiliensis, also known as the rubber tree, is one of the foremost crop assets in the world. One of the most significant advantages of the rubber plant in terms of air oxygenation is its capacity to reduce the likelihood of an individual developing respiratory allergies like asthma. This work aims to construct a system that can properly identify crop diseases and pests and draw on a database of insecticides for each pest and disease, so that treatment can be provided for the illness that has been detected. This article primarily examines three major leaf diseases, as they cause economic losses: bird's eye spot, algal spot, and powdery mildew. The proposed work focuses on disease identification on rubber tree leaves, accomplished by employing a high-performing algorithm. The processing technique follows the steps of input, preprocessing, image segmentation, feature extraction, and classification, in place of the time-consuming procedures otherwise used to detect the disease. The main ailments, underlying causes, and signs and symptoms of diseases that harm the rubber tree are also covered in this study.
Keywords: image processing, python, convolution neural network (CNN), machine learning
6595 Structure and Properties of Meltblown Polyetherimide as High Temperature Filter Media
Authors: Gajanan Bhat, Vincent Kandagor, Daniel Prather, Ramesh Bhave
Abstract:
Polyetherimide (PEI), an engineering plastic with a very high glass transition temperature and excellent chemical and thermal stability, has been processed into controlled-porosity filter media of varying pore size, performance, and surface characteristics. A special grade of PEI was processed by melt blowing to produce microfiber nonwovens suitable as filter media. The resulting microfiber webs were characterized to evaluate their structure and properties. The fiber webs were further modified by hot pressing, a post-processing technique that reduces the pore size in order to improve the barrier properties of the resulting membranes. This ongoing research has shown that PEI can be a good candidate for filter media requiring high temperature and chemical resistance along with good mechanical properties. Also, by selecting appropriate processing conditions, it is possible to achieve the desired filtration performance from this engineering plastic.
Keywords: nonwovens, melt blowing, polyetherimide, filter media, microfibers
Procedia PDF Downloads 315
6594 Drying of Agro-Industrial Wastes Using a Cabinet Type Solar Dryer
Authors: N. Metidji, O. Badaoui, A. Djebli, H. Bendjebbas, R. Sellami
Abstract:
The agro-industry is considered one of the most waste-producing industrial fields as a result of food processing. Upgrading and reusing these wastes as animal or poultry feed seems to be a promising alternative. Combined with the use of clean energy resources, the recovery process would contribute further to environmental protection. It is in this framework that a new solar dryer has been designed in the Unit of Solar Equipment Development. Direct solar drying also has many advantages compared to natural sun drying: the product is protected by the drying chamber from direct sun, insects, and the exterior environment, and therefore does not degrade. The aim of this work is to study the drying kinetics of the waste generated during pepper processing, using a direct natural-convection solar dryer at 35 °C and 55 °C. The rate of moisture removal from the product to be dried was found to be directly related to temperature, humidity, and flow rate. The characterization of these parameters allowed the determination of the appropriate drying time for this product, namely pepper waste. Keywords: solar energy, solar dryer, energy conversion, pepper drying, forced convection solar dryer
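Drying kinetics of the kind described here are commonly summarized through the dimensionless moisture ratio MR = (M - Me) / (M0 - Me). The sketch below uses hypothetical hourly moisture readings, not data from the study, to show how the drying curve and drying rate are derived.

```python
def moisture_ratio(m, m0, me=0.0):
    """Dimensionless moisture ratio MR = (M - Me) / (M0 - Me)."""
    return (m - me) / (m0 - me)

# Hypothetical hourly dry-basis moisture contents (kg water / kg dry matter)
# for pepper waste dried at a fixed chamber temperature.
moisture = [4.0, 2.8, 1.9, 1.2, 0.7, 0.4]
mr = [moisture_ratio(m, moisture[0]) for m in moisture]

# Drying rate between consecutive readings (per hour): the drop in MR.
rates = [mr[i] - mr[i + 1] for i in range(len(mr) - 1)]
```

Plotting `mr` against time for the two chamber temperatures would give the drying curves from which the appropriate drying time is read off.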
Procedia PDF Downloads 411
6593 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language
Authors: Zhicheng Han
Abstract:
This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English. It investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which may be relevant to the debate over how formulas are conceptualized. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisition process insofar as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for active participation of interlanguage grammar in the acquisition and use of formulaic language, whereby formulas are represented in the mind but retain an internal structure built around a lexical core. This would make WM, especially its processing component, an important cognitive factor, since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1-Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task covering formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core in academic contexts. Participants ranked the options by how likely they would be to encounter each phrase in the test sentences within academic contexts. Participants' syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and preference for formulaic expressions in the ranking task, while no significant correlation was found for PSTM or syntactic proficiency. The correlational analysis found that WM, PSTM, and the two proficiency test scores covary significantly.
However, WM and PSTM have different predictive value for participants' preference for formulaic language. Both the storage and processing components of WM are significantly correlated with preference for formulaic expressions, while PSTM is not. These findings favor a role for interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions. The differing effects of WM and PSTM suggest that selective attention to, and processing of, the input beyond simple retention plays a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring ungrammatical phrases containing the lexical core of the formula over those without it, attesting to learners' awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by learners. Keywords: formulaic language, working memory, phonological short-term memory, academic language
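The kind of regression described above can be sketched with ordinary least squares. The data below are synthetic and the effect sizes hypothetical, chosen only to illustrate fitting preference scores against WM and PSTM as separate predictors; the study's actual measures and model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60  # hypothetical number of L1-Chinese learners

wm = rng.normal(size=n)    # working-memory processing score (synthetic)
pstm = rng.normal(size=n)  # phonological short-term memory span (synthetic)
# Simulated ranking-task preference score in which only WM carries signal.
pref = 0.7 * wm + 0.05 * pstm + rng.normal(scale=0.4, size=n)

# OLS with an intercept: pref ~ b0 + b1*wm + b2*pstm
X = np.column_stack([np.ones(n), wm, pstm])
beta, residuals, rank, _ = np.linalg.lstsq(X, pref, rcond=None)
b0, b_wm, b_pstm = beta
```

With real data, the relative magnitude and significance of `b_wm` versus `b_pstm` is what distinguishes the two memory systems as predictors.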
Procedia PDF Downloads 62