Search results for: figure and ground principle

161 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy

Authors: Ye Zhang, Chuanjun Liu

Abstract:

In-service aircraft are exposed to different types of threats, e.g., bird strike, ground vehicle impact, runway debris, or even lightning strike. To satisfy the aircraft damage tolerance design requirements, the designer has to understand the threat level for different types of aircraft structures, either metallic or composite. Exposure to low-velocity impacts may produce very serious internal damage, such as delaminations and matrix cracks, without leaving a visible mark on the impacted surfaces of composite structures. This internal damage can cause a significant reduction in the load-carrying capacity of structures. The semi-probabilistic method provides a practical and proper approximation for establishing the impact-threat-based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of impact threat and the realistic impact energy cut-off levels are essential for the certification of aircraft composite structures. A new survey of impact threat to in-service civil aircraft has recently been carried out based on field records covering around 500 civil aircraft (mainly single aisles) and more than 4.8 million flight hours. In total, 1,006 damages caused by low-velocity impact events were screened out from more than 8,000 records including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location of the aircraft structures and the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of civil aircraft in general, since the service environments and the maintenance operations are independent of the materials of the structures. The probability of impact damage occurrence (Po) and the probability of impact energy exceedance (Pe) are the two key parameters for describing the statistical distribution of impact threat. With the impact damage events from the survey, Po can be estimated as 2.1×10⁻⁴ per flight hour. Concerning the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to back-calculate the impact energy from the visible damage characteristics. The relationship between the visible dent depth and impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to follow a log-linear relationship versus the impact energy. For the product of the two aforementioned probabilities, it is reasonable and conservative to assume Pa = Po × Pe = 10⁻⁵, which indicates that low-velocity impact events are about as likely as Limit Load events. Combining Pa with the two probabilities Po and Pe obtained from the field survey, the cut-off level of realistic impact energy was estimated as 34 J. In summary, a new survey was recently conducted on field records of civil aircraft to investigate the probabilistic distribution of impact threat. Based on the data, the two probabilities, Po and Pe, were obtained. Considering a conservative assumption of Pa, the cut-off energy level for the realistic impact energy has been determined, which provides potential applicability in damage tolerance certification of future civil aircraft.
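The energy cut-off follows directly from the probability chain described above. The sketch below illustrates that arithmetic: Po and Pa are taken from the abstract, while the intercept and slope of the log-linear exceedance model are placeholder values, not the fitted survey parameters, so the resulting energy differs from the reported 34 J.

```python
import math

# Survey-based probability of impact damage occurrence (per flight hour), from the abstract
P_o = 2.1e-4
# Conservative combined probability assumed in the abstract (comparable to Limit Load events)
P_a = 1e-5

# Required exceedance probability at the energy cut-off
P_e_required = P_a / P_o

# Hypothetical log-linear exceedance model: log10(Pe) = a - b * E.
# a and b are illustrative placeholders, not the values fitted to the field survey.
a, b = 0.0, 0.05  # intercept [-], slope [1/J]

def exceedance(E):
    """Probability that a low-velocity impact exceeds energy E [J] under the assumed model."""
    return 10 ** (a - b * E)

# Invert the model to find the energy at which Pe drops to the required level
E_cutoff = (a - math.log10(P_e_required)) / b
print(f"Required Pe = {P_e_required:.3e}, energy cut-off ≈ {E_cutoff:.1f} J")
```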

Keywords: composite structure, damage tolerance, impact threat, probabilistic

Procedia PDF Downloads 308
160 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality

Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti

Abstract:

The chainsaw is still one of the most common pieces of equipment used in forestry for pruning, felling, and processing trees. However, chainsaw use is hazardous and has one of the highest accident rates in both professional and non-professional work. Felling is proportionally the most dangerous phase, in both severity and frequency, because of the risk of being hit by the tree the operator wants to cut down. To avoid this, a correct sequence of chainsaw cuts must be taught for the different conditions of the tree. Virtual reality (VR) makes it possible to simulate chainsaw use without danger of injury. The limitations of existing applications are as follows. Existing platforms are not symmetrically collaborative because only the trainee is in virtual reality, while the trainer can only see the virtual environment on a laptop or PC, which results in an inefficient teacher-learner relationship. Furthermore, most applications only involve a virtual chainsaw, so the trainee cannot feel the real weight and inertia of an actual chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrical collaborative training application based on VR and mixed reality (MR), with an overlap between the real and virtual chainsaw in MR. The research and training platform was developed for the Meta Quest 2 head-mounted display. The application is based on the Unity 3D engine and the Presence Platform Interaction SDK (PPI-SDK) developed by Meta. PPI-SDK avoids the use of controllers and enables hand tracking and MR. With the combination of these two technologies, it was possible to overlay a virtual chainsaw on a real chainsaw in MR and synchronize their movements in VR. This ensures that the user feels the weight of the actual chainsaw, tightens the muscles, and performs the appropriate movements during the test, allowing the user to learn the correct body posture. The chainsaw works only if the right sequence of cuts is made to fell the tree. Contact detection is done by Unity's physics system, which allows the interaction of objects that simulate real-world behavior. Each cut of the chainsaw is defined by a so-called collider, and the felling of the tree can only occur if the colliders are activated in the right order, simulating a safe felling technique. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: cutting a tree tilted forward, cutting a medium-sized tree tilted backward, cutting a large tree tilted backward, sectioning the trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a special questionnaire. The results are expected to assess both the increase in learning compared to a theoretical lecture and the immersiveness and telepresence of the platform.
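The ordered-collider mechanic described above can be expressed as a small state machine. The sketch below is engine-agnostic and written in Python for illustration; in the actual application this logic would live in Unity (C#) collider callbacks, and the cut names and sequence are placeholders, not the ones used by the platform.

```python
# Minimal sketch of the ordered-cut validation described above (illustrative cut names).
SAFE_SEQUENCE = ["top_notch_cut", "bottom_notch_cut", "back_felling_cut"]

class FellingTrainer:
    def __init__(self, safe_sequence):
        self.safe_sequence = safe_sequence
        self.completed = []

    def on_cut_collider_triggered(self, cut_name):
        """Called whenever the virtual chainsaw activates a cut collider."""
        if len(self.completed) == len(self.safe_sequence):
            return "tree already felled"
        expected = self.safe_sequence[len(self.completed)]
        if cut_name == expected:
            self.completed.append(cut_name)
            if self.completed == self.safe_sequence:
                return "tree felled safely"
            return "correct cut, continue"
        # A wrong cut resets progress and gives feedback to the trainee
        self.completed.clear()
        return f"unsafe cut '{cut_name}', expected '{expected}' - sequence reset"

trainer = FellingTrainer(SAFE_SEQUENCE)
print(trainer.on_cut_collider_triggered("top_notch_cut"))
print(trainer.on_cut_collider_triggered("back_felling_cut"))  # out of order -> reset
```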

Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training

Procedia PDF Downloads 107
159 The Rite of Jihadification in ISIS Modified Video Games: Mass Deception and Dialectic of Religious Regression in Technological Progression

Authors: Venus Torabi

Abstract:

ISIS, the terrorist organization, modified two videogames, ARMA III and Grand Theft Auto 5 (2013), as means of online recruitment and ideological propaganda. The urge to study the mechanism at work, whether it has been successful or not, drives (Digital) Humanities experts to explore how codes of terror, Islamic ideology, and recruitment strategies are incorporated into the ludic mechanics of videogames. Another aspect of the significance lies in the fact that this is a latent problem that has not been fully addressed in an interdisciplinary framework prior to this study, to the best of the researcher’s knowledge. Therefore, due to the complexity of the subject, the present paper draws on game studies as well as philosophical and religious perspectives to form the methodology of the research. As a contextualized epistemology of such exploitation of videogames, the core argument builds on the notion of the “Culture Industry” proposed by Theodor W. Adorno and Max Horkheimer in Dialectic of Enlightenment (2002). This article posits that the ideological underpinnings of ISIS’s cause, corroborated by the action-bound mechanics of the videogames, adhere to Islamic Eschatology as a furnishing ground and an excuse for exercising terrorism. It is an account of ISIS’s modification of the videogames, a tool of technological progression, to practice online radicalization. Dialectically, this practice is packaged in rhetoric for recognizing a religious myth (the advent of a savior), as a hallmark of regression. The study puts forth that ISIS’s wreaking havoc on the world, both in reality and within action videogames, negotiates a process of self-assertion in the players of such videogames (by assuming oneself a member of the terrorists) that leads to self-annihilation. It tries to unfold how ludic mod videogames are misused as tools of mass deception towards ethnic cleansing in reality, in line with the distorted Eschatological myth. To conclude, this study posits videogames to be a new avenue of mass deception in the framework of the Culture Industry. Yet, this emerges as a two-edged sword of mass deception in ISIS’s modification of videogames. It shows that ISIS is not only trying to hijack minds through online/ludic recruitment; it potentially deceives Muslim communities, or those prone to radicalization, into believing that its terrorist practices are preparing the world for the advent of a religious savior based on Islamic Eschatology. This is to claim that the harsh actions of the videogames are potentially seeding minds with terrorist propaganda and numbing them to violence. The real world becomes an extension of that harsh virtual environment in a ludic/actual continuum, an extension that contributes to the mass deception mechanism of the terrorists in a clandestine manner.

Keywords: culture industry, dialectic, ISIS, islamic eschatology, mass deception, video games

Procedia PDF Downloads 137
158 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-on-a-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach to the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to quantifying the acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the current ‘beaker’ method was to reduce the amount of radioactive substance handled by laboratory personnel, to ease the adjustability of the instrumentation within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create, inside a 200 μm x 5 cm circular cylindrical micro-channel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be fully achieved in well under one second, making it more time-efficient than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
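The gradient generation relies on Taylor-Aris dispersion. As a rough, order-of-magnitude illustration of that physics, the sketch below computes the effective axial dispersion coefficient for a 200 μm circular channel; the flow velocity and molecular diffusivity are assumed placeholder values, not parameters reported by the study.

```python
# Order-of-magnitude sketch of Taylor-Aris dispersion in a 200 um circular micro-channel.
# Flow velocity and molecular diffusivity are illustrative assumptions, not study values.

R = 100e-6        # channel radius [m] (200 um diameter, from the abstract)
U = 1e-2          # assumed mean flow velocity [m/s]
D_m = 1e-9        # assumed molecular diffusivity of the titrant [m^2/s]

# Taylor-Aris effective axial dispersion coefficient for laminar flow in a circular tube
D_eff = D_m + (R**2 * U**2) / (48 * D_m)

# Radial Peclet number indicates how strongly shear-induced dispersion dominates diffusion
Pe = U * R / D_m

print(f"Peclet number       : {Pe:.1e}")
print(f"Effective dispersion: {D_eff:.3e} m^2/s (vs. molecular {D_m:.1e} m^2/s)")
```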

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 387
157 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂

Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine

Abstract:

Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. In spite of the different ways of handling used tires, the most common is to deposit them in a landfill, creating a stock of tires. These stocks can cause fire danger and provide a habitat for rodents, mosquitoes and other pests, causing health hazards and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique that can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process where poly-, di-, and mono-sulfidic bonds, formed during vulcanization, are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) was proposed as a green devulcanization atmosphere. This is because it is chemically inactive, nontoxic, nonflammable and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percentage and by its crosslink density measured by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters. The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases. The values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. Results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that the devulcanization happened successfully without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to carry out further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).
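Crosslink density from swelling in toluene is commonly estimated with the Flory-Rehner equation. The sketch below shows that calculation; the rubber volume fraction and interaction parameter are illustrative placeholders (the abstract does not report the measured swelling values), so the output is not a result of the study.

```python
import math

# Flory-Rehner estimate of crosslink density from equilibrium swelling in toluene.
# V_r and chi below are placeholder values, not measurements from this study.

V_s = 106.3      # molar volume of toluene [cm^3/mol]
chi = 0.39       # assumed rubber-toluene interaction parameter (typical literature value)
V_r = 0.25       # assumed volume fraction of rubber in the swollen gel (placeholder)

# Crosslink density nu [mol/cm^3]
nu = -(math.log(1 - V_r) + V_r + chi * V_r**2) / (V_s * (V_r**(1/3) - V_r / 2))
print(f"Estimated crosslink density: {nu:.2e} mol/cm^3")
```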

Keywords: devulcanization, recycling, rubber, waste

Procedia PDF Downloads 384
156 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, thus forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is very laborious and time-consuming, or is provided by financial institutions and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights into the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies’ websites. While WM methods have frequently been studied to anticipate the growth of sales volume for e-commerce platforms, their application for the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company is used as ground truth data (i.e., growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms such as the Support Vector Machine, Random Forest and Artificial Neural Network are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
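The classifier comparison described above can be sketched in a few lines of scikit-learn. In the snippet below, the feature matrix and growth labels are synthetic placeholders standing in for the questionnaire plus web-mined features and the insurer-provided ground truth; only the overall comparison pattern is meant to be illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # placeholder feature matrix (incl. web-mined features)
y = rng.integers(0, 2, size=500)        # placeholder binary growth labels (ground truth)

models = {
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "ANN (MLP)": make_pipeline(StandardScaler(),
                               MLPClassifier(max_iter=1000, random_state=0)),
}

# 5-fold cross-validated AUC for each candidate model
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:14s} mean AUC = {scores.mean():.3f}")
```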

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 266
155 Coordinative Remote Sensing Observation Technology for a High Altitude Barrier Lake

Authors: Zhang Xin

Abstract:

Barrier lakes are lakes formed by water impounded in valleys, river valleys or riverbeds after they have been blocked by landslides, earthquakes, debris flows, or other factors. They pose great potential safety hazards. When the water reaches a certain level, the barrier may burst in the case of a strong earthquake or rainstorm, and the lake water overflows, resulting in large-scale flood disasters. In order to ensure the safety of people's lives and property downstream, it is very necessary to monitor barrier lakes. However, manual monitoring of barrier lakes in high-altitude areas is very difficult and time-consuming due to the harsh climate and steep terrain. With the development of earth observation technology, remote sensing monitoring has become one of the main ways of obtaining observation data. Compared with a single satellite, multi-satellite coordinative remote sensing observation has more advantages: its spatial coverage is extensive, observation is continuous in time, imaging types and bands are abundant, it can monitor and respond quickly to emergencies, and it can complete complex monitoring tasks. Monitoring with multi-temporal and multi-platform remote sensing satellites makes it possible to obtain a variety of observation data in time, acquire key information such as the water level and water storage capacity of the barrier lake, scientifically judge the situation of the barrier lake, and reasonably predict its future development trend. In this study, Lake Sarez, which formed on February 18, 1911, in the central part of the Pamir as a result of the blockage of the Murgab River valley by a landslide triggered by a strong earthquake with a magnitude of 7.4 and an intensity of 9, is selected as the research area. Since its formation, Lake Sarez has aroused widespread international concern about its safety. At present, mechanical methods are more commonly used in international analyses of the safety of Lake Sarez, and remote sensing methods are seldom used. This study combines remote sensing data with field observation data and uses 'space-air-ground' joint observation technology to study the changes in the water level and water storage capacity of Lake Sarez over recent decades and to evaluate its safety. A dam-break scenario is simulated, and the future development trend of Lake Sarez is predicted. The results show that: 1) in recent decades, the water level of Lake Sarez has not changed much and has remained stable; 2) unless there is a strong earthquake or heavy rain, it is unlikely that Lake Sarez will break out under normal conditions; 3) Lake Sarez will remain stable in the future, but it is necessary to establish an early warning system for remote sensing of the Lake Sarez area; 4) coordinative remote sensing observation technology is feasible for the high-altitude barrier lake of Sarez.
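One routine step in this kind of monitoring is estimating the lake's open-water extent from optical imagery. The sketch below uses the NDWI water index for that purpose; the index choice, threshold, band arrays and pixel size are assumptions for illustration, since the abstract does not specify the sensors or processing chain used for Lake Sarez.

```python
import numpy as np

def lake_area_from_ndwi(green, nir, pixel_size_m=10.0, threshold=0.2):
    """Estimate open-water area [km^2] from co-registered green and NIR reflectance arrays."""
    ndwi = (green - nir) / (green + nir + 1e-9)   # McFeeters NDWI: water appears positive
    water_mask = ndwi > threshold
    area_m2 = water_mask.sum() * pixel_size_m**2
    return area_m2 / 1e6

# Synthetic example: a 1000 x 1000 scene with a bright-green, dark-NIR "lake" patch
green = np.full((1000, 1000), 0.08)
nir = np.full((1000, 1000), 0.20)
green[300:700, 300:700], nir[300:700, 300:700] = 0.12, 0.03
print(f"Estimated lake area: {lake_area_from_ndwi(green, nir):.1f} km^2")
```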

Keywords: coordinative observation, disaster, remote sensing, geographic information system, GIS

Procedia PDF Downloads 127
154 Mapping the Suitable Sites for Food Grain Crops Using Geographical Information System (GIS) and Analytical Hierarchy Process (AHP)

Authors: Md. Monjurul Islam, Tofael Ahamed, Ryozo Noguchi

Abstract:

Progress continues in the fight against hunger, yet an unacceptably large number of people still lack the food they need for an active and healthy life. Bangladesh is one of the rising countries in South Asia, but many people are still food insecure. In the last few years, Bangladesh has made significant achievements in food grain production, but food security from the national to the individual level remains a matter of major concern. Ensuring food security for all is one of the major challenges that Bangladesh faces today, especially the production of rice in the flood- and poverty-prone areas. The northern part is more vulnerable than any other part of Bangladesh. To ensure food security, one of the best ways is to increase domestic production. To increase production, it is necessary to secure lands for achieving optimum utilization of resources. One of the measures is to identify the vulnerable and potential areas using Land Suitability Assessment (LSA) to increase rice production in the poverty-prone areas. Therefore, the aim of the study was to identify suitable sites for food grain (rice) production in the poverty-prone areas located in the northern part of Bangladesh. Lack of knowledge on the best combination of factors that suit the production of rice has contributed to the low production. To fulfill the research objective, a multi-criteria analysis was performed and a suitability map for crop production was produced with the help of a Geographical Information System (GIS) and the Analytical Hierarchy Process (AHP). Primary and secondary data were collected from ground truth information and relevant offices. The suitability levels for each factor were ranked based on the structure of the FAO land suitability classification as: Permanently Not Suitable (N2), Currently Not Suitable (N1), Marginally Suitable (S3), Moderately Suitable (S2) and Highly Suitable (S1). The suitable sites were identified using spatial analysis and compared with a recent raster image from Google Earth Pro® to validate the reliability of the suitability analysis. To produce the suitability map for rice farming using GIS and a multi-criteria analysis tool, AHP was used to rank the relevant factors, and the resultant weights were used to create the suitability map using the weighted sum overlay tool in ArcGIS 10.3®. Then, the suitability map for rice production in the study area was formed. The weighted overlay showed that 22.74% (1337.02 km²) of the study area was highly suitable, while 28.54% (1678.04 km²) was moderately suitable, 14.86% (873.71 km²) was marginally suitable, and 1.19% (69.97 km²) was currently not suitable for rice farming. On the other hand, 32.67% (1920.87 km²) was permanently not suitable, being occupied by settlements, rivers, water bodies and forests. This research provided information at the local level that could be used by farmers to select suitable fields for rice production, and the approach can then be applied to other crops. It will also be helpful for field workers and policy planners who serve in the agricultural sector.
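The AHP weighting and weighted-sum overlay steps described above can be reproduced numerically. In the sketch below, the 4 x 4 pairwise comparison matrix and the criterion names are illustrative placeholders, not the matrix or criteria used in the study; the overlay line mirrors what the ArcGIS Weighted Sum tool computes per pixel.

```python
import numpy as np

criteria = ["soil", "rainfall", "elevation", "land use"]   # placeholder criteria
A = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
], dtype=float)                                             # placeholder pairwise judgments

# Principal eigenvector of the comparison matrix gives the criterion weights
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio check (random index RI = 0.90 for n = 4)
lam_max = eigvals.real[k]
CI = (lam_max - len(A)) / (len(A) - 1)
CR = CI / 0.90
print(dict(zip(criteria, weights.round(3))), f"CR = {CR:.3f}")

# Weighted-sum overlay: each raster holds per-pixel suitability scores for one criterion
rasters = np.random.rand(len(criteria), 100, 100)       # placeholder criterion rasters
suitability = np.tensordot(weights, rasters, axes=1)    # per-pixel weighted sum
```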

Keywords: AHP, GIS, spatial analysis, land suitability

Procedia PDF Downloads 241
153 An Exploratory Study of Changing Organisational Practices of Third-Sector Organisations in Mandated Corporate Social Responsibility in India

Authors: Avadh Bihari

Abstract:

Corporate social responsibility (CSR) has become a global parameter to define corporates' ethical and responsible behaviour. It was a voluntary practice in India until 2013, driven by various guidelines, and has been a mandate since 2014 under the Companies Act, 2013. This has compelled corporates to redesign their CSR strategies by bringing structures, planning, accountability, and transparency into their processes, with a mandate to 'comply or explain'. Based on the author's M.Phil. dissertation, this paper presents the changes in organisational practices and institutional mechanisms of third-sector organisations (TSOs) through the theoretical frameworks of institutionalism and co-optation. It became an interesting case as India is the only country to have a law on CSR that mandates not only the reporting but also the spending. The space of CSR in India is changing rapidly and affecting multiple institutions, in the context of the changing roles of the state, market, and TSOs. Several factors, such as stringent regulation of foreign funding, mandatory CSR pushing corporates to look out for NGOs, and the dependency of Indian NGOs on CSR funds, have come to the fore almost simultaneously, which made this an important area of study. Further, the paper aims at addressing the gap in the literature on the effects of mandated CSR on the functioning of TSOs through the empirical and theoretical findings of this study. The author adopted an interpretivist position in this study to explore changes in organisational practices from the participants' experiences. Data were collected through in-depth interviews with five corporate officials, eleven officials from six TSOs, and two academicians, located in Mumbai and Delhi, India. The findings of this study show that the legislation has institutionalised CSR, and TSOs get co-opted in the process of implementing mandated CSR. Seventy percent of corporates in India implement their CSR projects through TSOs; this has affected the organisational practices of TSOs to a large extent. They are compelled to recruit an expert workforce, create new departments for monitoring & evaluation and communications, and adopt corporate management practices of project implementation. These are attempts to institutionalise the TSOs so that they can produce calculated results as demanded by corporates. In this process, TSOs get co-opted in a struggle to secure funds and lose their autonomy. The normative, coercive, and mimetic isomorphisms of institutionalism come into play as corporates are mandated to take up CSR, thereby influencing the organisational practices of TSOs. These results suggest that corporates and TSOs require an understanding of each other's work culture to develop mutual respect and work towards the goal of sustainable development of the communities. Further, TSOs need to retain their autonomy and understanding of ground realities, without which they become an extension of the corporate funder. For a successful CSR project, engagement beyond funding is required from corporates, through involvement rather than interference. CSR-led community development can be structured by management practices to an extent, but such practices cannot overshadow the knowledge and experience of TSOs.

Keywords: corporate social responsibility, institutionalism, organisational practices, third-sector organisations

Procedia PDF Downloads 114
152 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most widely used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. Often, the data are rarely linearly separable. SVMs implicitly map the data into a higher-dimensional space where they become linearly separable, while the kernel trick allows all computations to be performed in the original space. This is one of the main reasons why SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. First, connected component analysis is applied to cluster the point cloud. Second, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored. It consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets are selected randomly several times. The obtained results demonstrated that parameter selection can orient the search towards a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data compared to the other classifiers.
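The (C, γ) grid search with 5-fold cross-validation described above maps directly onto scikit-learn's GridSearchCV. In the sketch below, the per-cluster feature matrix, labels, and grid ranges are placeholders, since the abstract does not report the exact features or search bounds used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 12))      # placeholder features per connected-component cluster
y = rng.integers(0, 4, size=800)    # 4 classes: ground + 3 roof-superstructure classes

pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC(kernel="rbf"))])
param_grid = {
    "svm__C": 10.0 ** np.arange(-2, 4),      # coarse grid; can be refined around the optimum
    "svm__gamma": 10.0 ** np.arange(-4, 2),
}

search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print("best (C, gamma):", search.best_params_,
      "CV accuracy:", round(search.best_score_, 3))
```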

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 147
151 Mapping Potential Soil Salinization Using Rule Based Object Oriented Image Analysis

Authors: Zermina Q., Wasif Y., Naeem S., Urooj S., Sajid R. A.

Abstract:

Land degradation, a decrease in the quality of land and a leading environmental problem, has become a major global issue caused by human activities. More than half of the world’s drylands are affected by land degradation. The worldwide extent of primary saline soils is approximately 955 M ha, whereas secondary salinization affects approximately 77 M ha; a total of 58% of these soils is found in irrigated areas. As most vegetation types require fertile soil for their growth and quality production, salinity causes serious problems for the production of these vegetation types and for agricultural demands. This research aims to identify the salt-affected areas in the selected part of the Indus Delta, Sindh province, Pakistan. This mangrove-dominated coastal belt is important to the local community for crop growth. An object-based image analysis approach was adopted on Landsat TM imagery of the year 2011, incorporating different mathematical band ratios, thermal radiance and a salinity index. Accuracy assessment of the developed salinity land-cover map was performed using the Erdas Imagine Accuracy Assessment Utility. The rain factor was also considered before acquiring satellite imagery and conducting the field survey, as wet soil can greatly affect the condition of the saline soil of the area. The dry season is considered best for remote sensing-based observation and monitoring of saline soil. These areas were trained with ground truth data with respect to the pH and electrical conductivity of the soil samples. The results obtained from the object-based image analysis of Keti Bunder and Kharo Chan show most of the region under low-salinity soil. The total salt-affected soil was measured to be 46,581.7 ha in Keti Bunder, which represents 57.81% of the total area of 80,566.49 ha. The high-salinity area was about 7,944.68 ha (9.86%), the medium-salinity area was about 17,937.26 ha (22.26%) and the low-salinity area was about 20,699.77 ha (25.69%). The total salt-affected soil was measured to be 52,821.87 ha in Kharo Chan, which represents 55.87% of the total area of 94,543.54 ha. The high-salinity area was about 5,486.55 ha (5.80%), the medium-salinity area was about 13,354.72 ha (14.13%) and the low-salinity area was about 33,980.61 ha (35.94%). These results show that the area is low to medium saline in nature. The accuracy of the soil salinity map was found to be 83%, with a Kappa coefficient of 0.77. From this research, it was evident that this area as a whole falls under the category of low to medium salinity and, being close to the coast, mangrove forest can flourish. As mangroves are salt-tolerant plants, this area is considered a haven for mangrove plantation, which would ultimately benefit both the local community and the environment. An increase in mangrove forest would help control the problem of soil salinity and prevent seawater from intruding further into the coastal area, so mangrove deforestation should be regularly monitored.
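A band-ratio salinity indicator with rule-based thresholds is at the core of this kind of analysis. The sketch below shows one such indicator and a simple class labelling; the specific index, thresholds, and reflectance arrays are illustrative assumptions, since the study combined several band ratios, thermal radiance, and field pH/EC calibration.

```python
import numpy as np

def salinity_index(red, nir):
    """Normalized Differential Salinity Index (NDSI) from Landsat TM red and NIR reflectance."""
    return (red - nir) / (red + nir + 1e-9)

def classify_salinity(ndsi, low=0.0, medium=0.15, high=0.3):
    """Rule-based labelling: 0 = non-saline, 1 = low, 2 = medium, 3 = high (placeholder cuts)."""
    classes = np.zeros_like(ndsi, dtype=np.int8)
    classes[ndsi >= low] = 1
    classes[ndsi >= medium] = 2
    classes[ndsi >= high] = 3
    return classes

red = np.random.rand(500, 500) * 0.4    # placeholder surface reflectance
nir = np.random.rand(500, 500) * 0.4
labels = classify_salinity(salinity_index(red, nir))

# Area per class in hectares, assuming 30 m Landsat TM pixels
area_ha = {c: (labels == c).sum() * 30 * 30 / 10_000 for c in range(4)}
print(area_ha)
```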

Keywords: indus delta, object based image analysis, soil salinity, thematic mapper

Procedia PDF Downloads 619
150 Elaboration and Characterization of In-Situ CrC-Ni(Al, Cr) Composites Elaborated from Ni and Cr₂AlC Precursors

Authors: A. Chiker, A. Benamor, A. Haddad, Y. Hadji, M. Hadji

Abstract:

Metal matrix composites (MMCs) have been of great interest for a few decades. Their major appeal lies in their enhanced mechanical performance over unreinforced alloys. They have gained ground in many engineering fields, such as aeronautics, aerospace, automotive, and other structural applications. Nickel alloys are among the most widely used matrix alloys, as they meet the need for high-temperature mechanical properties; some attempts have been made to develop nickel-based composites reinforced by high-melting-point, high-modulus particulates. Among the carbides used as reinforcing particulates, chromium carbide is interesting for wear applications; it is widely used as a tribological coating material in high-temperature applications requiring high wear resistance and hardness. Moreover, a set of properties makes it suitable for use in MMCs, such as its toughness, the good corrosion and oxidation resistance of its three polymorphs (the cubic Cr₂₃C₆, the hexagonal Cr₇C₃, and the orthorhombic Cr₃C₂), and its coefficient of thermal expansion, which is almost equal to that of metals. The in-situ synthesis of CrC-reinforced Ni matrix composites can be achieved by the powder metallurgy route. To ensure the in-situ reactions during the sintering process, the use of phase precursors is necessary. Recently, new precursor materials have been proposed; these materials are called MAX phases. The MAX phases are thermodynamically stable nano-laminated materials displaying unusual and sometimes unique properties. These novel phases possess Mₙ₊₁AXₙ chemistry, where n is 1, 2, or 3, M is an early transition metal element, A is an A-group element, and X is C or N. Herein, the pressureless sintering method is used to elaborate Ni/Cr₂AlC composites. Four composites were elaborated from 5, 10, 15 and 20 wt% of Cr₂AlC MAX phase precursor, which fully reacted with the Ni matrix at a sintering temperature of 1100 °C for 4 h in an argon atmosphere. XRD results showed that the Cr₂AlC MAX phase was totally decomposed, forming chromium carbide Cr₇C₃, while the released Al and Cr atoms diffused into the Ni matrix, giving rise to a γ-Ni(Al,Cr) solid solution and the γ’-Ni₃(Al,Cr) intermetallic. Scanning electron microscopy (SEM) of the elaborated samples showed the presence of nanosized Cr₇C₃ reinforcing particles embedded in the Ni metal matrix, which have a direct impact on the tribological properties of the composites and their hardness. All the composites exhibited higher hardness than pure Ni, with the addition of 15 wt% of Cr₂AlC giving the highest hardness (1.85 GPa). Using a ball-on-disc tribometer, dry sliding tests of the elaborated composites against a 100Cr6 steel ball were conducted under different applied loads. The microstructures and worn surface characteristics were then analyzed using SEM and Raman spectroscopy. The results show that all the composites exhibited better wear resistance compared to pure Ni, which could be explained by the formation of a lubricious tribo-layer during sliding and the good bonding between the Ni matrix and the reinforcing phases.

Keywords: composites, microscopy, sintering, wear

Procedia PDF Downloads 70
149 Role of Empirical Evidence in Law-Making: Case Study from India

Authors: Kaushiki Sanyal, Rajesh Chakrabarti

Abstract:

In India, on average, about 60 Bills are passed every year in the two Houses of Parliament – the Lok Sabha and the Rajya Sabha (calculated from information on the websites of both Houses). These are debated in both the Lok Sabha (House of the People) and the Rajya Sabha (Council of States) before they are passed. However, lawmakers rarely use empirical evidence to make the case for a law. Most of the time, they support a law on the basis of anecdote, intuition, and common sense. While these do play a role in law-making, without the necessary empirical evidence, laws often fail to achieve their desired results. The quality of legislative debates is an indicator of the efficacy of the legislative process through which a Bill is enacted. However, the study of legislative debates has not received much attention either in India or worldwide due to the difficulty of objectively measuring the quality of a debate. Broadly, three approaches have emerged in the study of legislative debates. The rational-choice or formal approach shows that speeches vary based on different institutional arrangements, intra-party politics, and the political culture of a country. The discourse approach focuses on the underlying rules and conventions and how they impact the content of the debates. The deliberative approach posits that legislative speech can be reasoned, respectful, and informed. This paper aims to (a) develop a framework for judging the quality of debates using the deliberative approach; (b) examine the legislative debates on three Bills passed in different periods as a demonstration of the framework; and (c) examine the broader structural issues that disincentivise MPs from scrutinizing Bills. The framework would include qualitative and quantitative indicators to judge a debate. The idea is that the framework would provide useful insights into the legislators’ knowledge of the subject, the depth of their scrutiny of Bills, and their inclination toward evidence-based research. The three Bills that the paper plans to examine are as follows: 1. The Narcotic Drugs and Psychotropic Substances Act, 1985: This act was passed to curb drug trafficking and abuse. However, it mostly failed to fulfill its purpose. Consequently, it was amended thrice, but without much impact on the ground. 2. The Criminal Law (Amendment) Act, 2013: This act amended the Indian Penal Code to add a section on human trafficking. The purpose was to curb trafficking and penalise traffickers, pimps, and middlemen. However, the crime rate remains high while the conviction rate is low. 3. The Surrogacy (Regulation) Act, 2021: This act bans commercial surrogacy, allowing only relatives to act as surrogates as long as there is no monetary payment. Experts fear that instead of preventing commercial surrogacy, it would drive the activity underground. The consequences would be borne by the surrogate, who would not be protected by law. The purpose of the paper is to objectively analyse the quality of parliamentary debates, gain insights into how MPs understand evidence, and deliberate on steps to incentivise them to use empirical evidence.

Keywords: legislature, debates, empirical, India

Procedia PDF Downloads 86
148 A Comparison of Videography Tools and Techniques in African and International Contexts

Authors: Enoch Ocran

Abstract:

Film Pertinence maintains consistency in storytelling by sustaining the natural flow of action while evoking a particular feeling or emotion from the viewers with selected motion pictures. This study presents a thorough investigation of "Film Pertinence" in videography that examines its influence in Africa and around the world. This research delves into the dynamic realm of visual storytelling through film, with a specific focus on the concept of Film Pertinence (FP). The study’s primary objectives are to conduct a comparative analysis of videography tools and techniques employed in both African and international contexts, examining how they contribute to the achievement of organizational goals and the enhancement of cultural awareness. The research methodology includes a comprehensive literature review, interviews with videographers from diverse backgrounds in Africa and the international arena, and the examination of pertinent case studies. The investigation aims to elucidate the multifaceted nature of videographic practices, with particular attention to equipment choices, visual storytelling techniques, cultural sensitivity, and adaptability. This study explores the impact of cultural differences on videography choices, aiming to promote understanding between African and foreign filmmakers and create more culturally sensitive films. It also explores the role of technology in advancing videography practices, resource allocation, and the influence of globalization on local filmmaking practices. The research also contributes to film studies by analyzing videography's impact on storytelling, guiding filmmakers to create more compelling narratives. The findings can inform film education, tailoring curricula to regional needs and opportunities. The study also encourages cross-cultural collaboration in the film industry by highlighting convergence and divergence in videography practices. At its core, this study seeks to explore the implications of film pertinence as a framework for videographic practice. It scrutinizes how cultural expression, education, and storytelling transcend geographical boundaries on a global scale. By analyzing the interplay between tools, techniques, and context, the research illuminates the ways in which videographers in Africa and worldwide apply film Pertinence principles to achieve cross-cultural communication and effectively capture the objectives of their clients. One notable focus of this paper is on the techniques employed by videographers in West Africa to emphasize storytelling and participant engagement, showcasing the relevance of FP in highlighting cultural awareness in visual storytelling. Additionally, the study highlights the prevalence of film pertinence in African agricultural documentaries produced for esteemed organizations such as the Roundtable on Sustainable Palm Oil (RSPO), Proforest, World Food Program, Fidelity Bank Ghana, Instituto BVRio, Aflatoun International, and the Solidaridad Network. These documentaries serve to promote prosperity, resilience, human rights, sustainable farming practices, community respect, and environmental preservation, underlining the vital role of film in conveying these critical messages. In summary, this research offers valuable insights into the evolving landscape of videography in different contexts, emphasizing the significance of film pertinence as a unifying principle in the pursuit of effective visual storytelling and cross-cultural communication.

Keywords: film pertinence, Africa, cultural awareness, videography tools

Procedia PDF Downloads 67
147 Monitoring of Rice Phenology and Agricultural Practices from Sentinel 2 Images

Authors: D. Courault, L. Hossard, V. Demarez, E. Ndikumana, D. Ho Tong Minh, N. Baghdadi, F. Ruget

Abstract:

In the context of global change, efficient management of the available resources has become one of the most important topics, particularly for sustainable crop development. Timely assessment with high precision is crucial for water resource and pest management. Rice cultivated in the Camargue region of Southern France must face a dual challenge: reducing soil salinity by flooding while at the same time reducing the use of herbicides that negatively impact the environment. This context has led farmers to diversify their crop rotations and their agricultural practices. The objective of this study was to evaluate this crop diversity, both in cropping systems and in agricultural practices applied to rice paddies, in order to quantify the impact on the environment and on crop production. The proposed method is based on the combined use of crop models and multispectral data acquired from the recent Sentinel 2 satellite sensors launched by the European Space Agency (ESA) within the framework of the Copernicus program. More than 40 images at fine spatial resolution (10 m in the optical range) were processed for 2016 and 2017 (with a revisit time of 5 days) to map crop types using the random forest method and to estimate biophysical variables (LAI) retrieved by inversion of the PROSAIL canopy radiative transfer model. Thanks to the high revisit frequency of Sentinel 2 data, it was possible to monitor soil tillage before flooding and the second sowing made by some farmers to better control weeds. The temporal trajectories of the remote sensing data were analyzed for various rice cultivars to define the main parameters describing the phenological stages, which were used to calibrate two crop models (STICS and SAFY). Results were compared to surveys conducted on 10 farms. A large variability of LAI was observed at farm scale (up to 2-3 m²/m²), which induced a significant variability in the simulated yields (up to 2 t/ha). Observations on land use were also collected on more than 300 fields. Various maps were produced: land use, LAI, flooding and sowing dates, and harvest dates. All these maps allow a new typology to be proposed for classifying these paddy cropping systems. Key phenological dates can be estimated from inverse procedures and were validated against ground surveys. The proposed approach allowed the years to be compared and anomalies to be detected. The methods proposed here can be applied to different crops in various contexts and confirm the potential of remote sensing acquired at fine resolution, such as the Sentinel 2 system, for agricultural applications and environmental monitoring. This study was supported by the French national space agency (CNES) through the TOSCA program.
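LAI retrieval by inversion of a radiative transfer model is often implemented with a look-up table (LUT): simulate many candidate spectra, then pick the closest matches to each observed pixel. The sketch below shows that generic scheme; a real implementation would generate the table with PROSAIL (for example via the Python `prosail` package), whereas here the table, band count, and pixel spectrum are synthetic placeholders.

```python
import numpy as np

n_entries, n_bands = 5000, 10          # assumed number of LUT entries and spectral bands
rng = np.random.default_rng(1)
lut_lai = rng.uniform(0.0, 7.0, n_entries)       # candidate LAI values
lut_refl = rng.random((n_entries, n_bands))      # placeholder simulated reflectance spectra

def invert_lai(observed_refl, lut_refl, lut_lai, n_best=50):
    """Return the LAI estimate as the mean over the n_best closest LUT spectra (RMSE metric)."""
    rmse = np.sqrt(((lut_refl - observed_refl) ** 2).mean(axis=1))
    best = np.argsort(rmse)[:n_best]
    return lut_lai[best].mean()

pixel_spectrum = rng.random(n_bands)    # one observed pixel (placeholder reflectances)
print(f"Retrieved LAI ≈ {invert_lai(pixel_spectrum, lut_refl, lut_lai):.2f}")
```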

Keywords: agricultural practices, remote sensing, rice, yield

Procedia PDF Downloads 274
146 Computational Approaches to Study Lineage Plasticity in Human Pancreatic Ductal Adenocarcinoma

Authors: Almudena Espin Perez, Tyler Risom, Carl Pelz, Isabel English, Robert M. Angelo, Rosalie Sears, Andrew J. Gentles

Abstract:

Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest malignancies. The role of the tumor microenvironment (TME) is gaining significant attention in cancer research. Despite ongoing efforts, the nature of the interactions between tumors, immune cells, and stromal cells remains poorly understood. The cell-intrinsic properties that govern cell lineage plasticity in PDAC and the extrinsic influences of immune populations require technically challenging approaches due to the inherently heterogeneous nature of PDAC. Understanding the cell lineage plasticity of PDAC will improve the development of novel strategies that could be translated to the clinic. Members of the team have demonstrated that the acquisition of ductal-to-neuroendocrine lineage plasticity in PDAC confers therapeutic resistance and is a biomarker of poor outcomes in patients. Our approach combines computational methods for deconvolving bulk transcriptomic cancer data using CIBERSORTx and high-throughput single-cell imaging using Multiplexed Ion Beam Imaging (MIBI) to study lineage plasticity in PDAC and its relationship to the infiltrating immune system. The CIBERSORTx algorithm uses signature matrices derived from immune cells and stroma in sorted and single-cell data in order to 1) infer the fractions of different immune cell types and stromal cells in bulk gene expression data and 2) impute a representative transcriptome profile for each cell type. We studied a unique set of 300 genomically well-characterized primary PDAC samples with rich clinical annotation. We deconvolved the PDAC transcriptome profiles using CIBERSORTx, leveraging publicly available single-cell RNA-seq data from normal pancreatic tissue and PDAC to estimate cell type proportions in PDAC, and digitally reconstructed cell-specific transcriptional profiles from our study dataset. We built signature matrices and optimized them by simulations and comparison to ground truth data. We identified cell-type-specific transcriptional programs that contribute to cancer cell lineage plasticity, especially in the ductal compartment. We also studied cell differentiation hierarchies using CytoTRACE and predicted cell lineage trajectories for acinar and ductal cells that we believe pinpoint relevant information on PDAC progression. Our collaborators (Angelo lab, Stanford University) have led the development of the Multiplexed Ion Beam Imaging (MIBI) platform for spatial proteomics. In the very near future, we will use MIBI data from a tissue microarray of 40 PDAC samples to understand the spatial relationship between cancer cell lineage plasticity and stromal cells, focusing on infiltrating immune cells and using the relevant markers of PDAC plasticity identified from the RNA-seq analysis.
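The core idea of signature-based deconvolution is to explain a bulk expression profile as a non-negative mixture of cell-type signatures. The sketch below illustrates that idea with non-negative least squares; note that CIBERSORTx itself uses nu-support vector regression rather than NNLS, and all matrices here are synthetic placeholders, not data from the study.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes, n_cell_types = 500, 6
S = rng.gamma(2.0, 1.0, size=(n_genes, n_cell_types))    # placeholder signature matrix

true_fractions = rng.dirichlet(np.ones(n_cell_types))    # ground-truth mixture for the demo
bulk = S @ true_fractions + rng.normal(0, 0.05, n_genes) # simulated bulk expression profile

# Estimate fractions f >= 0 such that bulk ~ S @ f, then normalise to proportions
fractions, _ = nnls(S, bulk)
fractions /= fractions.sum()

print("true     :", true_fractions.round(3))
print("estimated:", fractions.round(3))
```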

Keywords: deconvolution, imaging, microenvironment, PDAC

Procedia PDF Downloads 128
145 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but a number of areas remain where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake data input for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to certify that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results, due to the misinterpretation of the required input data and an incorrect synthesis of the analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial structure’s geometry due to the residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario would be more complicated if SSI were also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes, which neglect the importance of PEF and SSI, can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damage have been taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected in the case where SSI is included in the scenario. The results are validated using the literature.

Keywords: Abaqus Software, Finite Element Analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 121
144 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making, but most of them are linear, and linear models reach their limitations when the data are non-linear, so accurate estimation becomes difficult. Artificial Neural Networks (ANNs) have found extensive acceptance for modelling the complex, non-linear real world, as they offer more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of crop response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time estimation of wheat chlorophyll. Cloud-free LANDSAT 8 scenes were acquired (February-March, 2016-17) at the same time as a ground-truthing campaign in which chlorophyll was measured with a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v.2014) software for chlorophyll determination: the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modelling, MATLAB and SPSS (ANN) tools were used, and the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. Of the data, 61.7% were used for training the MLP, 28.3% for validation, and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were the most sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90 respectively. The results suggest that retrieving crop chlorophyll content from high spatial resolution satellite imagery with an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
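
As an illustration of the modelling chain described above, the sketch below derives two of the indices (NDVI and GNDVI) from band reflectances and fits a small MLP against SPAD-type readings. The study used MATLAB and SPSS; this sketch uses scikit-learn purely for illustration, and the reflectance and SPAD values are synthetic placeholders, not the study's data.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    def ndvi(nir, red):
        return (nir - red) / (nir + red)

    def gndvi(nir, green):
        return (nir - green) / (nir + green)

    # Hypothetical per-plot mean reflectances (Landsat 8: band 3 = green,
    # band 4 = red, band 5 = NIR) and field SPAD-502 readings.
    rng = np.random.default_rng(0)
    green = rng.uniform(0.04, 0.10, 200)
    red = rng.uniform(0.03, 0.12, 200)
    nir = rng.uniform(0.25, 0.45, 200)
    X = np.column_stack([ndvi(nir, red), gndvi(nir, green)])
    spad = 20 + 60 * X[:, 0] + rng.normal(0, 2, 200)   # synthetic target values

    # Roughly the 60/30/10-style split described above (train/validate/test).
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, spad, test_size=0.4, random_state=1)
    X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=1)

    mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
    mlp.fit(X_tr, y_tr)
    print("validation R^2:", round(mlp.score(X_val, y_val), 3))
    print("test R^2      :", round(mlp.score(X_te, y_te), 3))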

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 146
143 Data Science/Artificial Intelligence: A Possible Panacea for Refugee Crisis

Authors: Avi Shrivastava

Abstract:

In 2021, two heart-wrenching scenes, shown live on television screens across countries, painted a grim picture of refugees. One was of people clinging to an airplane's wings in a desperate attempt to flee war-torn Afghanistan; they ultimately fell to their deaths. The other was of U.S. government authorities separating children from their parents or guardians to deter migrants and refugees from coming to the U.S. These events show the desperation refugees feel when they are trying to leave their homes in disaster zones. The data, however, paint a grave picture of the current refugee situation and indicate that a bleak future lies ahead for refugees across the globe. Data and information are the two threads that intertwine to weave the shimmery fabric of modern society. They are often used interchangeably, but they differ considerably: information analysis reveals rationale and logic, while data analysis reveals patterns. Moreover, the patterns revealed by data can enable us to create the tools needed to combat the huge problems at hand. Data analysis paints a clear picture so that the decision-making process becomes simpler. Geopolitical and economic data can be used to predict future refugee hotspots, and accurately predicting the next refugee hotspots will allow governments and relief agencies to prepare better for future refugee crises. The refugee crisis does not have binary answers. Given the emotionally wrenching nature of the ground realities, experts often shy away from stating things as they realistically are, and this hesitancy can cost lives. When decisions are based solely on data, emotions can be removed from the decision-making process. Data also presents irrefutable evidence and tells us whether or not a solution exists; it can thus respond to a nonbinary crisis with a binary answer, which makes the problem easier to tackle. Data science and A.I. can predict future refugee crises. With the recent explosion of data due to the rise of social media platforms, data and the insights drawn from it have helped solve many social and political problems. Data science can also help solve many issues refugees face while staying in refugee camps or in host countries. This paper looks into various ways data science can help solve refugee problems. A.I.-based chatbots can help refugees seek legal assistance in finding asylum in the country where they want to settle, and can connect them to a marketplace of people willing to help. Data science and technology can also help address refugees' many other needs, including food, shelter, employment, security, and assimilation. The refugee problem is among the most challenging for social and political reasons. Data science and machine learning can help prevent refugee crises and solve or alleviate some of the problems that refugees face on their journey to a better life. With the explosion of data in the last decade, data science has made it possible to address many geopolitical and social issues.

Keywords: refugee crisis, artificial intelligence, data science, refugee camps, Afghanistan, Ukraine

Procedia PDF Downloads 72
142 The Lived Experiences and Coping Strategies of Women with Attention Deficit and Hyperactivity Disorder (ADHD)

Authors: Oli Sophie Meredith, Jacquelyn Osborne, Sarah Verdon, Jane Frawley

Abstract:

PROJECT OVERVIEW AND BACKGROUND: Over one million Australians are affected by ADHD, at an economic and social cost of over $20 billion per annum. Despite having significantly worse health outcomes than men, women have historically been overlooked in ADHD diagnosis and treatment. While research suggests that physical activity and other non-prescription options can help with ADHD symptoms, the frontline response to ADHD remains expensive stimulant medications that can have adverse side effects. By interviewing women with ADHD, this research will examine women's self-directed approaches to managing symptoms, including alternatives to prescription medications. It will investigate barriers and affordances to potentially helpful approaches and identify any concerning strategies pursued in lieu of diagnosis. SIGNIFICANCE AND INNOVATION: Despite the economic and societal impact of ADHD on women, research investigating how women manage their symptoms is scant. This project is significant because, although women's ADHD symptoms are markedly different from those of men, mainstream treatment has been based on the experiences of men. Further, it is thought that in developing nuanced coping strategies, women may have masked their symptoms. Thus, this project will highlight strategies which women deem effective in 'thriving' rather than just 'hiding'. By investigating the health service use, self-care and physical activity of women with ADHD, this research aligns with priority research areas identified by the November 2023 Senate ADHD inquiry report. APPROACH AND METHODS: Semi-structured interviews will be conducted with up to 20 women with ADHD. Interviews will be conducted in person and online to capture experiences across rural and metropolitan Australia. Participants will be recruited in partnership with the peak representative body, ADHD Australia. The research will use an intersectional framework, and data will be analysed thematically. The project is led by an interdisciplinary and cross-institutional team of women with ADHD. Reflexive interviewing skills will be employed to help interviewees feel more comfortable disclosing their experiences, especially where they share common ground. ENGAGEMENT, IMPACT AND BENEFIT: This research will benefit women with ADHD by increasing knowledge of strategies and alternative treatments to prescription medications, reducing the social and economic burden of ADHD on Australia and on individuals. It will also benefit women by identifying risks involved with some self-directed approaches pursued in lieu of medical advice. The project has an accessible impact plan to directly benefit end-users, which includes the development of a podcast and a PDF resource translating the findings. These resources will reach a wide audience through ADHD Australia's extensive national networks. We will also collaborate with Charles Sturt University's Accessibility and Inclusion Division of Safety, Security and Well-being to create a targeted resource for students with ADHD.

Keywords: ADHD, women's health, self-directed strategies, health service use, physical activity, public health

Procedia PDF Downloads 72
141 In-Depth Investigations on the Sequences of Accidents of Powered Two Wheelers Based on Police Crash Reports of Medan, North Sumatera Province Indonesia, Using Decision Aiding Processes

Authors: Bangun F., Crevits B., Bellet T., Banet A., Boy G. A., Katili I.

Abstract:

This paper seeks the incoherencies in the cognitive process during Powered Two Wheeler (PTW) accidents by reconstructing the factual sequences of events and causal relations for each case. The principle of the approach is to undertake case-by-case in-depth investigations of PTW accidents based on elaborate data acquisition at accident sites, as officially recorded in the 2012 Police Crash Reports (PCRs) of Medan, with the criteria that each case involved at least one PTW and resulted in serious injury or fatalities. The analysis takes into account four modules: accident chronologies; perpetrators and victims; injury surveillance; and vehicles and road infrastructure, comprising traffic facilities, road geometry, road alignments and weather. The proposed improvements could have a favourable influence on the chain of functional processes and events leading to a collision. Decision Aiding Processes (DAP) assist in structuring different entities at different decisional levels, as each of these entities has its own objectives and constraints. The entities (A) are classified into six groups of accidents: solo PTW accidents; PTW vs. PTW; PTW vs. pedestrian; PTW vs. motor-trishaw; PTW vs. other vehicles; and consecutive crashes. The entities are also distinguished into four decisional levels: the level of road users and street systems; the operational level (crash-attending police officers, or CAPO, and road engineers); the tactical level (Regional Traffic Police, Department of Transportation, and Department of Public Work); and the strategic level (Traffic Police Headquarters (TCPHI), parliament, Ministry of Transportation and Ministry of Public Work). These classifications lead to the conceptualization of Problem Situations (P) and Problem Formulations (I) in the DAP context. The DAP concerns the sequence of incidents up to the moment the accident occurs, which can be modelled in terms of five activities of procedural rationality: identification of initial human features (IHF); investigation of proponent attributes (PrAT); investigation of Injury Surveillance (IS); investigation of the interaction between IHF, PrAT and IS (intercorrelation); and, finally, unravelling of the sequences of incidents through filtering and disclosure, which establishes what needs to be activated, modified, changed or removed, what is new and what is a priority. These outcomes can relate to the activation, modification or new establishment of law. PrAT encompasses problems of the environment, road infrastructure, road and traffic facilities, and road geometry. The evaluation model (MP) is generated to bridge P and I, since MP is produced by the intercorrelations among IHF, PrAT and IS extracted from the 2012 PCRs of Medan. There are seven findings of incoherence: lack of knowledge and awareness of traffic regulations and accident risks, especially when riding within 0 < x < 10 km of home or between 22.00 and 05.30; lack of engagement in the procurement of IHF data by CAPO; lack of competency of CAPO in data procurement at accident sites; no intercorrelation among IHF, PrAT and IS in the PCR database systems; lack of maintenance and supervision of the availability and capacity of traffic facilities and road infrastructure; instrumental bias with wash-back impacts on the TCPHI; and technical robustness issues with wash-back impacts on the CAPO and TCPHI.
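
As a small illustration of the entity grouping used in the DAP framework above, the sketch below sorts simplified crash records into the six accident groups; the record fields and grouping rules are illustrative assumptions for the sketch, not the study's coding scheme.

    from collections import Counter
    from dataclasses import dataclass

    # The six entity groups (A) named in the abstract.
    GROUPS = ["solo PTW", "PTW vs PTW", "PTW vs pedestrian",
              "PTW vs motor-trishaw", "PTW vs other vehicles", "consecutive crashes"]

    @dataclass
    class CrashRecord:            # hypothetical, simplified PCR fields
        ptw_count: int
        other_party: str          # "none", "ptw", "pedestrian", "motor-trishaw", "vehicle"
        consecutive: bool

    def classify(rec: CrashRecord) -> str:
        """Assign a record to one of the six accident groups (assumed rules)."""
        if rec.consecutive:
            return "consecutive crashes"
        if rec.other_party == "none":
            return "solo PTW"
        if rec.other_party == "ptw":
            return "PTW vs PTW"
        if rec.other_party == "pedestrian":
            return "PTW vs pedestrian"
        if rec.other_party == "motor-trishaw":
            return "PTW vs motor-trishaw"
        return "PTW vs other vehicles"

    records = [CrashRecord(1, "vehicle", False), CrashRecord(1, "none", False),
               CrashRecord(2, "ptw", False), CrashRecord(1, "vehicle", True)]
    print(Counter(classify(r) for r in records))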

Keywords: decision aiding processes, evaluation model, PTW accidents, police crash reports

Procedia PDF Downloads 158
140 Nuancing the Indentured Migration in Amitav Ghosh's Sea of Poppies

Authors: Murari Prasad

Abstract:

This paper is motivated by the implications of indentured migration depicted in Amitav Ghosh's critically acclaimed novel, Sea of Poppies (2008). Ghosh's perspective on the experiences of North Indian indentured labourers moving from their homeland to a distant and unknown location across the seas suggests a radical attitudinal change among the migrants on board the Ibis, a schooner chartered to carry the recruits from Calcutta to Mauritius in the late 1830s. The novel unfolds the life-altering trauma of the bonded servants, including their efforts to maintain a sense of self while negotiating significant social and cultural transformations during the voyage, which leads to the breakdown of familiar life-worlds. Equally, the migrants are introduced to an alternative network of relationships to ensure their survival away from land. They relinquish their entrenched beliefs and prejudices and commit themselves to a new brotherhood formed by 'ship siblings.' With the official abolition of direct slavery in 1833, the supply of cheap labour to sugar plantations in British colonies as far-flung as Mauritius, Fiji, East Africa and the Caribbean sharply declined. Around the same time, China's attempt to prohibit the illegal importation of opium from British India threatened the lucrative opium trade. To keep the ever-profitable plantation colonies running on cheap labour, the colonial government indentured Indian peasants, wrenched from their village economies, to the plantations as girmitiyas (vernacularized from 'agreement'), using the ploy of a nominally voluntary form of recruitment. After the British conquest of the Isle of France in 1810, Mauritius became Britain's premier sugar colony, bringing waves of Indian immigrants to the island. In the articulations of their subjectivities, one notices how the recruits cope with the alienating drudgery of indenture, mitigate the hardships of the voyage and forge new ties through pragmatic acts of cultural syncretism in a forward-looking, autonomous community of 'ship siblings' following the fracture of traditional identities. This paper tests the hypothesis that Ghosh envisions a kind of futuristic, utopian political collectivity in a hierarchically rigid, racially segregated and identity-obsessed world. In order to ground this claim and frame the complex representations of alliance and love across the boundaries of caste, religion, gender and nation, the essential methodology is a close textual analysis of the novel, geared to explicating the utopian futurity that the novel gestures towards by underlining the new regulations of life during the voyage and the dissolution of multiple differences among the indentured migrants on board the Ibis.

Keywords: indenture, colonial, opium, sugar plantation

Procedia PDF Downloads 398
139 Stability of a Natural Weak Rock Slope under Rapid Water Drawdowns: Interaction between Guadalfeo Viaduct and Rules Reservoir, Granada, Spain

Authors: Sonia Bautista Carrascosa, Carlos Renedo Sanchez

Abstract:

The effect of a rapid drawdown is a classical scenario to be considered in slope stability under submerged conditions. This situation arises when totally or partially submerged slopes experience a descent of the external water level, and it is a typical verification in dam engineering, as reservoir water levels commonly fluctuate noticeably between seasons and for operational reasons. Although the scenario is well known and generally predictable, site conditions can increase the complexity of its assessment, and external factors, which are not always anticipated, can reduce the stability of a slope under a rapid drawdown or even cause its failure. The present paper describes and discusses the interaction between two different infrastructures, a dam and a highway, and the impact of the rapid drawdown of the Rules Dam, in the province of Granada (south of Spain), on the stability of a natural rock slope overlain by the north abutment of a viaduct of the A-44 Highway. In 2011, with both infrastructures, the A-44 Highway and the Rules Dam, already constructed, delivered and in operation, movements began to be recorded in the approach embankment and north abutment of the Guadalfeo Viaduct, which is part of the highway and was built to cross the tail of the reservoir. The embankment and abutment were founded on a low-angle natural rock slope formed by grey graphitic phyllites, distinctly weathered and intensely fractured, with pre-existing fault and weak planes. After the first filling of the reservoir, to a relative level of 243 m, three consecutive drawdowns were recorded in the autumns of 2010, 2011 and 2012, to relative levels of 234 m, 232 m and 225 m. To understand the effect of these drawdowns on the strength of the weak rock mass and on its stability, a new geological model was developed after reviewing all the available ground investigations and updating the geological mapping of the area, supplemented by an additional geotechnical and geophysical investigation survey. Together with all this information, rainfall and reservoir level records were reviewed in detail and incorporated into the monitoring interpretation. The analysis of the monitoring data and the new geological and geotechnical interpretation, supported by the limit equilibrium software Slide2, concludes that the movement follows the direction of the schistosity of the phyllitic rock mass, coincident with the direction of the natural slope, indicating a deep-seated movement of the whole slope towards the reservoir. As part of these conclusions, the solutions considered to reinstate the highway infrastructure to the required factor of safety (FoS) will be described, and the geomechanical characterization of these weak rocks will be discussed, together with the influence of water level variations not only on the water pressure regime but also on the geotechnical behaviour of the rock mass, through the modification of its strength and deformability parameters.
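
A deliberately simplified limit-equilibrium sketch of the mechanism discussed above is given below (the study itself used Slide2): a block sliding on a single plane parallel to the schistosity, with the factor of safety recomputed for higher average pore pressures on the plane, standing in for the lag between the reservoir drawdown and pore pressure dissipation in the slope. All geometry and strength values are assumptions.

    import numpy as np

    def planar_fos(c_kPa, phi_deg, plane_len_m, block_weight_kN, dip_deg, u_avg_kPa):
        """Limit-equilibrium factor of safety for sliding on a single plane
        (per metre width): FoS = (c'*A + (W*cos(psi) - U)*tan(phi')) / (W*sin(psi))."""
        psi = np.radians(dip_deg)
        phi = np.radians(phi_deg)
        U = u_avg_kPa * plane_len_m               # water force on the plane (kN/m)
        resisting = c_kPa * plane_len_m + (block_weight_kN * np.cos(psi) - U) * np.tan(phi)
        driving = block_weight_kN * np.sin(psi)
        return resisting / driving

    # Hypothetical weak-phyllite block sliding along the schistosity (values assumed).
    for u in (0, 40, 80):   # average pore pressure on the plane after each drawdown, kPa
        fos = planar_fos(c_kPa=15, phi_deg=24, plane_len_m=30,
                         block_weight_kN=5200, dip_deg=18, u_avg_kPa=u)
        print(f"u = {u:3d} kPa  ->  FoS = {fos:.2f}")

With these assumed values the factor of safety falls from about 1.6 towards 1.0 as the retained pore pressure on the plane rises, which is the qualitative effect a rapid drawdown has on a slope that drains slowly.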

Keywords: monitoring, rock slope stability, water drawdown, weak rock

Procedia PDF Downloads 160
138 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals often cannot reach indoor environments, as GNSS signals are not powerful enough to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
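
To make the fingerprinting step concrete, the sketch below builds a small synthetic WLAN/LTE RSS database, adds jittered copies as a stand-in for GAN-generated fingerprints, and localizes a query with weighted k-nearest neighbours. It is an illustration of the general technique only, not the implementation proposed above, and the floor layout and signal model are assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical survey: 300 reference points on a 50 m x 30 m floor,
    # each with a 12-dimensional hybrid WLAN/LTE RSS fingerprint (dBm).
    ref_xy = rng.uniform([0, 0], [50, 30], size=(300, 2))
    ap_xy = rng.uniform([0, 0], [50, 30], size=(12, 2))

    def rss(points):
        """Log-distance path-loss model plus noise (illustrative only)."""
        d = np.linalg.norm(points[:, None, :] - ap_xy[None, :, :], axis=2) + 1.0
        return -30 - 30 * np.log10(d) + rng.normal(0, 2, (len(points), 12))

    ref_db = rss(ref_xy)

    # Placeholder for GAN augmentation: jittered copies of real fingerprints
    # stand in for synthetic samples that a trained generator would produce.
    aug_db = ref_db + rng.normal(0, 1, ref_db.shape)
    db = np.vstack([ref_db, aug_db])
    db_xy = np.vstack([ref_xy, ref_xy])

    def locate(query, k=5):
        """Weighted k-NN in signal space: closer fingerprints get larger weights."""
        dist = np.linalg.norm(db - query, axis=1)
        idx = np.argsort(dist)[:k]
        w = 1.0 / (dist[idx] + 1e-6)
        return (db_xy[idx] * w[:, None]).sum(axis=0) / w.sum()

    true_xy = np.array([[12.0, 7.5]])
    est = locate(rss(true_xy)[0])
    print("estimated position:", est, " error (m):", np.linalg.norm(est - true_xy[0]))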

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 42
137 TRAC: A Software Based New Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is interesting to develop another software-based track circuit system that would suit secondary railway lines, with a straightforward implementation and a low sensitivity to rail-wheel impedance variations. We called this track circuit 'Track Railway by Automatic Circuits.' To be implemented internationally, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the axle counters used in Germany ('Counting Axles', in French 'compteur d’essieux'). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies related to the track sections is a set of orthogonal functions in a Hilbert space. Thus the failure probability of track section separation can be precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails. This is why we developed a very powerful algorithm to reject noise and jamming, in order to obtain an SNR compatible with the precision required for the track circuit and with SIL 4. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train localization in space precisely defined by a calibration system; the operation bypasses the GSM-R radio system of the ERTMS, the track circuit is naturally protected against radio-type jammers, and after the calibration operation the track circuit is autonomous; ii) a mathematical topology adapted to train localization in space, following the train through linear time filtering of the received signal; track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results exceeded our expectations: we achieved a precision of one meter, and the rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit already corresponds to Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. Traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
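
The core detection idea described above can be sketched as follows: each track section is assigned a tone from a mutually orthogonal set, and occupancy is declared for the tones whose matched-filter output exceeds an SNR threshold. The sample rate, tone frequencies, noise level and threshold below are illustrative assumptions, not the TRAC design values.

    import numpy as np

    fs = 8000.0                      # sample rate (Hz), assumed
    T = 0.5                          # integration window (s), assumed
    t = np.arange(0, T, 1 / fs)

    # Tones at integer multiples of 1/T are mutually orthogonal over the window;
    # one tone per track section (frequencies assumed for the sketch).
    section_freqs = 600 + np.arange(6) * 40.0       # 600, 640, ..., 800 Hz
    basis = np.array([np.sin(2 * np.pi * f * t) for f in section_freqs])

    def detect(signal, snr_threshold_db=12.0):
        """Correlate against each section tone and report sections above threshold."""
        energy = basis @ signal / len(t)             # matched-filter outputs
        noise = np.median(np.abs(energy)) + 1e-12    # crude noise-floor estimate
        snr_db = 20 * np.log10(np.abs(energy) / noise)
        return [(f, round(s, 1)) for f, s in zip(section_freqs, snr_db) if s > snr_threshold_db]

    # Simulated rail signal: section 2's tone buried in traction-current noise.
    rng = np.random.default_rng(3)
    signal = 0.8 * np.sin(2 * np.pi * section_freqs[2] * t) + rng.normal(0, 1.0, t.size)
    print(detect(signal))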

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 331
136 Gauging Floral Resources for Pollinators Using High Resolution Drone Imagery

Authors: Nicholas Anderson, Steven Petersen, Tom Bates, Val Anderson

Abstract:

Under the multiple-use management regime established in the United States for federally owned lands, government agencies have come under pressure from commercial apiaries to grant permits for the summer pasturing of honeybees on government lands. Federal agencies have struggled to integrate honeybees into their management plans and have little information on which to base regulations that determine how many colonies should be allowed in a single location and at what distance sets of hives should be placed. Many conservation groups have voiced their concerns regarding the introduction of honeybees to these natural lands, as they may outcompete and displace native pollinating species. Assessing the quality of an area in terms of its floral resources, pollen, and nectar can be important when attempting to create regulations for the integration of commercial honeybee operations into a native ecosystem. Areas with greater floral resources may be able to support larger numbers of honeybee colonies, while poorer resource areas may be less resilient to introduced disturbances. This study attempts to determine flower cover using high-resolution drone imagery to help assess the floral resources available to pollinators in high-elevation tall forb communities. This knowledge will help in determining the potential that different areas may have for honeybee pasturing and honey production. Roughly 700 images were captured at 23 m above ground level using a drone equipped with a Sony QX1 RGB 20-megapixel camera. These images were stitched together using Pix4D, resulting in a 60 m diameter high-resolution mosaic of a tall forb meadow. Using the program ENVI, a supervised maximum likelihood classification was conducted to calculate the percentage of total flower cover and the flower cover by color (blue, white, and yellow). A complete vegetation inventory was taken on site, and the major flowers contributing to each color class were noted. An accuracy assessment performed on the classification yielded an overall accuracy of 89% and a Kappa statistic of 0.855. With this level of accuracy, drones provide an affordable and time-efficient method for assessing floral cover over large areas. The next step of this project will be to determine the average pollen and nectar loads carried by each flower species. The addition of this knowledge will yield a quantifiable method for measuring the pollen and nectar resources of entire landscapes. This information will not only help land managers determine stocking rates for honeybees on public lands but also has applications in agriculture, aiding producers in determining the number of honeybee colonies necessary for proper pollination of fruit and nut crops.
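
A sketch of the classification step is given below. With equal priors, quadratic discriminant analysis reduces to the classical Gaussian maximum likelihood classifier, so it is used here as a stand-in for the ENVI step to estimate percent cover per color class; the training pixels and the 'mosaic' are synthetic placeholders rather than the study's imagery.

    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    classes = ["background", "blue flower", "white flower", "yellow flower"]
    rng = np.random.default_rng(7)

    # Hypothetical labelled training pixels (RGB, 0-255) per class.
    means = np.array([[70, 90, 50], [60, 80, 160], [220, 220, 215], [230, 200, 60]], float)
    X_train = np.vstack([m + rng.normal(0, 12, (400, 3)) for m in means])
    y_train = np.repeat(np.arange(4), 400)

    # Equal priors make QDA equivalent to Gaussian maximum likelihood classification.
    mlc = QuadraticDiscriminantAnalysis(priors=[0.25, 0.25, 0.25, 0.25])
    mlc.fit(X_train, y_train)

    # "Mosaic": a synthetic stand-in for the stitched drone image (H x W x 3).
    idx = rng.integers(0, 4, size=(200, 300))
    mosaic = means[idx] + rng.normal(0, 12, (200, 300, 3))
    labels = mlc.predict(mosaic.reshape(-1, 3)).reshape(200, 300)

    for k, name in enumerate(classes):
        print(f"{name:14s} {100 * np.mean(labels == k):5.1f} % of pixels")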

Keywords: honeybee, flower, pollinator, remote sensing

Procedia PDF Downloads 140
135 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals often cannot reach indoor environments, as GNSS signals are not powerful enough to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
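
The role of the t-SNE step mentioned above can be sketched as follows: embedding hybrid fingerprints into two dimensions to check how cleanly reference locations separate before the radio map is built. The data below are synthetic placeholders and the sketch is illustrative only, not the proposed pipeline.

    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)

    # Hypothetical fingerprints: 4 rooms x 50 reference points,
    # each with 20 hybrid WLAN/LTE RSS values (dBm).
    room_centres = rng.uniform(-70, -40, size=(4, 20))
    X = np.vstack([c + rng.normal(0, 3, (50, 20)) for c in room_centres])
    rooms = np.repeat(np.arange(4), 50)

    # 2-D embedding: well-separated clusters suggest the fingerprints are informative
    # enough for radio-map construction; heavy overlap suggests more survey points
    # (or synthetic samples from a generative model) are needed.
    emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

    for r in range(4):
        centre = emb[rooms == r].mean(axis=0)
        print(f"room {r}: embedding centroid = ({centre[0]:6.1f}, {centre[1]:6.1f})")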

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 47
134 Assessment of Heavy Metals Contamination Levels in Groundwater: A Case Study of the Bafia Agricultural Area, Centre Region Cameroon

Authors: Carine Enow-Ayor Tarkang, Victorine Neh Akenji, Dmitri Rouwet, Jodephine Njdma, Andrew Ako Ako, Franco Tassi, Jules Remy Ngoupayou Ndam

Abstract:

Groundwater is the major water resource in the whole of Bafia, used for drinking, domestic, poultry and agricultural purposes, and since this is an area of intense agriculture, there is a great need for a quality assessment. Bafia is one of the main food suppliers of the Centre region of Cameroon, and to meet demand, farmers make use of fertilizers and other agrochemicals to increase their yields. Less than 20% of the population of Bafia has access to piped water owing to the national shortage, yet, to the best of the authors' knowledge, very few studies have been carried out in the area to increase awareness of its groundwater resources. The aim of this study was to assess heavy metal contamination levels in ground and surface waters and to evaluate the effects of agricultural inputs on water quality in the Bafia area. Fifty-seven water samples (31 wells, 20 boreholes, 4 rivers and 2 springs) were analyzed for their physicochemical parameters; the collected samples were filtered, acidified with HNO3 and analyzed by ICP-MS for their heavy metal content (Fe, Ti, Sr, Al, Mn). Results showed that most of the water samples are acidic to slightly neutral and moderately mineralized. The Ti concentration was notably high in the area (mean value 130 µg/L), suggesting a Ti source in addition to the natural input from titanium oxides. The high amounts of Mn and Al in some cases also pointed to additional input, probably from the fertilizers used on the farmland. Most of the water samples were found to be significantly contaminated with heavy metals exceeding the WHO allowable limits (Ti: 94.7%, Al: 19.3%, Mn: 14%, Fe: 5.2% and Sr: 3.5% of samples above the limits), especially around farmland and topographically low areas. The heavy metal concentrations were evaluated using the heavy metal pollution index (HPI), the heavy metal evaluation index (HEI) and the degree of contamination (Cd), while the Ficklin diagram was used to classify the waters according to their metal content and pH. The high mean values of HPI and Cd (741 and 5, respectively), which exceeded the critical limit, indicate that the water samples are highly contaminated, with intense pollution from Ti, Al and Mn. Based on the HPI and Cd, 93% and 35% of the samples, respectively, are unacceptable for drinking purposes. The point with the lowest HPI value also had the lowest EC (50 µS/cm), indicating lower mineralization and less anthropogenic influence. According to the Ficklin diagram, 89% of the samples fell within the near-neutral low-metal domain, while 9% fell in the near-neutral extreme-metal domain. Two significant factors were extracted from the PCA, explaining 70.6% of the total variance: the first factor revealed intense anthropogenic activity (especially from fertilizers), while the second factor revealed water-rock interactions. Agricultural activities thus have an impact on the heavy metal content of groundwater in the area; hence, much attention should be given to the affected areas in order to protect human health and life and to manage this precious resource sustainably.
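
The two indices named above can be computed as in the sketch below, which follows common formulations from the literature (unit weights Wi = 1/Si for the HPI, and Cd as the sum of contamination factors Ci/Si - 1). The permissible limits and the sample values are illustrative assumptions, not the study's data or necessarily the exact variants of the indices used.

    # Illustrative permissible limits (micrograms per litre); assumed for the sketch.
    limits = {"Fe": 300.0, "Mn": 100.0, "Al": 200.0, "Ti": 15.0, "Sr": 4000.0}

    def hpi(sample):
        """Heavy metal pollution index, one common formulation:
        HPI = sum(Wi * Qi) / sum(Wi), with Wi = 1/Si and Qi = 100 * Ci / Si."""
        num = den = 0.0
        for metal, si in limits.items():
            wi = 1.0 / si
            qi = 100.0 * sample[metal] / si
            num += wi * qi
            den += wi
        return num / den

    def degree_of_contamination(sample):
        """Cd = sum over metals of (Ci / Si - 1); negative terms are included here."""
        return sum(sample[m] / s - 1.0 for m, s in limits.items())

    well = {"Fe": 120.0, "Mn": 150.0, "Al": 260.0, "Ti": 130.0, "Sr": 900.0}  # hypothetical
    print("HPI :", round(hpi(well), 1))
    print("Cd  :", round(degree_of_contamination(well), 2))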

Keywords: Bafia, contamination, degree of contamination, groundwater, heavy metal pollution index

Procedia PDF Downloads 86
133 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories are inflated with compressed nitrogen to a recommended pressure; they support the aircraft’s weight on the ground, provide a means of controlling the aircraft during taxi, takeoff and landing, and provide traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, when the temperature differs between the origin and destination airports, the tire pressure should be adjusted so that the tire is inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal over-inflation limit of 5 percent defined at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of a specified aircraft configuration; without it, a tire assembly would be significantly under- or over-inflated at the destination. Owing to the increase of human errors in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. An intelligent system that adjusts the aircraft tire pressure based on weight, load, temperature, and the weather conditions at the origin and destination airports could therefore significantly reduce aircraft maintenance costs and fuel consumption and further mitigate the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) comprises a processing computer, a nitrogen bottle at 1,800 psi, and distribution lines. The nitrogen bottle’s inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Nitrogen control and monitoring are performed by the computer, which adjusts the pressure according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, the fuel quantity, and the wind direction. Correct tire inflation and deflation are essential to ensure that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, material consumption, and the stresses imposed on the aircraft body.
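
The temperature correction at the heart of such an adjustment can be sketched with the constant-volume gas law: the absolute tire pressure scales with absolute temperature, so the nitrogen to add or vent before departure follows from the origin and destination temperatures and the required operating pressure. The function names and numbers below are illustrative assumptions, not the ITPRS logic itself.

    def destination_pressure_psig(p_origin_psig, t_origin_C, t_dest_C, p_ambient_psi=14.7):
        """Gauge pressure the tire will show at the destination temperature if nothing is
        added or vented: constant-volume gas law on absolute pressure and temperature."""
        p_abs = p_origin_psig + p_ambient_psi
        p_abs_dest = p_abs * (t_dest_C + 273.15) / (t_origin_C + 273.15)
        return p_abs_dest - p_ambient_psi

    def nitrogen_adjustment(p_origin_psig, t_origin_C, t_dest_C, p_required_psig):
        """Positive result: psi of nitrogen to add before departure; negative: to vent."""
        return p_required_psig - destination_pressure_psig(p_origin_psig, t_origin_C, t_dest_C)

    # Hypothetical airliner main-wheel tire, 200 psig required, warm origin, cold destination.
    adj = nitrogen_adjustment(p_origin_psig=200.0, t_origin_C=35.0, t_dest_C=-5.0,
                              p_required_psig=200.0)
    print(f"inflate by {adj:.1f} psi at the origin airport")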

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 249
132 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral fuel that is renewable and sustainable and has a very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gases (GHG) and reduce dependency on fossil fuels. Biomass energy use also has other beneficial effects, such as employment creation and pollutant reduction. However, most biomass materials cannot compete with fossil fuels in terms of energy content: the high moisture content and high volatile matter yield of biomass make it a low-calorific fuel, which is a significant drawback compared with fossil fuels. Besides, the density of biomass is generally low, which makes transportation and storage difficult. These negative aspects of biomass can be overcome by thermal pretreatments that upgrade its fuel properties. Torrefaction is such a thermal process, in which biomass is heated up to 300ºC under non-oxidizing conditions to avoid burning of the material. The treated biomass is called biochar and has considerably lower contents of moisture, volatile matter, and oxygen than the parent biomass. Accordingly, the carbon content and the calorific value of biochar increase to a level comparable with that of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay in the structure, is mostly eliminated, and the surface properties of biochar turn hydrophobic upon torrefaction. In order to investigate the effectiveness of the torrefaction process on biomass properties, several biomass species, namely olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree), were chosen. The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, the samples were first chopped and ground to a particle size below 250 µm. Then, the samples were subjected to torrefaction in a horizontal tube furnace by heating from ambient temperature up to 200, 250, and 300ºC at a heating rate of 10ºC/min. The biochars obtained from this process were tested by the same methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing the torrefaction temperature led to regular increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300ºC. In contrast, torrefaction at 250ºC was found to be optimal for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on their HHV. An increase in carbon content and a reduction in oxygen content were also determined. The burning characteristics of the biochars were studied using thermal analysis. For this purpose, a TA Instruments SDT Q600 thermal analyzer was used, and the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass and that the resulting biochars have superior characteristics compared to the parent biomasses.
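
Two standard yardsticks for judging such a treatment, the mass yield and the energy yield (energy yield = mass yield × HHV of biochar / HHV of raw biomass), can be computed as in the sketch below. The 6065 kcal/kg biochar HHV is the value quoted above, while the raw-biomass HHV and the mass loss are assumed purely for illustration.

    def mass_yield(m_biochar_g, m_raw_g):
        """Fraction of the dry biomass mass retained after torrefaction."""
        return m_biochar_g / m_raw_g

    def energy_yield(m_biochar_g, m_raw_g, hhv_biochar, hhv_raw):
        """Energy yield = mass yield x (HHV of biochar / HHV of raw biomass)."""
        return mass_yield(m_biochar_g, m_raw_g) * hhv_biochar / hhv_raw

    # Olive milling residue torrefied at 300 degC: the biochar HHV of 6065 kcal/kg is
    # quoted in the abstract; raw-biomass HHV and mass loss are assumed values.
    my = mass_yield(m_biochar_g=62.0, m_raw_g=100.0)
    ey = energy_yield(m_biochar_g=62.0, m_raw_g=100.0, hhv_biochar=6065.0, hhv_raw=4400.0)
    print(f"mass yield   : {100 * my:.1f} %")
    print(f"energy yield : {100 * ey:.1f} %")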

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 373