Search results for: automatic mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2002

1432 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method is quite difficult, especially when the study area contains different types of landscapes. This paper compares the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia–West Asia Corridor, considered one of the main parts of the Belt and Road Initiative (BRI) project. The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map for the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings illustrate that, among the three algorithms, RF, with 91% overall accuracy, produced the best land cover map for the China-Central Asia–West Asia Corridor, whereas ANN showed the worst result, with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling a huge volume of remotely sensed data in the present study shows that it could also help researchers generate reliable long-term land cover change maps. These findings are of great importance for decision-makers and BRI authorities in strategic land use planning.
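As a hedged sketch of the comparison the abstract describes, the snippet below trains the same three classifier families (RF, SVM, ANN) with scikit-learn on synthetic six-band "spectral" samples and reports each one's overall accuracy. The data, band count, class structure, and hyperparameters are illustrative stand-ins, not the study's actual Landsat-8/GEE setup.

```python
# Minimal sketch: comparing RF, SVM, and ANN overall accuracy on
# synthetic "spectral band" samples (stand-ins for Landsat-8 pixels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Three pretend land-cover classes, each a cluster in 6-band feature space.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 6)) for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, clf in [
    ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("SVM", SVC(kernel="rbf")),
    ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
]:
    clf.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, clf.predict(X_te))  # overall accuracy
```

In the actual study the same comparison would run server-side through GEE's classifier API rather than locally.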

Keywords: land cover, Google Earth Engine, machine learning, remote sensing

Procedia PDF Downloads 113
1431 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa

Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam

Abstract:

Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects. Such aspects include ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has largely remained elusive due to the complexity of the species' structure and distribution. Therefore, the objective of the current study was to examine the utility of advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. It offers relatively accurate information that is important for forest managers making informed decisions regarding the management and conservation of TTS.
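The accuracy figures above follow a standard accounting: overall accuracy (OA) is the diagonal of the confusion matrix over the total, and "total disagreement" is its complement. A minimal sketch, with an illustrative confusion matrix rather than the study's actual results:

```python
# Minimal sketch: overall accuracy (OA) and total disagreement
# (disagreement = 100% - OA) from a classification confusion matrix.
def overall_accuracy(confusion):
    """Fraction of samples on the diagonal (correctly classified)."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# rows = reference class, columns = mapped class (three illustrative species)
cm = [
    [40, 5, 5],
    [4, 42, 4],
    [6, 3, 41],
]
oa = overall_accuracy(cm)          # 123 / 150 = 0.82
disagreement = 1.0 - oa            # 0.18, i.e. 18% total disagreement
```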

Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines

Procedia PDF Downloads 515
1430 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used; for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features, i.e., the statistical and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV together with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
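The core NMF idea, learn a per-genre basis, then describe a new clip by its activation weights against each basis, can be sketched as follows. The random "spectral frames," genre names, and component counts are illustrative stand-ins, not the paper's GTZAN setup:

```python
# Minimal sketch: per-genre NMF bases, then per-genre activation weights
# as a compact feature vector for a new clip.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
genres = ["rock", "jazz"]
models = {}
for i, g in enumerate(genres):
    # Non-negative training frames for this genre (frames x frequency bins);
    # the shift makes the two stand-in genres statistically different.
    V = rng.random((50, 20)) + i
    models[g] = NMF(n_components=4, init="random", random_state=0, max_iter=500).fit(V)

def nmf_feature(frame):
    """One weight per genre: total activation of that genre's basis."""
    return [float(m.transform(frame.reshape(1, -1)).sum()) for m in models.values()]

feat = nmf_feature(rng.random(20) + 1.0)  # feature vector for a new clip
```

In the paper this per-genre weight vector (dimensionality 10, one per GTZAN genre) is then fed to the SVM alongside the basic long-term features.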

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 303
1429 The Status of Precision Agricultural Technology Adoption on Row Crop Farms vs. Specialty Crop Farms

Authors: Shirin Ghatrehsamani

Abstract:

Higher efficiency and lower environmental impact are consequences of using advanced technology in farming. Such technology also helps decrease yield variability by diminishing the impact of weather variability, optimizing nutrient and pest management, and reducing competition from weeds. A better understanding of the pros and cons of applying technology, and of the main reasons preventing its utilization, has a significant impact on expanding technology adoption among farmers and producers in the digital agriculture era. The results from two surveys carried out in 2019 and 2021 were used to investigate whether crop type had an impact on the willingness to utilize technology on farms. The main focus of the questionnaire was the utilization of precision agriculture (PA) technologies among farmers in some parts of the United States. The collected data were analyzed to determine the practical application of various technologies. The survey results showed similarities between the two crop types in the main reasons for not using PA, but the present application of technology in specialty crops is generally five times larger than in row crops. GPS receiver applications were reported as similar for both types of crops. Lack of knowledge and the high cost of data handling were cited as the main problems. The most significant difference was in the use of variable rate technology, which was 43% for specialty crops while reported as 0% for row crops. Pest scouting and mapping were commonly used for specialty crops, while they were rarely applied to row crops. Survey respondents found that yield mapping, soil sampling maps, and irrigation scheduling were more valuable for specialty crops than for row crops in management decisions. About 50% of the respondents would like to share PA data in both types of crops. Almost 50% of respondents got their PA information from retailers in both categories, and as a second source, using extension agents was more common in specialty crops than in row crops.

Keywords: precision agriculture, smart farming, digital agriculture, technology adoption

Procedia PDF Downloads 116
1428 E-Learning Platform for School Kids

Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.

Abstract:

E-learning is a crucial component of intelligent education, and even in the midst of a pandemic it is becoming increasingly important in the educational system. Several e-learning programs are accessible to students. Here, we decided to create an e-learning framework for children. We have found a few issues that teachers are having with their online classes. When there are numerous students in an online classroom, how does a teacher recognize a student's focus on academics and below-the-surface behaviors? Some kids are not paying attention in class, and others are napping; the teacher is unable to keep track of each and every student. A key challenge in e-learning is online exams, because students can cheat easily during them, so exam proctoring is needed. Here we propose an automated online exam cheating detection method using a web camera. The purpose of this project is to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The game will be accessible via a web browser, with imagery drawn in a cartoonish style; this will help students learn math through games. Everything in this day and age is moving towards automation; however, automatic answer evaluation is only available for MCQ-based questions. As a result, checkers have a difficult time evaluating theory solutions. The current system requires more manpower and takes a long time to evaluate responses; it is also possible for two identical responses to be marked differently and receive two different grades. Therefore, this application employs machine learning techniques to provide automatic evaluation of subjective responses based on keywords provided to the computer along with the student input, resulting in a fair distribution of marks. In addition, it will save time and manpower. We used deep learning, machine learning, image processing, and natural language technologies to develop these research components.
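The keyword-based grading idea can be sketched in a few lines: score a free-text answer by the fraction of teacher-supplied keywords it contains. The keyword list, answer text, and mark scale below are illustrative, not the project's actual rubric, and a real system would add stemming and synonym handling on top:

```python
# Minimal sketch: automatic marking of a subjective answer by keyword coverage.
import re

def grade_answer(answer, keywords, max_marks=10):
    """Award marks proportional to the teacher's keywords found in the answer."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return round(max_marks * hits / len(keywords), 1)

keywords = ["photosynthesis", "chlorophyll", "sunlight", "glucose"]
answer = "Plants use sunlight and chlorophyll to make glucose."
score = grade_answer(answer, keywords)  # 3 of 4 keywords present -> 7.5 / 10
```

Because the same keyword list is applied to every submission, two identical answers necessarily receive identical marks, which addresses the consistency problem the abstract raises.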

Keywords: math, education games, e-learning platform, artificial intelligence

Procedia PDF Downloads 157
1427 Ancient Cities of Deltaic Bengal: Origin and Nature on the Riverine Bed of Ganges Valley

Authors: Sajid Bin Doza

Abstract:

A town or a city contributes a great deal to humankind. A city evokes memory, ambition, frustration, and achievement; it is something that offers life, as its character shows, and it presents a distinct image to human beings. Time, place, and matter generate this spirit; the city celebrates with its inhabitants, who belong to it and care for one another. Apart from all this, although cities and settlements are contentious and changing phenomena, the origin of cities in this delta land started with unique and strategic sequences. Religious belief, topography, availability of resources, and connection with commercial hubs shaped the potential of a settlement. The ancient cities of Bengal are no exception to these phenomena. From time immemorial, Bengal has been enriched with numerous cities and notable settlements. These cities and settlements were connected with other inland ports, and Bengal became an important trade route, traced by riverine connections. The delta landform is valued for its geographic situation, and as a consequence of this position, a new story, or a new conception, can be found in the origin of an ancient city. The objective of this research is to understand the origin and spirit of the ancient cities of Bengal; the research would also try to unfold the authentic and rational meaning of the soul of the city, elaborating the soul of the ancient sites of the riverine delta. As rivers share a common character in this landform, river-supported communities emerged as well. The river gives people wealth and sometimes brings them sorrow; it provides commerce and trade, faith and religion. All these potentials have evolved from the riverine setting, so the research would thoroughly justify riverine value as the soul of the ancient cities of Bengal. Cartographic information and illustration are the preferred language for this research; in particular, historic mapping is the unique folio of this study.

Keywords: memory of the city, riverine network, ancient cities, cartographic mapping, settlement pattern

Procedia PDF Downloads 294
1426 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance

Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu

Abstract:

Vision is considered the most important sense in humans, without which leading a normal life can be difficult. There are many existing smart canes for the visually impaired with obstacle detection using an ultrasonic transducer to help them navigate. Though the basic smart cane increases the safety of its users, it does not help fill the void of visual loss. This paper introduces the concept of Digi-Buddy, an evolved smart cane for the visually impaired. The cane consists of several modules. Apart from the basic obstacle detection features, Digi-Buddy assists the user by capturing video/images with a wide-angled camera and streaming them to a server, which then detects objects using a deep convolutional neural network. In addition to determining what a particular image/object is, the distance to the object is assessed by the ultrasonic transducer. A sound generation application, modelled with the help of natural language processing, is used to convert the processed images/objects into audio. The detected object is signified by its name, which is transmitted to the user via Bluetooth earphones. Object detection is extended to facial recognition, which matches the faces of people the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic intimation alarm, which is triggered when the user is in an emergency. If the user recovers within a set time, a button provisioned in the cane stops the alarm; otherwise, an automatic intimation with the whereabouts of the user, obtained via GPS, is sent to friends and family. Beyond the safety and security offered by existing smart canes, the proposed concept, to be implemented as a prototype, helps the visually impaired visualize their surroundings through audio in a more amicable way.
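The ultrasonic ranging step mentioned above follows standard time-of-flight arithmetic: the transducer reports the echo's round-trip time, and the distance is half the trip at the speed of sound. A minimal sketch, with an illustrative timing value rather than a reading from the actual device:

```python
# Minimal sketch: ultrasonic distance estimation from echo round-trip time.
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def echo_to_distance_m(round_trip_s):
    """Distance to the obstacle: half the round trip at the speed of sound."""
    return round_trip_s * SPEED_OF_SOUND_M_S / 2.0

d = echo_to_distance_m(0.01)  # a 10 ms echo puts the obstacle ~1.7 m away
```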

Keywords: artificial intelligence, facial recognition, natural language processing, internet of things

Procedia PDF Downloads 355
1425 A Method for Multimedia User Interface Design for Mobile Learning

Authors: Shimaa Nagro, Russell Campion

Abstract:

Mobile devices are becoming ever more widely available, with growing functionality, and are increasingly used as an enabling technology to give students access to educational material anytime and anywhere. However, the design of educational material user interfaces for mobile devices is beset by many unresolved research issues, such as those arising from emphasising the information concepts and then mapping this information to appropriate media (modelling information, then mapping media effectively). This report describes a multimedia user interface design method for mobile learning. The method covers specification of user requirements and information architecture, media selection to represent the information content, design for directing attention to important information, and interaction design to enhance user engagement based on human-computer interaction (HCI) design strategies. The method will be evaluated through three case studies to demonstrate that it is suitable for different application areas: an application to teach major computer networking concepts; an application to deliver a history-based topic (after these two case studies have been completed, the method will be revised to remove deficiencies and then used to develop the third); and an application to teach mathematical principles, at which point the method will again be revised into its final format. A usability evaluation will be carried out to measure the usefulness and effectiveness of the method. The investigation will combine qualitative and quantitative methods, including interviews and questionnaires for data collection and the three case studies for validating the method. The researcher has successfully produced the method, which is now undergoing validation and testing. From this point forward in the report, the method is referred to by the abbreviation MDMLM, the Multimedia Design Mobile Learning Method.

Keywords: human-computer interaction, interface design, mobile learning, education

Procedia PDF Downloads 247
1424 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most acceptable means of communication, through which we can quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers, and likewise easier to listen to audio played from a device than to extract output from computers or devices. Especially with robotics being an emerging market, with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design the "Audio-Visual Co-Data Processing Pipeline." This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input containing information about the target objects to be detected and the start and end times used to extract the required interval from the video. Speech is converted to text using the automatic speech recognition QuartzNet model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model; based on user preference, one can introduce a new speech command format by including examples of that format in the prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that one can give speech commands and have the output played from the device.
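The frame-selection step, mapping the start/end times parsed from the speech command to video frame indices, is simple arithmetic at the video's frame rate. A minimal sketch, with an illustrative 30 fps rate and time window rather than the pipeline's actual parameters:

```python
# Minimal sketch: map a spoken time interval to the video frames to analyze.
def frames_in_interval(start_s, end_s, fps=30):
    """Indices of all frames whose timestamps fall in [start_s, end_s]."""
    first = int(start_s * fps)
    last = int(end_s * fps)
    return list(range(first, last + 1))

frames = frames_in_interval(2.0, 2.2)  # at 30 fps: frames 60 through 66
```

In the full pipeline, only these frame indices would be decoded and passed to the YOLO detector, and those containing the target labels would be reported back via text-to-speech.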

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 80
1423 Detecting Hate Speech and Cyberbullying Using Natural Language Processing

Authors: Nádia Pereira, Paula Ferreira, Sofia Francisco, Sofia Oliveira, Sidclay Souza, Paula Paulino, Ana Margarida Veiga Simão

Abstract:

Social media has progressed into a platform for hate speech among its users, and thus there is an increasing need to develop automatic detection classifiers of offense and conflict to help decrease the prevalence of such incidents. Online communication can be used to intentionally harm someone, which is why such classifiers could be essential in social networks. A possible application of these classifiers is the automatic detection of cyberbullying. Even though identifying the aggressive language used in online interactions is important for building cyberbullying datasets, other criteria must also be considered. Being able to capture language that is indicative of the intent to harm others in a specific context of online interaction is fundamental. Offense and hate speech may be the foundation of online conflicts, which have become common on social media and are an emergent research focus in machine learning and natural language processing. This study presents two Portuguese-language offense-related datasets that serve as examples for future research and extend the study of the topic. The first is similar to other offense-detection datasets and is entitled the Aggressiveness dataset. The second is a novelty because of its use of the history of the interaction between users and is entitled the Conflicts/Attacks dataset. Both datasets were developed in different phases. Firstly, we performed a content analysis of verbal aggression witnessed by adolescents in situations of cyberbullying. Secondly, we computed frequency analyses from the previous phase to gather lexical and linguistic cues used to identify potentially aggressive conflicts and attacks posted on Twitter. Thirdly, thorough annotation of real tweets was performed by independent postgraduate educational psychologists with experience in cyberbullying research. Lastly, we benchmarked these datasets with several machine learning classifiers.
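The benchmarking phase can be sketched with a standard text-classification baseline: TF-IDF features feeding a linear classifier. The toy English examples and labels below are illustrative stand-ins for the annotated Portuguese tweets, and the model choice is an assumption, not necessarily one the study used:

```python
# Minimal sketch: a TF-IDF + logistic regression baseline for offense detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are awful and stupid", "have a great day friend",
         "nobody likes you idiot", "thanks for the kind help",
         "you are so dumb", "what a lovely picture"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = aggressive, 0 = neutral (illustrative)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
pred = clf.predict(["you are an idiot"])[0]
```

A real benchmark on the Aggressiveness and Conflicts/Attacks datasets would use cross-validation and report per-class metrics, since offensive tweets are typically the minority class.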

Keywords: aggression, classifiers, cyberbullying, datasets, hate speech, machine learning

Procedia PDF Downloads 229
1422 Recurrent Torsades de Pointes Post Direct Current Cardioversion for Atrial Fibrillation with Rapid Ventricular Response

Authors: Taikchan Lildar, Ayesha Samad, Suraj Sookhu

Abstract:

Atrial fibrillation with rapid ventricular response results in the loss of atrial kick and shortened ventricular filling time, which often leads to decompensated heart failure. Pharmacologic rhythm control is the treatment of choice, and patients frequently benefit from the restoration of sinus rhythm. When pharmacologic treatment is unsuccessful or a patient deteriorates hemodynamically, direct current cardioversion is the treatment of choice. Torsades de pointes, "twisting of the points" in French, is a rare but under-appreciated risk of cardioversion therapy and accounts for a significant number of sudden cardiac deaths each year. A 61-year-old female with no significant past medical history presented to the Emergency Department with worsening dyspnea. An electrocardiogram showed atrial fibrillation with rapid ventricular response, and a chest X-ray was significant for bilateral pulmonary vascular congestion. Full-dose anticoagulation and diuresis were initiated with moderate improvement in symptoms. A transthoracic echocardiogram revealed biventricular systolic dysfunction with a left ventricular ejection fraction of 30%. After consultation with an electrophysiologist, the consensus was to proceed with the restoration of sinus rhythm, which would likely improve the patient's heart failure symptoms and possibly the ejection fraction. A transesophageal echocardiogram was negative for left atrial appendage thrombus; the patient was treated with a loading dose of amiodarone and underwent successful direct current cardioversion with 200 Joules. The patient was placed on telemetry monitoring for 24 hours and was noted to have frequent premature ventricular contractions with subsequent degeneration to torsades de pointes. The patient was found unresponsive and pulseless; cardiopulmonary resuscitation was initiated with cardioversion, and return of spontaneous circulation to normal sinus rhythm was achieved after four minutes. The post-cardiac-arrest electrocardiogram showed sinus bradycardia with a heart-rate-corrected QT interval of 592 milliseconds. The patient continued to have frequent premature ventricular contractions and required two additional cardioversions, with intravenous magnesium and lidocaine, to achieve return of spontaneous circulation. An automatic implantable cardioverter-defibrillator was subsequently implanted for secondary prevention of sudden cardiac death. The backup pacing rate of the automatic implantable cardioverter-defibrillator was set higher than usual in an attempt to prevent premature ventricular contraction-induced torsades de pointes. The patient did not have any further ventricular arrhythmias after implantation of the device. Overdrive pacing is a method used to treat premature ventricular contraction-induced torsades de pointes by reducing the patient's susceptibility to R-on-T-wave-induced ventricular arrhythmias. Pacing at a rate of 90 beats per minute succeeded in controlling the arrhythmia without the need for traumatic cardiac defibrillation. In our patient, conversion of atrial fibrillation with rapid ventricular response to normal sinus rhythm resulted in a slower heart rate and an increased probability of a premature ventricular contraction occurring on the T-wave, with ensuing ventricular arrhythmia. This case highlights direct current cardioversion for atrial fibrillation with rapid ventricular response resulting in persistent ventricular arrhythmia requiring automatic implantable cardioverter-defibrillator placement with overdrive pacing to prevent recurrence.

Keywords: refractory atrial fibrillation, atrial fibrillation, overdrive pacing, torsades de pointes

Procedia PDF Downloads 149
1421 Real-Space Mapping of Surface Trap States in CIGSe Nanocrystals Using 4D Electron Microscopy

Authors: Riya Bose, Ashok Bera, Manas R. Parida, Anirudhha Adhikari, Basamat S. Shaheen, Erkki Alarousu, Jingya Sun, Tom Wu, Osman M. Bakr, Omar F. Mohammed

Abstract:

This work reports visualization of charge carrier dynamics on the surface of copper indium gallium selenide (CIGSe) nanocrystals in real space and time using four-dimensional scanning ultrafast electron microscopy (4D S-UEM) and correlates it with the optoelectronic properties of the nanocrystals. The surface of the nanocrystals plays a key role in controlling their applicability for light-emitting and light-harvesting purposes. Typically for quaternary systems like CIGSe, which have many desirable attributes for optoelectronic applications, the relative abundance of surface trap states acting as non-radiative recombination centres for charge carriers remains a major bottleneck preventing further advancement and commercial exploitation of these nanocrystal devices. Though ultrafast spectroscopic techniques allow determining the presence of picosecond carrier trapping channels, because of the relatively large penetration depth of the laser beam, the information obtained comes mainly from the bulk of the nanocrystals. Selective mapping of such ultrafast dynamical processes on the surfaces of nanocrystals remains a key challenge, so far out of reach of purely optical time-resolved laser techniques. In S-UEM, the optical pulse generated from a femtosecond (fs) laser system is used to generate electron packets from the tip of the scanning electron microscope, instead of the continuous electron beam used in the conventional setup. This pulse is synchronized with another optical excitation pulse that initiates carrier dynamics in the sample. The principle of S-UEM is to detect the secondary electrons (SEs) generated in the sample, which are emitted from the first few nanometers of the top surface. Constructed at different time delays between the optical and electron pulses, these SE images give direct and precise information about the carrier dynamics on the surface of the material of interest. In this work, we report selective mapping of surface dynamics of CIGSe nanocrystals in real space and time using 4D S-UEM. We show that the trap states can be considerably passivated by ZnS shelling of the nanocrystals, and that the carrier dynamics can be significantly slowed down. We also compare and discuss the S-UEM kinetics against the carrier dynamics obtained from conventional ultrafast time-resolved techniques. Additionally, a direct effect of the trap state removal can be observed in the enhanced photoresponse of the nanocrystals after shelling. Direct observation of surface dynamics will not only provide a profound understanding of the photo-physical mechanisms on nanocrystal surfaces but also enable researchers to unlock their full potential for light-emitting and light-harvesting applications.

Keywords: 4D scanning ultrafast microscopy, charge carrier dynamics, nanocrystals, optoelectronics, surface passivation, trap states

Procedia PDF Downloads 295
1420 Description of the Non-Iterative Learning Algorithm of Artificial Neuron

Authors: B. S. Akhmetov, S. T. Akhmetova, A. I. Ivanov, T. S. Kartbayev, A. Y. Malygin

Abstract:

The problem with training a network of artificial neurons in biometric applications is that the process has to be completely automatic, i.e., a human operator should not participate in it. Therefore, this article discusses the issues of training a network of artificial neurons and describes a non-iterative learning algorithm for an artificial neuron.

Keywords: artificial neuron, biometrics, biometrical applications, learning of neuron, non-iterative algorithm

Procedia PDF Downloads 496
1419 Identification and Classification of Stakeholders in the Transition to 3D Cadastre

Authors: Qiaowen Lin

Abstract:

The 3D cadastre is an inevitable choice to meet the needs of real cadastral management. Nowadays, more attention is given to the technical aspects of 3D cadastre, resulting in an imbalance within this field. To fill this research gap, the stakeholder, which has been regarded as the determining factor in cadastral change, is studied. The Delphi method, Michael rating, and stakeholder mapping are used to identify and classify the stakeholders in 3D cadastre. It is concluded that project managers should pay more attention to the interest appeals of the key stakeholders, and that different coping strategies should be adopted to facilitate the transition to 3D cadastre.

Keywords: stakeholders, three dimension, cadastre, transition

Procedia PDF Downloads 290
1418 Automatic Approach for Estimating the Protection Elements of Electric Power Plants

Authors: Mahmoud Mohammad Salem Al-Suod, Ushkarenko O. Alexander, Dorogan I. Olga

Abstract:

New algorithms using microprocessor systems have been proposed for the protection of the diesel-generator unit in autonomous power systems. The software structure is designed to enhance the control automata of the system, in which every protection module of the diesel-generator unit encapsulates a finite state machine.
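As an illustrative sketch only (not the authors' code), a protection module encapsulating a finite state machine could look like the following. The state names, thresholds, and the overcurrent scenario are hypothetical.

```python
# Hypothetical sketch: an overcurrent protection module for a diesel-generator
# unit modeled as a finite state machine (NORMAL -> ALARM -> TRIPPED).
class OvercurrentProtection:
    def __init__(self, alarm_threshold=120.0, trip_threshold=150.0):
        self.state = "NORMAL"
        self.alarm_threshold = alarm_threshold   # amps (made-up value)
        self.trip_threshold = trip_threshold     # amps (made-up value)

    def step(self, current_amps):
        """Advance the state machine by one measurement sample."""
        if self.state == "NORMAL" and current_amps > self.alarm_threshold:
            self.state = "ALARM"
        elif self.state == "ALARM":
            if current_amps > self.trip_threshold:
                self.state = "TRIPPED"           # latched: breaker opened
            elif current_amps <= self.alarm_threshold:
                self.state = "NORMAL"
        return self.state                        # TRIPPED is latched

prot = OvercurrentProtection()
print(prot.step(100.0))  # NORMAL
print(prot.step(130.0))  # ALARM
print(prot.step(160.0))  # TRIPPED
```

Encapsulating each protection function as its own state machine, as the abstract describes, keeps the control automata of the overall system composable: each module only consumes measurement samples and exposes its state.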

Keywords: diesel-generator unit, protection, state diagram, control system, algorithm, software components

Procedia PDF Downloads 420
1417 Surveying Apps in Dam Excavation

Authors: Ali Mohammadi

Abstract:

Whenever there is a need to dig the ground, the presence of a surveyor is required to control the map. In projects such as dams and tunnels, these controls are more important because any mistake can increase costs. Time is also of great importance in these projects, and one way to reduce drilling time is to use techniques that reduce the mapping time. Nowadays, with mobile phones, we can design apps that perform calculations and drawing for us on the phone. Also, if we have a device that requires a computer to access its information, by designing an app we can transfer its information to the mobile phone and use it there, so we will not need to go to the office.

Keywords: app, tunnel, excavation, dam

Procedia PDF Downloads 69
1416 Global Solar Irradiance: Data Imputation to Analyze Complementarity Studies of Energy in Colombia

Authors: Jeisson A. Estrella, Laura C. Herrera, Cristian A. Arenas

Abstract:

The Colombian electricity sector has been transforming through the insertion of new energy sources for electricity generation, one of them being solar energy, which is being promoted by companies interested in photovoltaic technology. The study of this technology is important for electricity generation in general and for the planning of the sector from the perspective of energy complementarity. It is precisely in this last approach that the project is located: we are interested in answering concerns about the reliability of the electrical system when climatic phenomena such as El Niño occur, and in defining whether it is viable to replace or expand thermoelectric plants with renewable electricity generation systems. In this regard, some difficulties related to the basic information on renewable energy sources must first be solved, since the measured data come from automatic weather stations administered by the Institute of Hydrology, Meteorology and Environmental Studies (IDEAM) and, over the study period (2005-2019), have significant amounts of missing data. For this reason, the overall objective of the project is to complete the global solar irradiance datasets, obtaining time series that will allow the elaboration of energy complementarity analyses in a subsequent project. The filling of the databases will be done through numerical and statistical methods, which are basic techniques accessible to undergraduate students in technical areas who are starting out as researchers.
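One of the basic numerical methods the abstract refers to is linear interpolation across short gaps. A minimal sketch, with made-up irradiance values standing in for an IDEAM station series:

```python
# Hypothetical sketch: filling short gaps in a daily global solar irradiance
# series by linear interpolation between the nearest valid observations.
# Values are made up; longer gaps might instead be filled from
# climatological day-of-year means.
import numpy as np

irradiance = np.array([5.1, np.nan, np.nan, 4.5, np.nan, 5.3])  # kWh/m2/day

missing = np.isnan(irradiance)
filled = irradiance.copy()
# Interpolate the missing positions against the positions of valid samples.
filled[missing] = np.interp(np.flatnonzero(missing),
                            np.flatnonzero(~missing),
                            irradiance[~missing])
print(filled.round(2))
```

The same idea extends to the 2005-2019 series by interpolating within each station's time index and flagging how many values were imputed, so that later complementarity analyses can weight imputed periods appropriately.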

Keywords: time series, global solar irradiance, imputed data, energy complementarity

Procedia PDF Downloads 71
1415 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images

Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor

Abstract:

Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution, and accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in identifying patients with foot disorders. Significance of the study: This study proposes a neural-network-based flat foot classification methodology using static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. Then all subjects (with and without pes planus) were evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized and the network was trained on the plantar pressure mapping images. Findings: From a total of 895 image data, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was assessed and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14%, and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and doctors to predict the classification of pes planus and prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared for higher accuracy.
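The core operation such a network applies to a plantar pressure map is 2-D convolution. A minimal numpy sketch of that step (not the study's trained model; the 3x3 filter and the toy 5x5 "pressure map" are made up):

```python
# Hypothetical sketch: one 2-D convolution (valid, no padding), the building
# block a CNN applies to a plantar pressure image. A trained CNN learns many
# such filters from data; here a hand-written edge filter is used.
import numpy as np

def conv2d_valid(image, kernel):
    """Valid (no padding) 2-D cross-correlation, as in CNN layers."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

pressure_map = np.array([[0, 0, 1, 0, 0],
                         [0, 2, 3, 2, 0],
                         [1, 3, 5, 3, 1],
                         [0, 2, 3, 2, 0],
                         [0, 0, 1, 0, 0]], dtype=float)  # toy pressures

kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)  # vertical-edge detector

feature_map = conv2d_valid(pressure_map, kernel)
print(feature_map.shape)  # (3, 3)
```

Stacking such layers with nonlinearities and a final classification head yields a binary pes planus / normal classifier of the kind the abstract evaluates.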

Keywords: foot disorder, machine learning, neural network, pes planus

Procedia PDF Downloads 364
1414 Silicon-Photonic-Sensor System for Botulinum Toxin Detection in Water

Authors: Binh T. T. Nguyen, Zhenyu Li, Eric Yap, Yi Zhang, Ai-Qun Liu

Abstract:

Silicon photonic sensors are an emerging class of analytical technologies that use the evanescent field to sensitively measure slight differences in the surrounding environment. The wavelength shift induced by a local refractive index change is used as the indicator in the system. These devices can serve as sensors for a wide variety of chemical or biomolecular detection tasks in clinical and environmental fields. In our study, a system including a silicon-based micro-ring resonator, a microfluidic channel, and optical processing was designed and fabricated for biomolecule detection. The system is demonstrated to detect Clostridium botulinum type A neurotoxin (BoNT) in different water sources. BoNT is one of the most toxic substances known and is relatively easily obtained from a cultured bacterial source. The toxin is extremely lethal, with an LD50 of about 0.1 µg/70 kg intravenously, 1 µg/70 kg by inhalation, and 70 µg/kg orally. These factors make botulinum neurotoxins primary candidates as bioterrorism or biothreat agents, so a sensing system that can detect BoNT quickly, with high sensitivity, and automatically is required. For BoNT detection, the silicon-based micro-ring resonator is modified with a linker for the immobilization of the anti-botulinum capture antibody. An enzymatic reaction is employed to amplify the signal and hence gain sensitivity. As a result, a detection limit of 30 pg/mL is achieved by our silicon photonic sensor within a short period of 80 min. The sensor also shows high specificity versus other botulinum serotypes. In the future, by designing a multifunctional waveguide array with a fully automatic control system, it will be simple to simultaneously detect multiple biomaterials at low concentrations within a short period. The system has great potential for online, real-time, and highly sensitive label-free biomolecular detection.

Keywords: biotoxin, photonic, ring resonator, sensor

Procedia PDF Downloads 117
1413 Brain Connectome of Glia, Axons, and Neurons: Cognitive Model of Analogy

Authors: Ozgu Hafizoglu

Abstract:

An analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems through physical, behavioral, and principal relations that are essential to learning, discovery, and innovation. The Cognitive Model of Analogy (CMA) leads and creates patterns of pathways to transfer information within and between domains in science, just as happens in the brain. The connectome of the brain shows how the brain operates with mental leaps between domains and mental hops within domains, and how the analogical reasoning mechanism operates. This paper demonstrates the CMA as an evolutionary approach to science, technology, and life. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions in the new era, especially post-pandemic. In this paper, we reveal how to draw an analogy to scientific research to discover new systems that reveal the fractal schema of analogical reasoning within and between systems, as within and between brain regions. The problem-solving process is divided into distinct phases: stimulus, encoding, mapping, inference, and response. Based on brain research so far, the system is shown to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain’s mechanism in the macro context (brain and spinal cord) and the micro context (glia and neurons), relative to the matching conditions of analogical reasoning and relational information: encoding, mapping, inference and response processes, and verification of perceptual responses in four-term analogical reasoning. Finally, we relate all these terminologies to mental leaps, mental maps, mental hops, and mental loops to make the mental model of CMA clear.

Keywords: analogy, analogical reasoning, brain connectome, cognitive model, neurons and glia, mental leaps, mental hops, mental loops

Procedia PDF Downloads 165
1412 Prospectivity Mapping of Orogenic Lode Gold Deposits Using Fuzzy Models: A Case Study of Saqqez Area, Northwestern Iran

Authors: Fanous Mohammadi, Majid H. Tangestani, Mohammad H. Tayebi

Abstract:

This research aims to evaluate and compare Geographical Information Systems (GIS)-based fuzzy models for producing orogenic gold prospectivity maps in the Saqqez area, NW of Iran. Gold occurrences are hosted in sericite schist and mafic to felsic meta-volcanic rocks in this area and are associated with hydrothermal alterations that extend over ductile to brittle shear zones. The predictor maps, which represent the Pre-(Source/Trigger/Pathway), syn-(deposition/physical/chemical traps) and post-mineralization (preservation/distribution of indicator minerals) subsystems for gold mineralization, were generated using empirical understandings of the specifications of known orogenic gold deposits and gold mineral systems and were then pre-processed and integrated to produce mineral prospectivity maps. Five fuzzy logic operators, including AND, OR, Fuzzy Algebraic Product (FAP), Fuzzy Algebraic Sum (FAS), and GAMMA, were applied to the predictor maps in order to find the most efficient prediction model. Prediction-Area (P-A) plots and field observations were used to assess and evaluate the accuracy of prediction models. Mineral prospectivity maps generated by AND, OR, FAP, and FAS operators were inaccurate and, therefore, unable to pinpoint the exact location of discovered gold occurrences. The GAMMA operator, on the other hand, produced acceptable results and identified potentially economic target sites. The P-A plot revealed that 68 percent of known orogenic gold deposits are found in high and very high potential regions. The GAMMA operator was shown to be useful in predicting and defining cost-effective target sites for orogenic gold deposits, as well as optimizing mineral deposit exploitation.
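The five overlay operators compared in the abstract have standard fuzzy-logic definitions. A toy sketch applying them to made-up membership values for two predictor maps (real inputs are co-registered raster layers, and the gamma value here is hypothetical):

```python
# Illustrative sketch of the five fuzzy overlay operators compared in the
# study, applied per-cell to predictor-map membership values in [0, 1].
def fuzzy_and(vals):
    return min(vals)                 # AND: most pessimistic

def fuzzy_or(vals):
    return max(vals)                 # OR: most optimistic

def fuzzy_product(vals):             # FAP: decreasive
    p = 1.0
    for v in vals:
        p *= v
    return p

def fuzzy_sum(vals):                 # FAS: increasive
    s = 1.0
    for v in vals:
        s *= (1.0 - v)
    return 1.0 - s

def fuzzy_gamma(vals, gamma=0.9):    # GAMMA: compromise between FAS and FAP
    return (fuzzy_sum(vals) ** gamma) * (fuzzy_product(vals) ** (1.0 - gamma))

memberships = [0.8, 0.6]             # e.g. alteration, shear-zone proximity
print(fuzzy_product(memberships))    # decreasive: below both inputs
print(fuzzy_sum(memberships))        # increasive: above both inputs
print(fuzzy_gamma(memberships))      # lies between FAP and FAS
```

Because GAMMA balances the decreasive FAP and the increasive FAS, it avoids the extreme outputs that made the other operators unable to pinpoint the known gold occurrences in this study.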

Keywords: mineral prospectivity mapping, fuzzy logic, GIS, orogenic gold deposit, Saqqez, Iran

Procedia PDF Downloads 124
1411 Using Hyperspectral Camera and Deep Learning to Identify the Ripeness of Sugar Apples

Authors: Kuo-Dung Chiou, Yen-Xue Chen, Chia-Ying Chang

Abstract:

This study uses AI technology to establish an expert system and a fruit appearance database for pineapples and custard apples. Images are collected based on appearance defects and fruit maturity, and deep learning is used to detect the location of the fruit and to assess its appearance defects and maturity in real time. In addition, a hyperspectral camera was used to scan pineapples and custard apples, and the light reflection in different frequency bands was used to find the key bands for pectin softening in post-ripe fruit. A large number of multispectral images were collected and analyzed to establish a database of Pineapple Custard Apple and Big Eyed Custard Apple, which includes a high-definition color image database, a hyperspectral database in the 377~1020 nm band, and a five-band (450, 500, 670, 720, 800 nm) multispectral database. The collection comprises 4896 images with manually labeled ground truth, 26 hyperspectral pineapple custard apple fruits (520 images each), and 168 multispectral custard apple fruits (5 images each). Using the color image database to train the pre-training network architecture of deep learning YOLOv4, with the training weights established from the fruit database, real-time detection is achieved and the recognition rate reaches over 97.96%. We also took a large number of continuous multispectral shots and calculated the difference and average ratio of the fruit's reflectance in the 670 and 720 nm bands; both follow the same trend, increasing until maturity and decreasing after maturity. Subsequently, further bands will be added to analyze the sugar content and moisture numerically, seeking an absolute measure of maturity and its data curve.
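The 670/720 nm indices the abstract tracks against ripeness reduce to simple band arithmetic. A sketch with made-up reflectance values (not the authors' pipeline; real inputs are co-registered band images masked to the fruit):

```python
# Hypothetical sketch: per-fruit difference and ratio between the 670 nm and
# 720 nm bands of a multispectral image. Pixel values are made up.
import numpy as np

band_670 = np.array([[0.30, 0.32],
                     [0.28, 0.30]])  # toy reflectance at 670 nm
band_720 = np.array([[0.50, 0.54],
                     [0.48, 0.52]])  # toy reflectance at 720 nm

difference = band_720 - band_670
ratio = band_720 / band_670

# One value per fruit: average over the fruit mask (here, the whole patch).
mean_difference = float(difference.mean())
mean_ratio = float(ratio.mean())
print(mean_difference, mean_ratio)
```

Tracking these two scalars over continuous shots of the same fruit yields the rising-then-falling maturity curves described in the abstract.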

Keywords: hyperspectral image, fruit firmness, deep learning, automatic detection, automatic measurement, intelligent labor saving

Procedia PDF Downloads 3
1410 Mapping of Forest Cover Change in the Democratic Republic of the Congo

Authors: Armand Okende, Benjamin Beaumont

Abstract:

Introduction: Deforestation is a change in the structure and composition of flora and fauna that leads to a loss of biodiversity and of the production of goods and services, and to an increase in fires. It particularly concerns vast territories in tropical zones; this is the case of the territory of Bolobo in the current province of Maï-Ndombe in the Democratic Republic of Congo. The overall objective of this study is to map and quantitatively analyze the important forest changes that occurred between 2001 and 2018, since significant deforestation is taking place in this area. Methodology: Mapping and quantification are the methodological approaches we put forward to assess deforestation and forest changes through satellite images and raster layers. These satellite data from Global Forest Watch are integrated into GIS software (GRASS GIS and Quantum GIS) to represent the loss of forest cover that has occurred and the various changes recorded (e.g., forest gain) in the territory of Bolobo. Results: The results obtained quantify deforestation for the periods 2001-2006, 2007-2012 and 2013-2018 as the loss of forest area in hectares each year. The change maps produced for these periods show that the loss of forest areas is gradually increasing. Conclusion: Knowledge for forest management and protection remains a challenge to ensure good management of forest resources. To do this, it is wise to carry out more studies that would optimize the monitoring of forests to guarantee the ecological and economic functions they provide in the Congo Basin, particularly in the Democratic Republic of Congo.
In addition, the cartographic approach, coupled with the geographic information system and remote sensing proposed by Global Forest Watch using raster layers, provides interesting information to explain the loss of forest areas.
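Quantifying annual loss from a Global Forest Watch "loss year" raster amounts to tallying pixels per year code and converting to area. A toy sketch (not the study's GRASS/QGIS workflow; the 2x2 array is made up, and the ~30 m / 0.09 ha pixel size is an assumption about the underlying Hansen dataset):

```python
# Hypothetical sketch: annual forest loss in hectares from a loss-year raster
# where each pixel holds 0 (no loss) or 1..18 encoding loss in 2001..2018.
import numpy as np

PIXEL_AREA_HA = 0.09                 # ~30 m pixels (assumption)
loss_year = np.array([[0, 1],
                      [1, 18]])      # toy 2x2 tile

loss_ha_per_year = {}
for code in np.unique(loss_year):
    if code == 0:
        continue                     # 0 means no loss recorded
    n_pixels = int((loss_year == code).sum())
    loss_ha_per_year[2000 + int(code)] = n_pixels * PIXEL_AREA_HA

print(loss_ha_per_year)
```

Summing these per-year figures over 2001-2006, 2007-2012 and 2013-2018 gives the period totals reported in the results.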

Keywords: deforestation, loss year, forest change, remote sensing, drivers of deforestation

Procedia PDF Downloads 133
1409 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP) are crucial for optimal organ preservation. However, with the heart’s constant physiological changes during EVHP, such as coronary vascular resistance, manual control of these parameters is rendered imprecise and cumbersome for the operator. Additionally, low control precision and the long adjusting time may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable-Logic-Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects the data of PF and PP through flow probes and pressure transducers. It has two control modes: the RPM-flow mode and the pressure mode. The RPM-flow control mode is an open-loop system. It influences PF through providing and maintaining the desired speed inputted through the HMI to the centrifugal pump with a maximum error of 20 rpm. The pressure control mode is a closed-loop system where the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range) tested in a random order. System’s PID algorithm performance in pressure control was assessed during 10 EVHP experiments using porcine hearts. 
Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system was significantly faster than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and reached the desired RPM with greater precision (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system achieved a median precision of ±1 mmHg error, and the median stabilizing times for MAP changes of 15 and 20 mmHg were 15 and 19.5 seconds, respectively. The novel PLC-based control system was 3 times faster, with 60% less error, than the EO for RPM-flow control. In pressure control mode, it demonstrates high precision and fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability and a fast response time through a user-friendly interface. This design may provide a viable technique for future development of novel heart preservation and assessment strategies during EVHP.
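The pressure control mode described above is a closed loop with a PID algorithm driving pump speed toward a target MAP. A minimal simulation sketch (not the device firmware; the gains and the first-order plant model are hypothetical):

```python
# Hypothetical sketch: a discrete PID loop driving pump output so that mean
# arterial pressure (MAP) tracks a target, as in the pressure control mode.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Toy plant: pressure relaxes toward a value proportional to pump output.
pid = PID(kp=0.5, ki=0.8, kd=0.05, dt=0.1)
pressure, target = 40.0, 60.0        # mmHg, made-up values
for _ in range(300):                 # 30 s of simulated control
    pump = pid.update(target, pressure)
    pressure += (pump - pressure) * 0.2
print(round(pressure, 1))
```

The integral term is what removes the steady-state error, which is why the real system can hold MAP within ±1 mmHg of the target despite changing coronary vascular resistance.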

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 175
1408 Precise CNC Machine for Multi-Tasking

Authors: Haroon Jan Khan, Xian-Feng Xu, Syed Nasir Shah, Anooshay Niazi

Abstract:

CNC machines are not only used on a large scale but have also become a prominent necessity among households and smaller businesses. Printed circuit boards manufactured by the chemical process are not only risky and unsafe to produce but also expensive and time-consuming. A precise 3-axis CNC machine has been developed that not only fabricates PCBs but has also been used for multiple other tasks simply by changing the materials and tools used, making it versatile. The machine takes data from CAM software, and a TB-6560 controller adjusts motion along the X, Y, and Z axes. The machine is efficient in automatic drilling, engraving, and cutting.

Keywords: CNC, G-code, CAD, CAM, Proteus, FLATCAM, Easel

Procedia PDF Downloads 162
1407 Lexical Bundles in the Alexiad of Anna Comnena: Computational and Discourse Analysis Approach

Authors: Georgios Alexandropoulos

Abstract:

The purpose of this study is to examine the historical text of the Alexiad by Anna Comnena using computational tools for the extraction of lexical bundles containing the name of her father, Alexius Comnenus. To this end, we apply corpus linguistics techniques for the automatic extraction of lexical bundles, and through them we draw conclusions about how these bundles express the support she provides to her father.
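Extracting lexical bundles containing a target name is essentially frequency counting over word n-grams. A toy sketch of that corpus-linguistics step (not the study's toolchain; the English sentence stands in for the Greek text of the Alexiad):

```python
# Hypothetical sketch: recurrent lexical bundles (word n-grams) containing a
# target word, with their frequencies.
from collections import Counter

def lexical_bundles(tokens, n, target):
    """All n-grams of the token list that contain the target word."""
    grams = (tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return Counter(g for g in grams if target in g)

text = "the emperor alexius led the army and the emperor alexius prevailed"
bundles = lexical_bundles(text.split(), n=3, target="alexius")
print(bundles.most_common(1))
```

Ranking the bundles by frequency surfaces the recurrent formulas around the father's name, which is the evidence base for the discourse-analytic conclusions.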

Keywords: lexical bundles, computational literature, critical discourse analysis, Alexiad

Procedia PDF Downloads 625
1406 Biotechnological Interventions for Crop Improvement in Nutricereal Pearl Millet

Authors: Supriya Ambawat, Subaran Singh, C. Tara Satyavathi, B. S. Rajpurohit, Ummed Singh, Balraj Singh

Abstract:

Pearl millet [Pennisetum glaucum (L.) R. Br.] is an important staple food of the arid and semiarid tropical regions of Asia, Africa, and Latin America. It is rightly termed a nutricereal as it has high nutritional value and is a good source of carbohydrate, protein, fat, ash, dietary fiber, potassium, magnesium, iron, zinc, etc. Pearl millet has a low prolamine fraction and is gluten-free, which is useful for people with a gluten allergy. It has several health benefits, such as reductions in blood pressure and in thyroid, diabetes, cardiovascular and celiac diseases, but its direct consumption as food has significantly declined for several reasons. Keeping this in view, it is important to reorient the efforts to generate demand through value addition and quality improvement and to create awareness of the nutritional merits of pearl millet. In India, through the Indian Council of Agricultural Research-All India Coordinated Research Project on Pearl Millet, multilocational coordinated trials for developed hybrids were conducted at various centers. The gene banks of pearl millet contain varieties with high levels of iron and zinc, which were used to produce new pearl millet varieties with elevated iron levels bred with the high-yielding varieties. Thus, using breeding approaches and biochemical analysis, a total of 167 hybrids and 61 varieties were identified and released for cultivation in different agro-ecological zones of the country, including some biofortified hybrids rich in Fe and Zn. Further, using several biotechnological interventions such as molecular markers, next-generation sequencing (NGS), association mapping, nested association mapping (NAM), MAGIC populations, genome editing, genotyping by sequencing (GBS), and genome-wide association studies (GWAS), advances in millet improvement have become possible by identifying and tagging genes underlying a trait in the genome. Using DArT markers, very high density linkage maps were constructed for pearl millet.
Improved HHB67 has been released using marker assisted selection (MAS) strategies, and genomic tools were used to identify Fe-Zn Quantitative Trait Loci (QTL). The draft genome sequence of millet has also opened various ways to explore pearl millet. Further, genomic positions of significantly associated simple sequence repeat (SSR) markers with iron and zinc content in the consensus map is being identified and research is in progress towards mapping QTLs for flour rancidity. The sequence information is being used to explore genes and enzymatic pathways responsible for rancidity of flour. Thus, development and application of several biotechnological approaches along with biofortification can accelerate the genetic gain targets for pearl millet improvement and help improve its quality.

Keywords: biotechnological approaches, genomic tools, malnutrition, MAS, nutricereal, pearl millet, sequencing

Procedia PDF Downloads 186
1405 Study on Safety Management of Deep Foundation Pit Construction Site Based on Building Information Modeling

Authors: Xuewei Li, Jingfeng Yuan, Jianliang Zhou

Abstract:

The 21st century has been called the century of human exploitation of underground space. Due to the large quantities, tight schedules, low safety reserves and high uncertainty of deep foundation pit engineering, accidents frequently occur, causing huge economic losses and casualties. With the successful application of information technology in the construction industry, building information modeling (BIM) has become a research hotspot in the field of architectural engineering. Therefore, the application of BIM and other information and communication technologies (ICTs) in construction safety management is of great significance for improving the level of safety management. This research summarized the mechanisms of deep foundation pit engineering accidents through fault tree analysis to find the control factors of deep foundation pit safety management and the deficiencies existing in traditional construction site safety management. According to the accident causation mechanism and the specific process of deep foundation pit construction, the hazard information of the construction site was identified and a hazard list, including early warning information, was obtained. After that, the system framework was constructed by analyzing the early warning information demand and the early warning function demand of a safety management system for deep foundation pits. Finally, a safety management system for deep foundation pit construction sites based on BIM was developed by combining a database with Web-BIM technology, realizing three functions: real-time positioning of construction site personnel, automatic warning when entering a dangerous area, and real-time monitoring of deep foundation pit structure deformation with automatic warning.
This study can initially improve the current situation of safety management in the construction site of deep foundation pit. Additionally, the active control before the occurrence of deep foundation pit accidents and the whole process dynamic control in the construction process can be realized so as to prevent and control the occurrence of safety accidents in the construction of deep foundation pit engineering.

Keywords: Web-BIM, safety management, deep foundation pit, construction

Procedia PDF Downloads 154
1404 A Reading Attempt of the Urban Memory of Jordan University of Science and Technology Campus by Cognitive Mapping

Authors: Bsma Adel Bany Mohammad

Abstract:

University campuses are small cities containing basic city functions such as educational spaces, accommodation, services and transportation. They are spaces of functional and social life with different activities and different occupants. A campus is designed and transformed like a city, so both are experienced and memorized in the same way. Campus memory is the ability of individuals to maintain and reveal the spatial components of designed physical spaces, which form their understandings, experiences, and sensations of the environment as a whole. ‘Cognitive mapping’ is used to decode the physical interaction and emotional relationship between individuals and the city; cognitive maps are created graphically, using geometric and verbal elements on paper, by remembering images of the urban environment. In this study, to determine the emotional urban identity of the Jordan University of Science and Technology campus, architecture students were asked to identify the areas they interact with on the campus by drawing a cognitive map. ‘Campus memory items’ were identified by analyzing the cognitive maps of the campus, and the spatial identity results from such data. The analysis is based on the five basic elements of Lynch: paths, districts, edges, nodes, and landmarks. As a result of this analysis, it was found that spatial identity is constructed by the shared elements of the maps. The memory of most students listed the gate structures, large desirable structures located at the main entrances of the campus, as major landmarks, then the square spaces as nodes, in addition to both stairs and corridors as paths. Finally, the districts and edges of educational buildings and service spaces are listed correspondingly in the cognitive maps. Findings suggest that the spatial identity of the campus design is related mainly to the gate structures, squares and stairs.

Keywords: cognitive maps, university campus, urban memory, identity

Procedia PDF Downloads 149
1403 Application of Sentinel-2 Data to Evaluate the Role of Mangrove Conservation and Restoration on Aboveground Biomass

Authors: Raheleh Farzanmanesh, Christopher J. Weston

Abstract:

Mangroves are forest ecosystems located in the inter-tidal regions of tropical and subtropical coastlines that provide many valuable economic and ecological benefits for millions of people, such as preventing coastal erosion, providing breeding and feeding grounds, improving water quality, and supporting the well-being of local communities. In addition, mangroves capture and store high amounts of carbon in biomass and soils, which plays an important role in combating climate change. The decline in mangrove area has prompted government and private sector interest in mangrove conservation and restoration projects to achieve multiple Sustainable Development Goals, from reducing poverty to improving life on land. Mangrove aboveground biomass (AGB) plays an essential role in the global carbon cycle and in climate change mitigation and adaptation by reducing CO2 emissions. However, little information is available about the effectiveness of sustainable mangrove management on mangrove area change and AGB. Here, we propose a method for mapping, modeling, and assessing mangrove area and AGB in two Global Environment Facility (GEF) blue forests projects, based on Sentinel-2 Level 1C imagery, over their conservation lifetimes. A support vector regression (SVR) model was used to estimate AGB in the Tahiry Honko project in Madagascar and the Abu Dhabi Blue Carbon Demonstration Project (Abu Dhabi, United Arab Emirates). The results showed that mangrove forest area and AGB declined in the Tahiry Honko project, while in the Abu Dhabi project they increased after the conservation initiative was established. The results provide important information on the impact of mangrove conservation activities and contribute to the development of remote sensing applications for mapping and assessing mangrove forests in blue carbon initiatives.
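The abstract fits an SVR model relating Sentinel-2 predictors to field-measured AGB. As a dependency-free stand-in for that regression step, this sketch fits an ordinary least-squares line of AGB on a single NDVI predictor; all numbers are made up:

```python
# Hypothetical sketch: regressing aboveground biomass (AGB) on a Sentinel-2
# vegetation index. The study uses SVR; ordinary least squares is used here
# as a simpler stand-in, and the training values are invented.
import numpy as np

# Toy training plots: NDVI from Sentinel-2, AGB (t/ha) from field inventory.
ndvi = np.array([0.2, 0.4, 0.5, 0.7, 0.8])
agb = np.array([20.0, 60.0, 80.0, 120.0, 140.0])

slope, intercept = np.polyfit(ndvi, agb, deg=1)

def predict_agb(ndvi_value):
    """Estimate AGB (t/ha) for a pixel from its NDVI."""
    return slope * ndvi_value + intercept

print(round(predict_agb(0.6), 1))  # AGB estimate for an unmeasured pixel
```

Applying the fitted model pixel-by-pixel over the two project areas, for imagery at the start and end of each conservation period, yields the AGB change maps the results describe.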

Keywords: blue carbon, mangrove forest, REDD+, aboveground biomass, Sentinel-2

Procedia PDF Downloads 74