Search results for: accuracy assessment.
7053 Analytical Study and Conservation Processes of Scribe Box from Old Kingdom
Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr
Abstract:
The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written with a black pigment. The box was made of several panels assembled with wooden dowels and secured with plant ropes. The entire box is covered with a red pigment. This study aims to use analytical techniques to identify and gain a deep understanding of the box components. Moreover, the authors were particularly interested in using infrared reflectance transformation imaging (RTI-IR) to enhance the hidden inscriptions on the lid. The identification of wood species was also included in this study. Visual observation and assessment were carried out to understand the condition of the box. 3D and 2D programs were used to illustrate the wood-joint techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF), and Fourier transform infrared spectroscopy (FTIR) were used in this study to identify the wood species, the remains of insect bodies, the red pigment, the plant fibers, and the previous conservation adhesives; the RTI-IR technique also proved very effective in enhancing the hidden inscriptions. The analysis results showed that the wooden panels and dowels were identified as Acacia nilotica and the wooden rail as Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was identified as cellulose nitrate. The historical study of the inscriptions showed that they are hieratic writings of a funerary text. After its transportation from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy to restore the object, including cleaning, consolidation of the friable pigments and writings, removal of the previous adhesive, and reassembly. The conservation processes applied proved extremely effective, and the box is now ready for display or storage in the Grand Egyptian Museum.
Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation
Procedia PDF Downloads 269
7052 Quantitative Assessment of Different Formulations of Antimalarials in Sentinel Sites of India
Authors: Taruna Katyal Arora, Geeta Kumari, Hari Shankar, Neelima Mishra
Abstract:
Substandard and counterfeit antimalarials are a major problem in malaria-endemic areas. The availability of counterfeit or substandard medicines not only decreases efficacy in patients but is also one of the factors contributing to the development of antimalarial drug resistance. Owing to this, a pilot study was conducted to survey the quality of drugs collected from different malaria-endemic areas of India. Artesunate+Sulphadoxine-Pyrimethamine (AS+SP), Artemether-Lumefantrine (AL), and Chloroquine (CQ) tablets were randomly picked from public health facilities in selected states of India. The quality of antimalarial drugs from these areas was assessed using the Global Pharma Health Fund Minilab test kit, which includes physical/visual inspection and a disintegration test. Thin-layer chromatography (TLC) was carried out for semi-quantitative assessment of the active pharmaceutical ingredients. A total of 45 brands, of which 21 were for CQ, 14 for AL, and 10 for AS+SP, were tested from the states of Uttar Pradesh (U.P.), Mizoram, Meghalaya, and Gujarat. One out of 45 samples showed a variable disintegration and retention factor, which could have been due to substandard quality or other factors, including storage. However, HPLC analysis confirmed a standard active pharmaceutical ingredient, so humid temperatures and moisture during storage may account for the observed result.
Keywords: antimalarial medicines, counterfeit, substandard, TLC
Procedia PDF Downloads 320
7051 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations
Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman
Abstract:
Fuzzy differential equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems subject to uncertain, vague, or imprecise information, which constantly arises in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account in order to obtain a more realistic model, and for many of these models it is difficult, and sometimes impossible, to obtain analytic solutions. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving ordinary differential equations (ODEs) into fuzzy versions suitable for solving FDEs. In this paper, we propose a fuzzy version of the three-point block method based on block backward differentiation formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method, implemented with a uniform step size, produces three new approximations simultaneously at each integration step using the same back values. The Newton iteration of the FBBDF is formulated, and the implementation is based on the predictor and corrector formulas in PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with existing fuzzy versions of the modified Simpson and Euler methods in terms of the accuracy of the approximated solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs.
Keywords: block, backward differentiation formulas, first order, fuzzy differential equations
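For orientation, the sketch below shows the standard alpha-cut treatment that underlies such fuzzy solvers: the fuzzy initial value is decomposed into interval bounds that are each integrated with a stiff solver. It is a minimal illustration assuming a simple test problem and a triangular fuzzy initial condition, and it substitutes SciPy's built-in BDF integrator for the authors' block FBBDF, whose coefficients are not reproduced here.

```python
# Minimal sketch: a first-order fuzzy IVP y' = -y with a triangular fuzzy
# initial condition, solved via alpha-cuts. SciPy's BDF solver stands in
# for the block FBBDF; the test problem and numbers are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return -y  # crisp right-hand side applied to each alpha-cut bound

def fuzzy_solve(y0_tri, alphas, t_span, t_eval):
    """Integrate the lower/upper bounds of each alpha-cut separately."""
    lo0, peak, hi0 = y0_tri  # triangular fuzzy number (lo, peak, hi)
    cuts = {}
    for a in alphas:
        lo = lo0 + a * (peak - lo0)   # lower bound of the alpha-cut
        hi = hi0 - a * (hi0 - peak)   # upper bound of the alpha-cut
        sol_lo = solve_ivp(f, t_span, [lo], method="BDF", t_eval=t_eval)
        sol_hi = solve_ivp(f, t_span, [hi], method="BDF", t_eval=t_eval)
        cuts[a] = (sol_lo.y[0], sol_hi.y[0])
    return cuts

t_eval = np.linspace(0.0, 2.0, 21)
cuts = fuzzy_solve((0.8, 1.0, 1.2), alphas=[0.0, 0.5, 1.0],
                   t_span=(0.0, 2.0), t_eval=t_eval)
lo, hi = cuts[0.5]
print(f"0.5-cut at t=2: [{lo[-1]:.4f}, {hi[-1]:.4f}]")
```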
Procedia PDF Downloads 318
7050 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques
Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña
Abstract:
The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.
Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages
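A minimal sketch of the character n-gram Naive Bayes route described above, using scikit-learn; the two training sentences are placeholders, not the study's corpus.

```python
# Minimal sketch of an n-gram Naive Bayes language classifier for YUA vs.
# ES. The tiny training corpus below is a placeholder; the study trained
# on text collected from sources such as La Jornada Maya.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "in k'aaba'e' Juan",            # placeholder Maya (YUA) sample
    "me llamo Juan y vivo aqui",    # placeholder Spanish (ES) sample
]
train_labels = ["YUA", "ES"]

# Character n-grams (here 1-3) capture orthographic cues, such as the
# apostrophes marking glottalized consonants in written Maya.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    MultinomialNB(),
)
model.fit(train_texts, train_labels)
print(model.predict(["bix a beel"]))  # expected to lean toward YUA
```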
Procedia PDF Downloads 15
7049 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm
Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra
Abstract:
With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications. They can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large part of them is used for measurement purposes. Some of them make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable because of other existing systems working on the site, which can be blinded on most spectral levels. Furthermore, the reconstruction is required to work over long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy even in harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed, whose main part is a rotational head with a two-camera stereovision rig gathering images around the head in 360 degrees, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain an increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction
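As a point of reference, the sketch below computes a depth map from a rectified stereo pair with OpenCV's semi-global matcher, including the sub-pixel disparity scaling and the triangulation step Z = fB/d; the focal length, baseline, and file names are illustrative assumptions, not HRESS parameters.

```python
# Minimal sketch of passive stereovision depth estimation with OpenCV.
# Camera parameters and file names are assumptions; the HRESS sub-pixel
# pipeline itself is not reproduced here.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
)
# StereoSGBM returns fixed-point disparities scaled by 16, which already
# gives 1/16-pixel resolution; divide to recover sub-pixel disparity.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

focal_px = 2500.0   # assumed focal length in pixels
baseline_m = 1.0    # assumed distance between the two cameras in meters

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]  # Z = f*B/d
print("median scene depth [m]:", np.median(depth_m[valid]))
```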
Procedia PDF Downloads 121
7048 Environmental Cost and Benefits Analysis of Different Electricity Option: A Case Study of Kuwait
Authors: Mohammad Abotalib, Hamid Alhamadi
Abstract:
In Kuwait, electricity is generated from two primary sources: heavy fuel combustion and natural gas combustion. As Kuwait relies mainly on petroleum-based products for electricity generation, the environmental trade-offs of such operations should be carefully investigated. The life cycle assessment (LCA) tool is applied to identify the potential environmental impact of electricity generation under three scenarios by considering the material flow in the various stages involved, such as raw-material extraction, transportation, operations, and waste disposal. The three scenarios investigated represent current and future electricity grid mixes. The analysis targets six environmental impact categories: (1) global warming potential (GWP), (2) acidification potential (AP), (3) water depletion (WD), (4) eutrophication potential (EP), (5) human health particulate matter (HHPM), and (6) smog air (SA), per one kWh of electricity generated. Results indicate that one kWh of electricity generated would have a GWP of (881-1030) g CO₂-eq, mainly from the fuel combustion process; water depletion of (0.07-0.1) m³ of water, about 68% from cooling processes; AP of (15.3-17.9) g SO₂-eq; EP of (0.12-0.14) g N eq.; HHPM of (1.13-1.33) g PM₂.₅ eq.; and SA of (64.8-75.8) g O₃ eq. The variation in results depends on the scenario investigated. It can be observed from the analysis that introducing solar photovoltaics and wind to the electricity grid mix improves the performance of scenarios 2 and 3, where the 15% of electricity coming from renewables corresponds to a further decrease in LCA results.
Keywords: energy, functional unit, global warming potential, life cycle assessment
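The per-kWh figures above are, in essence, generation-mix-weighted sums of per-source impact factors; the sketch below reproduces that arithmetic for GWP. The factor values and mix shares are illustrative assumptions, not the paper's inventory data.

```python
# Minimal sketch of per-kWh GWP as a generation-mix-weighted sum.
# All factor values (g CO2-eq/kWh) and mix shares are assumptions.
gwp_factor = {
    "heavy_fuel": 1030.0,
    "natural_gas": 881.0,
    "solar_pv": 45.0,
    "wind": 12.0,
}

def gwp_per_kwh(mix):
    """mix: {source: share of generation}, shares summing to 1."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(share * gwp_factor[src] for src, share in mix.items())

scenario_1 = {"heavy_fuel": 0.6, "natural_gas": 0.4, "solar_pv": 0.0, "wind": 0.0}
scenario_3 = {"heavy_fuel": 0.5, "natural_gas": 0.35, "solar_pv": 0.1, "wind": 0.05}

print(f"scenario 1: {gwp_per_kwh(scenario_1):.0f} g CO2-eq/kWh")
print(f"scenario 3: {gwp_per_kwh(scenario_3):.0f} g CO2-eq/kWh")  # 15% renewables
```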
Procedia PDF Downloads 133
7047 Integration of Information and Communication Technology (ICT) for Effective Education of Adult Learners in Developing Communities in South-West Nigeria
Authors: Omotoke Omosalewa Owolowo
Abstract:
Mass literacy, adult, and non-formal education are part of the provisions of Nigeria's National Policy on Education. The advent of Information and Communication Technology (ICT), especially in this era of industrial revolution, calls for approaching literacy and adult education from a different perspective for community development. There is a dire need for a needs assessment for the effective training of rural dwellers, to actualize the policy requirement and to align with the Sustainable Development Goals in South-West Nigeria. The present study is a preliminary survey designed to determine the level of awareness, use, and familiarity of community dwellers with social media. Adult dwellers from 24 communities in four states in Southern Nigeria constitute the sample, a total of 578 adults (380 females, 198 males) with an age range between 21 and 52 years. The survey shows that 68% are aware of SMS, 21% of WhatsApp, and 14% of Facebook, while the rest could not say precisely which social medium is their favorite. However, most of them (80%) could not see how their phones could be used to boost their status, improve their vocations, or be used to develop them in their respective communities. The study is expected to lead to a more elaborate training program on the assessment of knowledge acquisition, participation, and attitude of adult literate and non-literate members of communities, for empowerment and to integrate ICT techniques. The results of this study provide a database for the larger study.
Keywords: mass literacy, community development, information and communication technology, adult learners
Procedia PDF Downloads 50
7046 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning
Authors: Rajkumar Ghosh
Abstract:
Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for an improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. It addresses the need for revising building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This study sheds light on the impact of out-of-sequence thrust dynamics on earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.
Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery
Procedia PDF Downloads 87
7045 Numerical Simulation of the Fractional Flow Reserve in the Coronary Artery with Serial Stenoses of Varying Configuration
Authors: Mariia Timofeeva, Andrew Ooi, Eric K. W. Poon, Peter Barlis
Abstract:
Atherosclerotic plaque build-up, commonly known as stenosis, limits blood flow and hence the oxygen and nutrient supply to the heart muscle. Thus, assessment of its severity is of great interest to health professionals. Numerically simulated fractional flow reserve (FFR) has proved to be well correlated with invasively measured FFR, which is used for the physiological assessment of the severity of coronary stenosis. Atherosclerosis may affect the diseased artery in several locations, causing serial stenoses, which are a complicated subset of coronary artery disease that requires careful treatment planning. However, the hemodynamics of serial stenoses in coronary arteries has not been extensively studied. The hemodynamics of serial stenoses is complex because the stenoses in the series interact and affect the flow through each other. To address this, serial stenoses in a 3.4 mm left anterior descending (LAD) artery are examined in this study. Two diameter stenoses (DS) are considered, 30 and 50 percent of the reference diameter. The serial stenosis configurations are divided into three groups based on the order of the stenoses in the series, the spacing between them, and the deviation of the stenoses' symmetry (eccentricity). A patient-specific pulsatile waveform is used in the simulations. Blood flow within the stenotic artery is assumed to be laminar, Newtonian, and incompressible. Results for the FFR are reported. Based on the simulation results, it can be deduced that a larger pressure drop (a smaller value of the FFR) is expected when the percentage of the second stenosis in the series is larger. Varying the distance between the stenoses affects the location of the maximum pressure drop, while the minimal FFR in the artery remains unchanged. Eccentric serial stenoses are characterized by a noticeably larger pressure decrease through the stenoses and by the development of chaotic flow downstream of the stenoses. The largest pressure drop (about a 4% difference compared to the axisymmetric case) is obtained for the serial stenoses in which both stenoses are highly eccentric, with the centerlines deflected to different sides of the LAD. In conclusion, varying the configuration of sequential serial stenoses results in a different distribution of FFR through the LAD. The results presented in this study provide insight into the clinical assessment of the severity of coronary serial stenoses, which is shown to depend on the relative position of the stenoses and the deviation of the stenoses' symmetry.
Keywords: computational fluid dynamics, coronary artery, fractional flow reserve, serial stenoses
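To make the FFR bookkeeping concrete, the sketch below chains a lumped empirical pressure-drop model (ΔP = AQ + BQ², a simplification in the spirit of classic stenosis correlations) across two serial stenoses and reports the FFR distal to each. All coefficients, the aortic pressure, and the flow rate are illustrative assumptions and are no substitute for the paper's 3D pulsatile CFD.

```python
# Minimal sketch: FFR (distal-to-aortic pressure ratio) across serial
# stenoses under a lumped loss model. All numbers are placeholders.
P_AORTIC = 100.0  # mmHg, assumed mean aortic pressure
FLOW = 2.0        # mL/s, assumed hyperemic flow in the LAD

# (A, B) loss coefficients per stenosis; a tighter stenosis gets larger
# coefficients. Values are placeholders, not fitted data.
serial_stenoses = [
    (1.5, 0.8),   # 30% diameter stenosis
    (4.0, 3.5),   # 50% diameter stenosis
]

def ffr_along(stenoses, p_aortic, q):
    """Return FFR distal to each stenosis in the series."""
    p = p_aortic
    ffr_values = []
    for a, b in stenoses:
        p -= a * q + b * q * q   # viscous + separation losses
        ffr_values.append(p / p_aortic)
    return ffr_values

print(ffr_along(serial_stenoses, P_AORTIC, FLOW))
# In this additive model, reordering the stenoses moves where the drop
# occurs but not the final FFR, consistent with the spacing result above.
```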
Procedia PDF Downloads 182
7044 Development and Validation of the Dimensional Social Anxiety Scale: Assessment for the Offensive Type of Social Anxiety
Authors: Ryotaro Ishikawa
Abstract:
Social anxiety disorder (SAD) is marked by the persistent fear of social or performance situations in which embarrassment may occur. In contrast, social anxiety in Japan and in China is understood differently. Taijin Kyofusho (TKS) is a culture-bound subtype of SAD that has been the focus of recent research. TKS refers to a unique form of SAD found in Japanese and East Asian cultures characterized by a fear of offending others, in contrast to prototypical SAD, in which the source of fear is typically one's own embarrassment, humiliation, or rejection by others. The criteria for TKS partially overlap with, but are distinct from, those for SAD; a primary factor distinguishing TKS from SAD appears to be individualistic versus interdependent or collectivistic self-construals. The aim of this study was to develop a scale to assess both typical SAD and the offensive type of SAD (TKS). The study tested the internal consistency and validity of the scale (Dimensional Social Anxiety Scale: DSAS) using a university student sample. For this, 148 university students were enrolled (male = 90, female = 58, mean age = 19.77, standard deviation = 1.04). Confirmatory factor analysis verified a three-factor model of the DSAS (χ²(74) = 128.36). These three factors were named 'general', 'performance', and 'offensive'. The DSAS was significantly correlated with the Liebowitz Social Anxiety Scale (r = .538, p < .001). Good internal consistencies were indicated for the three subscales (α = .76 to .89). In conclusion, this study indicated that the DSAS has adequate internal consistency and validity for assessing multiple types of social anxiety.
Keywords: social anxiety, cognitive theory, assessment, anxiety disorder
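For reference, the internal-consistency statistic quoted above can be computed as follows; this is a minimal sketch of Cronbach's alpha on randomly generated placeholder responses, not the study's data.

```python
# Minimal sketch of Cronbach's alpha for one subscale, computed from a
# respondents-by-items matrix. The response data are random placeholders.
import numpy as np

def cronbach_alpha(items):
    """items: 2D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(148, 1))                     # shared trait
responses = latent + 0.7 * rng.normal(size=(148, 6))   # 6 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```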
Procedia PDF Downloads 113
7043 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico
Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada
Abstract:
In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. The volcano has a great potential to impact and damage human settlements and economic activities through landslides. People living along the lower valleys of Pico de Orizaba volcano are under continuous hazard from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, there are few landslide inventory maps and landslide susceptibility assessments. As a result, no assessment of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as the framework for the quantitative comparison of two landslide susceptibility maps. The maps are created based on the Stability Index MAPping (SINMAP) model 1) using default geotechnical parameters and 2) using geotechnical properties of the volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite slope stability model with the theory of a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear to be susceptible to landsliding in regions with sparse information. The validation of the resulting susceptibility maps is performed by comparing them with the inventory map under the LOGISNET system, which provides tools for comparison using a histogram and a contingency table. The results of the experiment allow establishing how the individual models predict landslide locations, along with their advantages and limitations. The results also show that although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
Keywords: GIS, landslide, modeling, LOGISNET, SINMAP
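The sketch below illustrates the dimensionless infinite-slope factor of safety at the core of SINMAP, evaluated over a range of relative wetness values; the slope, friction angle, cohesion, and density ratio are illustrative assumptions, not the calibrated volcanic-soil parameters.

```python
# Minimal sketch of the infinite-slope factor of safety used by SINMAP.
# Parameter values are assumptions for illustration only.
import numpy as np

def factor_of_safety(slope_rad, wetness, cohesion, phi_rad, r=0.5):
    """FS = (C + cos(theta) * (1 - w*r) * tan(phi)) / sin(theta)
    C: dimensionless combined (root + soil) cohesion
    w: relative wetness in [0, 1]; r: water-to-soil density ratio."""
    return (cohesion
            + np.cos(slope_rad) * (1.0 - wetness * r) * np.tan(phi_rad)
            ) / np.sin(slope_rad)

slope = np.radians(25.0)  # assumed hillslope angle
phi = np.radians(30.0)    # assumed friction angle
for w in (0.2, 0.6, 1.0):
    fs = factor_of_safety(slope, w, cohesion=0.1, phi_rad=phi)
    print(f"wetness {w:.1f}: FS = {fs:.2f} -> "
          f"{'stable' if fs > 1 else 'unstable'}")
```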
Procedia PDF Downloads 312
7042 Comparison of Microbiological Assessment of Non-adhesive Use and the Use of Adhesive on Complete Dentures
Authors: Hyvee Gean Cabuso, Arvin Taruc, Danielle Villanueva, Channela Anais Hipolito, Jia Bianca Alfonso
Abstract:
Introduction: Denture adhesives help provide additional retention, support, and comfort for patients with loose dentures, as well as for patients who seek to achieve optimal denture adhesion. Due to their growing popularity, emerging oral health issues should be considered, including their possible impact on the microbiological condition of the denture. Such changes may further develop into denture-related oral diseases that can affect the day-to-day lives of patients. Purpose: The study aims to assess and compare the microbiological status of dentures without adhesives versus dentures to which adhesives were applied. The study also intends to identify the presence of specific microorganisms, their colony concentration, and their possible effects on the oral microflora. This study also aims to educate subjects by introducing an alternative denture cleaning method as well as denture and oral health care. Methodology: Edentulous subjects aged 50-80 years, both physically and medically fit, were selected to participate. Before samples were obtained for the study, the alternative cleaning method was introduced by demonstrating a step-by-step cleaning process. Samples were obtained by swabbing the intaglio surface of the subjects' upper and lower prostheses. These swabs were placed in thioglycollate broth, which served as a transport and enrichment medium. The swabs were then processed through bacterial culture. The colony-forming units (CFUs) were calculated on MacConkey agar plates (MAP) and blood agar plates (BAP) in order to identify and assess the microbiological status, including species identification and microbial counting. Result: Upon evaluation and analysis of the collected data, the microbiological assessment of the upper dentures with adhesives showed little to no difference compared to dentures without adhesives. For the lower dentures, however, P = 0.005, which is less than α = 0.05; therefore, the researchers reject the null hypothesis (H₀): there is a significant difference between the mean ranks of the lower dentures without adhesive and those with, implying a significant decrease in the bacterial count. Conclusion: These findings may implicate the possibility that the addition of denture adhesives contributes to a significant decrease of microbial colonization on the dentures.
Keywords: denture, denture adhesive, denture-related, microbiological assessment
Procedia PDF Downloads 127
7041 Simple Infrastructure in Measuring Countries e-Government
Authors: Sukhbaatar Dorj, Erdenebaatar Altangerel
Abstract:
As an alternative to existing e-government measuring models, a new customer-centric, service-oriented, simple approach for measuring countries' e-governments is proposed here. If successfully implemented, the infrastructure built will provide a single e-government index number for each country. The main schema is as follows. At the beginning of each year, the country's CIO, or a government official in an equivalent position, will provide four numbers on behalf of their country to a dedicated United Nations website: 1) the ratio of available online public services to the total number of public services; 2) the ratio of interagency, inter-ministry online public services to the total number of available online public services; 3) the ratio of the total number of citizens and business entities served online annually to the total number of citizens and business entities served annually, online and physically, by those services; 4) a simple index for the geographical spread of citizens and business entities served online. The four numbers are then combined into one index number by a simple mathematical average. In addition to the four numbers, a fifth number can be introduced as a service quality indicator of online public services. If countries' index numbers are equal in the ordering, the fifth criterion will be used. Note: this approach is for assessing a country's current e-government achievement, not its e-government readiness.
Keywords: countries e-government index, e-government, infrastructure for measuring e-government, measuring e-government
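A minimal sketch of the proposed aggregation: average the four reported ratios and use the optional fifth number only to break ties in the ranking. The country figures are placeholders.

```python
# Minimal sketch of the proposed e-government index: the average of the
# four ratios, with the fifth (service quality) number as a tie-breaker.
def egov_index(online_ratio, interagency_ratio, served_online_ratio,
               geo_spread_index):
    """All four inputs are ratios/indices in [0, 1]."""
    return (online_ratio + interagency_ratio
            + served_online_ratio + geo_spread_index) / 4.0

countries = {
    # name: (r1, r2, r3, r4, quality tie-breaker) -- placeholder figures
    "Country A": (0.80, 0.40, 0.65, 0.70, 0.9),
    "Country B": (0.75, 0.50, 0.60, 0.70, 0.7),
}
ranked = sorted(countries.items(),
                key=lambda kv: (egov_index(*kv[1][:4]), kv[1][4]),
                reverse=True)
for name, vals in ranked:
    print(f"{name}: index = {egov_index(*vals[:4]):.3f}")
```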
Procedia PDF Downloads 328
7040 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model for a hatchability rate greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the variables with the most influence on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear discriminant analysis (LDA), classification and regression trees (CART), k-nearest neighbors (kNN), support vector machines (SVM) with a linear kernel, and random forest (RF) algorithms were applied to classify hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, with the highest coefficient of determination (R²) of 94% and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN performed with the highest accuracy values, 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
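A minimal sketch of the binary classification comparison described above, using scikit-learn; the feature table and profitability rule are random placeholders standing in for the seven measured parameters.

```python
# Minimal sketch: compare RF, CART, and kNN on a binary hatchability
# label with 5-fold cross-validation. Data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 7))            # 7 extrinsic parameters
y = (X[:, 1] + X[:, 5] - X[:, 0] > 0)    # placeholder rule: hatchability > 90%

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "CART": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```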
Procedia PDF Downloads 86
7039 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm
Authors: Annalakshmi G., Sakthivel Murugan S.
Abstract:
This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed method, the local directional encoded derivative binary pattern (LDEDBP). The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts the edge information using the local directional pattern (LDP) from the edge responses available in a particular region, thereby achieving an extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. Integrating edge responses with the local binary pattern yields a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization
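For context, the sketch below computes the plain 8-neighbour LBP code that LDEDBP builds on, histogrammed into a texture feature vector; the directional derivative encoding and the ELM-WDGWO classifier are not reproduced, and the test patch is random.

```python
# Minimal sketch of the basic 3x3 LBP texture code: each pixel is
# thresholded against its 8 neighbours and the bits form the code.
import numpy as np

def lbp_8(image):
    """Return the basic 3x3 LBP code image (interior pixels only)."""
    c = image[1:-1, 1:-1]
    # neighbours in clockwise order starting at the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = image[1 + dy:image.shape[0] - 1 + dy,
                      1 + dx:image.shape[1] - 1 + dx]
        code |= ((neigh >= c).astype(np.uint8) << bit)
    return code

patch = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
hist, _ = np.histogram(lbp_8(patch), bins=256, range=(0, 256))
print("LBP histogram (first 8 bins):", hist[:8])  # feature vector input
```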
Procedia PDF Downloads 163
7038 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine
Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy
Abstract:
Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method, especially when there are different types of landscapes in the study area, is quite difficult. This paper is an attempt to compare the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia-West Asia Corridor, which is considered one of the main parts of the Belt and Road Initiative (BRI). The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings of the study illustrated that, among the three frequently used ML algorithms, RF, with 91% overall accuracy, had the best result in producing a land cover map for the China-Central Asia-West Asia Corridor, whereas ANN showed the worst result, with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling huge volumes of remotely sensed data in the present study showed that it could also help researchers generate reliable long-term land cover change maps. The findings of this research have great importance for decision-makers and BRI authorities in strategic land use planning.
Keywords: land cover, google earth engine, machine learning, remote sensing
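The comparison can be reproduced outside GEE once training samples are exported; the sketch below pits RF, SVM, and an MLP (standing in for the ANN) against each other with scikit-learn on placeholder band values and labels.

```python
# Minimal sketch of the three-classifier comparison on an exported sample
# table. Features and labels are placeholders for Landsat-8 band values
# and MODIS-derived land cover classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 7))   # 7 placeholder Landsat-8 band values
# 6 placeholder land cover classes derived from a linear band combination
y = np.digitize(X[:, 0] + 0.5 * X[:, 3], bins=[-1.5, -0.5, 0.5, 1.5, 2.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: overall accuracy = {acc:.3f}")
```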
Procedia PDF Downloads 112
7037 Verification Protocols for the Lightning Protection of a Large Scale Scientific Instrument in Harsh Environments: A Case Study
Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda
Abstract:
This paper is devoted to the study of the most suitable protocols for verifying the lightning protection and ground resistance quality of a large-scale scientific facility located in a harsh environment. We illustrate this work by reviewing a case study: the largest telescopes of the Northern Hemisphere Cherenkov Telescope Array, CTA-N. This array hosts sensitive, high-speed optoelectronic instrumentation and sits on clear, obstacle-free terrain at around 2400 m above sea level. The site offers a top-quality sky but also features challenging conditions for a lightning protection system: the terrain is volcanic and has resistivities well above 1 kOhm·m. In addition, the environment often exhibits humidities well below 5%. On the other hand, the high complexity of a Cherenkov telescope structure does not allow a straightforward application of lightning protection standards. CTA-N has been conceived as an array of fourteen Cherenkov telescopes of two different sizes, which will be constructed on La Palma Island, Spain. Cherenkov telescopes can provide valuable information on different astrophysical sources from the gamma rays reaching the Earth's atmosphere. The largest telescopes of CTA are called LSTs, and the construction of the first one was finished in October 2018. The LST has a shape which resembles a large parabolic antenna, with a 23-meter reflective surface supported by a tubular structure made of carbon fibers and steel tubes. The reflective surface has 400 square meters and is made of an array of segmented mirrors that can be controlled individually by a subsystem of actuators. This surface collects and focuses the Cherenkov photons into the camera, where 1855 photo-sensors convert the light into electrical signals that can be processed by dedicated electronics. We describe here how the risk assessment of direct strike impacts was made and how the down conductors and the ground system were both tested. The verification protocols which should be applied for the commissioning and operation phases are then explained. We focus our attention on the ground resistance quality assessment.
Keywords: grounding, large scale scientific instrument, lightning risk assessment, lightning standards and safety
Procedia PDF Downloads 122
7036 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration is extended to filter anisotropy to address its impact on SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
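The core of direct deconvolution is inverting the filter transfer function; the 1D sketch below filters a field with a Gaussian kernel in Fourier space and recovers it by regularized inversion. The grid size, filter width, and cutoff are illustrative assumptions.

```python
# Minimal 1D sketch of direct deconvolution: apply a Gaussian filter in
# Fourier space, then invert its transfer function with a small-amplitude
# cutoff (G -> 0 would otherwise amplify unresolved/noisy modes).
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(5.0 * x) + 0.2 * np.sin(12.0 * x)

k = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi  # wavenumbers
delta = 4.0 * (2.0 * np.pi / n)                         # filter width
G = np.exp(-(k * delta) ** 2 / 24.0)                    # Gaussian transfer fn

u_filtered = np.fft.ifft(G * np.fft.fft(u)).real

cutoff = 1e-3
G_inv = np.where(np.abs(G) > cutoff, 1.0 / G, 0.0)      # regularized inverse
u_recovered = np.fft.ifft(G_inv * np.fft.fft(u_filtered)).real

print("max reconstruction error:", np.abs(u_recovered - u).max())
```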
Procedia PDF Downloads 75
7035 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have spurred the development of computational methods. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proved to be an NP-hard problem, a complexity illustrated by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Due to their good results, artificial neural networks have been used as a standard method for predicting protein secondary structure. Recently published methods that use this technique have, in general, achieved a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, SVM-based prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets and validation techniques, as well as other variables, can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method uses the same training and testing process that Huang used to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that the statistical results of the two methods can be compared directly.
Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
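For reference, the Q3 score quoted above is simply the per-residue accuracy over the three states H (helix), E (strand), and C (coil); a minimal sketch follows, with made-up sequences.

```python
# Minimal sketch of the Q3 score: the fraction of residues whose predicted
# secondary-structure state matches the observed one.
def q3(predicted, observed):
    assert len(predicted) == len(observed)
    matches = sum(p == o for p, o in zip(predicted, observed))
    return matches / len(observed)

obs = "CCHHHHHCCEEEECC"
pred = "CCHHHHCCCEEEECC"   # one helix residue mispredicted as coil
print(f"Q3 = {q3(pred, obs):.1%}")   # 14/15, about 93.3%
```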
Procedia PDF Downloads 619
7034 School Discipline Starts Early: Mindfulness as a Self-discipline Tool in the Preschool
Authors: Ioanna Koumi
Abstract:
The aim of the intervention presented is to show the positive effects a mindfulness programme can have on the behaviour of preschoolers (ages 4-6). The programme was implemented as part of the psychologist's work in 5 preschool units on the Greek island of Chios. Classroom-based mindfulness activities were shown and practiced in 5 sessions, in collaboration with teachers, in order to make preschoolers aware of how their brain affects their behaviour, as well as of how they can adopt more positive behaviours, especially in instances of negative feelings. The outcomes of the intervention were assessed via questionnaires completed by the teachers before and after the sessions, as well as focus group procedures with students, teachers, and parents. Implications of how mindfulness programmes can also be implemented at home are further discussed. School year in which the programme is being implemented: 2022-23. Intervention method: based on basic mindfulness theory and practice, the 220 students (ages 4-6) in 11 classes of the 5 participating preschools were given lessons on how to become aware of their states of focusing, regulation, attention, and emotional situation, as well as body and social situations. Furthermore, the preschoolers were encouraged to make more mindful choices when faced with negative situations and emotions. Assessment method: the School as a Caring Community Profile II questionnaire, completed by 20 preschool teachers prior to and after the intervention, and focus group sessions with teachers, students, and parents at the end of the intervention. Results: the assessment will be completed in May 2023.
Keywords: preschool, mindfulness training, self-awareness, social-emotional development
Procedia PDF Downloads 95
7033 Environmental Exposure Assessment among Refuellers at Brussels South Charleroi Airport
Authors: Mostosi C., Stéphenne J., Kempeneers E.
Abstract:
Introduction: Refuellers at Brussels South Charleroi Airport (BSCA) expressed concerns about the risks involved in handling JET-A1 fuel. The HSE manager of BSCA, in collaboration with the occupational physician and the industrial hygiene unit of the External Service of Occupational Medicine, decided to assess the toxicological exposure of these workers. Materials and methods: Two measurement methods were used. The first was to assay three types of metabolites in urine to highlight exposure to the xylenes, toluene, and benzene in aircraft fuels. Out of 32 refuellers in the department, 26 participated in the sampling, and 23 samples were usable. The second method targeted the assessment of environmental exposure to certain potentially hazardous substances that refuellers are likely to breathe in work areas at the airport. It was decided to carry out two ambient air measurement campaigns, using static systems on the one hand and individual sensors worn by the refuellers at the level of the respiratory tract on the other. Volatile organic compounds and diesel particles were analyzed. Results: Despite the fears that motivated these analyses, the overall results showed low levels of exposure, far below the existing limit values, both in air quality and in urinary measurements. Conclusion: These results are comparable to those of a study carried out in several French airports. The staff could be reassured, and the medical surveillance was then modified by the occupational physician. With the development of aviation at BSCA, equipment and methods are evolving, and the refuellers' exposure will have to be reassessed.
Keywords: refuelling, airport, exposure, fuel, occupational health, air quality
Procedia PDF Downloads 84
7032 Therapeutic Hypothermia Post Cardiac Arrest
Authors: Tahsien Mohamed Okasha
Abstract:
We hypothesized that post-cardiac-arrest patients with a Glasgow Coma Scale (GCS) score of less than 8 who are exposed to a therapeutic hypothermia protocol will exhibit improvement in their neurological performance. A purposive sample of 17 patients fulfilling the inclusion criteria was collected over one year. The study was carried out using a quasi-experimental research design. Four tools were used for data collection: a demographic and medical data sheet, a post-cardiac-arrest health assessment sheet, the Bedside Shivering Assessment Scale (BSAS), and the Glasgow-Pittsburgh Cerebral Performance Category scale (CPC). Results: the mean age was 53 ± 8.122 years (X̅ ± SD); 47.1% were arrested because of a cardiac etiology. The initial arrest rhythm was ventricular tachycardia (VT) in 35.3%, ventricular fibrillation (VF) in 23.5%, and asystole in 29.4%. A favorable neurological outcome was seen in 70.6%. There were statistically significant differences in WBC, platelets, blood gas values, and random blood sugar. Initial arrest rhythm, etiology of cardiac arrest, and shivering status were also significantly correlated with the cerebral performance category score. In conclusion, therapeutic hypothermia has positive effects on neurological performance among post-cardiac-arrest patients with a GCS score of less than 8. Replication of the study on a larger probability sample with a randomized controlled trial design is recommended, as is further study to suggest a nursing protocol for patients undergoing therapeutic hypothermia.
Keywords: therapeutic hypothermia, neurological performance, resuscitation from cardiac arrest, resuscitation
Procedia PDF Downloads 95
7031 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia
Authors: Carol Anne Hargreaves
Abstract:
A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system using machine learning techniques may be used to identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks that was likely to go up in price (portfolio 1). Next, the principal component analysis technique was used to select stocks that were rated high on component one and component two (portfolio 2). Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity, and overall accuracy. All accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month, the return for each stock portfolio was computed and compared with the stock market index returns. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio, and 8.88% for the k-means cluster portfolio, while the stock market return was 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system
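A minimal sketch of the three selection routes (clustering, principal components, logistic probability) using scikit-learn; the feature matrix and the next-month label are random placeholders for real technical indicators.

```python
# Minimal sketch of the three portfolio-selection routes on a placeholder
# matrix of per-stock technical features (momentum, volatility, etc.).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = StandardScaler().fit_transform(rng.normal(size=(200, 10)))  # 200 stocks
went_up = (X[:, 0] + 0.5 * X[:, 2] > 0)  # placeholder next-month label

# Portfolio 1: cluster the stocks, keep the cluster with the best history.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
best_cluster = max(range(4), key=lambda c: went_up[clusters == c].mean())

# Portfolio 2: rank stocks on the first two principal components.
scores = PCA(n_components=2).fit_transform(X)
portfolio2 = np.argsort(scores[:, 0] + scores[:, 1])[-3:]

# Portfolio 3: stocks with the highest predicted probability of going up.
proba = LogisticRegression().fit(X, went_up).predict_proba(X)[:, 1]
portfolio3 = np.argsort(proba)[-3:]

print("cluster portfolio:", np.where(clusters == best_cluster)[0][:3])
print("PCA portfolio:", portfolio2, "| logistic portfolio:", portfolio3)
```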
Procedia PDF Downloads 155
7030 Evaluation of Ceres Wheat and Rice Model for Climatic Conditions in Haryana, India
Authors: Mamta Rana, K. K. Singh, Nisha Kumari
Abstract:
Simulation models, with their interacting soil-weather-plant-atmosphere systems, are important tools for assessing crops under changing climate conditions. The CERES-Wheat and CERES-Rice models (v. 4.6, DSSAT) were calibrated and evaluated for Haryana, India, one of the country's major producers of wheat and rice. The simulation runs were made under irrigated conditions with three N-P-K fertilizer application doses to estimate crop yield and other growth parameters, along with the phenological development of the crops. The genetic coefficients were derived by iteratively manipulating the relevant coefficients that characterize the phenological processes of the wheat and rice crops until the best-fit match between the simulated and observed anthesis, physiological maturity, and final grain yield was obtained. The model was validated by plotting the simulated LAI against the remote-sensing-derived LAI. The LAI product from remote sensing provides the advantage of spatial, timely, and accurate crop assessment. For validating the yield and yield components, the error percentage between the observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for wheat and rice cultivars under different management practices. During the validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region.
Keywords: simulation model, CERES-wheat and rice model, crop yield, genetic coefficient
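The validation arithmetic is straightforward; the sketch below computes the per-season percentage error between observed and simulated yields with placeholder values.

```python
# Minimal sketch of the validation metric: percentage error between
# observed and simulated yields. The paired values are placeholders.
import numpy as np

observed = np.array([4850.0, 5120.0, 4610.0])    # kg/ha, field data
simulated = np.array([4720.0, 5350.0, 4490.0])   # kg/ha, CERES output

pct_error = 100.0 * np.abs(simulated - observed) / observed
print("per-season error (%):", np.round(pct_error, 2))
print("all below 10%:", bool((pct_error < 10.0).all()))
```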
Procedia PDF Downloads 303
7029 Performance Comparison and Visualization of COMSOL Multiphysics, Matlab, and Fortran for Predicting the Reservoir Pressure on Oil Production in a Multiple Leases Reservoir with Boundary Element Method
Authors: N. Alias, W. Z. W. Muhammad, M. N. M. Ibrahim, M. Mohamed, H. F. S. Saipol, U. N. Z. Ariffin, N. A. Zakaria, M. S. Z. Suardi
Abstract:
This paper presents a performance comparison of several computational software environments for solving problems with the boundary element method (BEM). The BEM formulation is a numerical technique with high potential for solving the advanced mathematical models that predict the production of oil wells in arbitrarily shaped, multiple-lease reservoirs. The limited data validation available for ensuring that a program meets the accuracy of the mathematical model is the research motivation of this paper. Based on this limitation, three steps are involved in validating the accuracy of the oil production simulation process. In the first step, the mathematical model, a partial differential equation (PDE) of Poisson-elliptic type, is identified in order to perform the BEM discretization. In the second step, the simulation of the 2D BEM discretization is implemented using COMSOL Multiphysics and the MATLAB programming language. In the last step, the numerical performance indicators for both implementations are analyzed by validating them against a Fortran implementation. The performance comparisons of the numerical analysis are investigated in terms of percentage error, comparison graphs, and 2D visualization of the pressure on the oil production of the multiple-lease reservoir. According to the performance comparison, structured programming in Fortran is a suitable alternative for implementing accurate numerical simulations of the BEM. In conclusion, the requirements of a high-level language for numerical computation and for numerical performance evaluation are satisfied, demonstrating that Fortran is well suited for capturing the visualization of the production of oil wells in arbitrarily shaped reservoirs.
Keywords: performance comparison, 2D visualization, COMSOL Multiphysics, MATLAB, Fortran, modelling and simulation, boundary element method, reservoir pressure
Procedia PDF Downloads 490
7028 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods
Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin
Abstract:
Nowadays, the data center industry faces strong challenges in increasing speed and data processing capacity while at the same time trying to keep its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting those already existing would be a great advance in this type of industry. The installation of a temperature sensor matrix distributed in the structure of each server would provide the information required to obtain an instantaneous temperature profile inside them. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high and expensive. Therefore, other less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation methods, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward, and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the resulting Burgers' equation are the key to obtaining results with greater or lesser accuracy, each with its characteristic truncation error.
Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile
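A minimal finite-difference rendering of the discretization choices named above, applied to the 1D viscous Burgers' equation u_t + u u_x = nu u_xx: forward time stepping, a backward (upwind) difference for the convective first derivative, and a central difference for the diffusive second derivative. The grid, time step, viscosity, and initial profile are illustrative assumptions.

```python
# Minimal sketch: 1D viscous Burgers' equation with forward-time stepping,
# backward (upwind) differencing for u_x, central differencing for u_xx.
import numpy as np

nx, nt = 101, 500
dx, dt, nu = 2.0 / (nx - 1), 0.001, 0.07

u = np.ones(nx)
u[int(0.5 / dx):int(1.0 / dx) + 1] = 2.0  # step initial profile

for _ in range(nt):
    un = u.copy()
    # backward difference for the convective term (first derivative),
    # central difference for the diffusive term (second derivative)
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2.0 * un[1:-1] + un[:-2]))

print("profile extremes after 500 steps:", u.min().round(3), u.max().round(3))
```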
Procedia PDF Downloads 167
7027 Maintenance Wrench Time Improvement Project
Authors: Awadh O. Al-Anazi
Abstract:
As part of the organizational needs for successful maintenance activities, a proper management system needs to be put in place to ensure their effectiveness. The management system should clearly describe the process of identifying, prioritizing, planning, scheduling, and executing all maintenance activities, and of providing valuable feedback on them. A complete and accurate system, properly implemented, gives the organization a strong platform for effective maintenance activities and efficient outcomes that support business success. The purpose of this research was to introduce a practical tool for measuring the maintenance efficiency level within Saudi organizations. A comprehensive study was launched across maintenance professionals in leading Saudi organizations. The study covered five main categories: work process, identification, planning and scheduling, execution, and performance monitoring. Each category was evaluated across many dimensions to determine its current effectiveness on a five-level scale from 'process is not there' to 'mature implementation'. Wide participation was received, the responses were analyzed, and the study concluded by highlighting major gaps and improvement opportunities within Saudi organizations. One effective implementation of the efficiency-enhancement efforts was deployed in Saudi Kayan (SK), one of the SABIC affiliates. The project outcomes are as follows. SK's overall maintenance wrench time was measured at 20% (on average) of the total daily working time. The assessment indicated several organizational gaps, such as a high amount of reactive work, poor coordination and teamwork, unclear roles and responsibilities, and underutilization of resources. A multidisciplinary team was assigned to design and implement an appropriate work process capable of governing the execution process, improving maintenance workforce efficiency, and maximizing wrench time (target > 50%). The enhanced work process was introduced through brainstorming and wide benchmarking, incorporated with a proper change management plan and leadership sponsorship. The project was completed in 2018. Achieved results: SK wrench time was improved to 50%, which resulted in 1) reducing the average notification completion time and 2) reducing maintenance expenses on overtime and manpower support (3.6 MSAR actual savings against budget within 6 months). Keywords: efficiency, enhancement, maintenance, work force, wrench time
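Wrench time is the fraction of a technician's working day spent with tools on equipment rather than waiting, travelling, or searching for parts. A minimal sketch of the metric follows; the activity breakdown is a hypothetical example consistent with the study's 20% baseline, not data from the study:

```python
# Hypothetical daily time log for one technician, in hours.
day = {
    "hands-on tool time": 2.0,     # the only category counted as wrench time
    "waiting for permits": 2.5,
    "travel and parts search": 3.0,
    "meetings and breaks": 2.5,
}

total_hours = sum(day.values())
wrench_time_pct = 100.0 * day["hands-on tool time"] / total_hours
print(f"Wrench time: {wrench_time_pct:.0f}% of a {total_hours:.0f}-hour day")
```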
Procedia PDF Downloads 145
7026 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis
Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin
Abstract:
Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, raising concerns over the trust and reliability of a model whose results cannot be explained. To gain local insight into cancerous regions, separate tasks such as image segmentation must be implemented to aid doctors in treating patients, which doubles training time and cost and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for its image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps (CAMs) provide insight into the feature maps that lead to a convolutional neural network's classification; in the case of lung diseases, the region of interest is enhanced by U-Net-assisted CAM visualization. The proposed model therefore combines an image segmentation model and a classifier, cropping the class activation map of a chest X-ray to the lung region only. This yields a visualization that improves explainability while generating classification results simultaneously, building trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on test data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs. Keywords: multi-output network model, U-Net, class activation map, image classification, medical imaging analysis
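The paper's exact architecture is not reproduced in the abstract; the PyTorch sketch below (layer sizes and the masking step are illustrative assumptions) shows the general pattern of a shared encoder feeding both a segmentation head and a classification head, with the class activation map masked by the predicted segmentation so that only the lung region is highlighted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiOutputNet(nn.Module):
    """Toy multi-output model: shared encoder, segmentation head,
    classification head, and a segmentation-cropped CAM."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)   # 1-channel lung mask
        self.fc = nn.Linear(32, n_classes)    # classification head

    def forward(self, x):
        feats = self.encoder(x)                             # (B, 32, H/2, W/2)
        seg = torch.sigmoid(F.interpolate(self.seg_head(feats),
                                          size=x.shape[-2:], mode="bilinear"))
        logits = self.fc(feats.mean(dim=(2, 3)))            # global avg pool
        # CAM: weight the feature maps by the predicted class's fc weights.
        w = self.fc.weight[logits.argmax(1)]                # (B, 32)
        cam = (w[:, :, None, None] * feats).sum(1, keepdim=True)
        cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear")
        return seg, logits, cam * seg                       # lung-cropped CAM

model = MultiOutputNet()
seg, logits, cam = model(torch.randn(2, 1, 128, 128))  # 2 mock chest X-rays
```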
Procedia PDF Downloads 200
7025 Natural Regeneration Assessment of a Double Burnt Mediterranean Coniferous Forest: A Pilot Study from West Peloponnisos, Greece
Authors: Dionisios Panagiotaras, Ioannis P. Kokkoris, Dionysios Koulougliotis, Dimitra Lekka, Alexandra Skalioti
Abstract:
In the summer of 2021, Greece was affected by devastating forest fires in various regions of the country, resulting in human losses and in the destruction or degradation of the natural environment, infrastructure, livestock, and cultivations. The present study is a pilot assessment of natural vegetation regeneration in the second-largest (in terms of burnt area) fire-affected region of 2021, the Ancient Olympia area in West Peloponnisos (Ilia Prefecture), Greece. A standardised field sampling protocol for assessing natural regeneration was implemented at selected sites that had also burned in a previous fire (2007), after which the vegetation (a Pinus halepensis forest) had regenerated naturally. The results indicate the loss of the established natural regeneration of the Pinus halepensis forest, and of the tree layer as a whole. Post-fire succession species are recorded in the shrub and herb layers, with varying cover. The present findings correspond to field work and analysis one year after the fire and will form the basis for further research and for conclusions on restoration actions in areas affected by fire more than once within a 20-year period. Keywords: forest, Pinus halepensis, Ancient Olympia, post-fire vegetation
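As a simple sketch of how plot-based counts from such a sampling protocol scale up to per-hectare regeneration densities (the plot size and seedling counts below are invented for illustration; the study's actual figures are not given in the abstract):

```python
# Hypothetical 10 m x 10 m (0.01 ha) sampling plots with the number of
# Pinus halepensis seedlings counted in each after the second fire.
plot_area_ha = 0.01
seedling_counts = [0, 1, 0, 0, 2, 0]

density = sum(seedling_counts) / (len(seedling_counts) * plot_area_ha)
print(f"Mean regeneration density: {density:.0f} seedlings/ha")
```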
Procedia PDF Downloads 92
7024 Assessing the Impact of Human Behaviour on Water Resource Systems Performance: A Conceptual Framework
Authors: N. J. Shanono, J. G. Ndiritu
Abstract:
The poor performance of water resource systems (WRS) has been linked not only to climate variability and water demand dynamics but also to unlawful, human behaviour-driven activities. These unlawful activities, which adversely affect the water sector, include unauthorized water abstraction, wasteful water-use behaviour, refusal of water re-use measures, excessive operational losses, discharge of untreated or improperly treated wastewater, over-application of chemicals by agricultural users, and fraudulent WRS operation. Despite advances in WRS planning, operation, and analysis, quantitatively assessing the impact of such undesirable human activities on WRS performance remains elusive. This study was inspired by the need for a methodological framework that integrates the impact of human behaviour into WRS performance assessment. We therefore propose a conceptual framework for assessing the impact of human behaviour on WRS performance using the concepts of socio-hydrology. The framework identifies and couples four major sources of WRS-related values (water values, water systems, water managers, and water users) through three missing links between humans and water in WRS management (interactions, outcomes, and feedbacks). The framework is to serve as a basis for choosing relevant social and hydrological variables and for understanding the intrinsic relations between the selected variables when studying a specific human-water problem in the context of WRS management. Keywords: conceptual framework, human behaviour, socio-hydrology, water resource systems
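One way to make the proposed coupling concrete is as a small data structure; the four value sources and three links below transcribe the abstract, while the structure itself is an assumption of this sketch rather than the authors' design:

```python
from dataclasses import dataclass

# The four sources of WRS-related values named in the framework.
VALUE_SOURCES = ("water values", "water systems", "water managers", "water users")
# The three human-water links used to couple them.
LINKS = ("interactions", "outcomes", "feedbacks")

@dataclass
class FrameworkVariable:
    """A social or hydrological variable registered against the framework."""
    name: str
    source: str   # one of VALUE_SOURCES
    link: str     # one of LINKS

registry = [
    FrameworkVariable("unauthorized abstraction rate", "water users", "interactions"),
    FrameworkVariable("reservoir yield reliability", "water systems", "outcomes"),
]
```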
Procedia PDF Downloads 133