Search results for: information coefficient
3148 The Effect of Support Program Based on the Health Belief Model on Reproductive Health Behavior in Women with Orthopedic Disability
Authors: Eda Yakit Ak, Ergül Aslan
Abstract:
The study was conducted using a quasi-experimental design to determine the influence of a nursing support program, prepared according to the Health Belief Model, on the reproductive health behaviors of orthopedically disabled women in the physical therapy and rehabilitation clinic of a university hospital between August 2019 and October 2020. The research sample included 50 women with orthopedic disability (35 in the control group and 15 in the experimental group). A 3-week nursing support program was applied to the women in the experimental group. To collect the data, an Introductory Information Form and the Scale for Determining the Protective Attitudes of Married Women towards Reproductive Health (SDPAMW) were applied. The evaluation was made with a follow-up form over four months. In the first evaluation, the total SDPAMW scores were 119.93±20.59 for the experimental group and 122.20±16.71 for the control group. In the final evaluation, the total SDPAMW scores were 144.27±11.95 for the experimental group and 118.00±16.43 for the control group. The difference between the groups regarding the first and final evaluations of the total SDPAMW scores was statistically significant (p<0.01). In the experimental group, between the first and final evaluations of the sub-dimensions of the SDPAMW, an increase was found in the behaviors of seeing the doctor on reproductive health issues, protection from reproductive organ and breast cancer, general health behaviors to protect reproductive health, and protection from genital tract infections (p<0.05). Consequently, the nursing support program based on the Health Belief Model applied to orthopedically disabled women positively affected their reproductive health behaviors.
Keywords: orthopedically disabled, woman, reproductive health, nursing support program, health belief model
Procedia PDF Downloads 151
3147 Groupthink: The Dark Side of Team Cohesion
Authors: Farhad Eizakshiri
Abstract:
The potential for groupthink to explain the issues contributing to the deterioration of decision-making ability within a unitary team, and so to cause poor outcomes, has attracted a great deal of attention from a variety of disciplines, including psychology, social and organizational studies, political science, and others. Yet what remains unclear is how and why team members' strivings for unanimity and cohesion override their motivation to realistically appraise alternative courses of action. In this paper, the findings of a sequential explanatory mixed-methods study, comprising an experiment with thirty groups of three persons each and interviews with all experimental groups, are reported. The experiment sought to examine how individuals aggregate their views in order to reach a consensual group decision concerning the completion time of a task. The results indicated that groups made better estimates when there was no interaction between members than when groups collectively agreed on time estimates. To understand the reasons, the qualitative data and informal observations collected during the task were analyzed through conversation analysis, leading to four reasons that caused teams to neglect divergent viewpoints and reduce the number of ideas being considered. The reasons found were the concurrence-seeking tendency, pressure on dissenters, self-censorship, and the illusion of invulnerability. It is suggested that understanding the dynamics behind the aforementioned causes of groupthink will help project teams avoid making premature group decisions by enhancing careful evaluation of available information and analysis of available decision alternatives and choices.
Keywords: groupthink, group decision, cohesiveness, project teams, mixed-methods research
Procedia PDF Downloads 397
3146 Geometry, the Language of Manifestation of Tabriz School's Mystical Thoughts in Architecture (Case Study: Dome of Soltanieh)
Authors: Lida Balilan, Dariush Sattarzadeh, Rana Koorepaz
Abstract:
In the Ilkhanid era, the mystical school of Tabriz manifested itself as an art school in various fields, including miniatures, architecture, and urban planning and design, simultaneously with the expansion of the many sciences of its time. In this era, mysticism, both in form and in poetry and prose, as well as in works of art, reached its peak. Mysticism, as an inner belief and way of thought, brought the audience to an artistic and aesthetic view using allegorical and symbolic expressions of religion, and it had a direct impact on the formation of the intellectual and cultural layers of society. At the same time, the mystic school of Tabriz was able to create a symbolic and allegorical language for magnificent works of architecture, drawing on the expansion of science in various fields and using sciences such as mathematics, geometry, the science of numbers, and Abjad letters. In this era, geometry was the middle link between mysticism and architecture; based on its function, it is divided into two categories: intellectual geometry and sensory geometry. The Soltaniyeh dome is one of the prominent buildings of the Tabriz school, serving as a shrine. In this article, information is collected using a historical-interpretive method, and the results are analyzed using an analytical-comparative method. The results of the study suggest that the designers and builders of the Soltaniyeh dome used shapes, colors, numbers, letters, and words, in the form of motifs, geometric patterns, lines, and writings, at levels and layers ranging from plans to decorations and arrays, for architectural symbolization and encryption to express and transmit mystical ideas.
Keywords: geometry, Tabriz school, mystical thoughts, dome of Soltaniyeh
Procedia PDF Downloads 87
3145 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle
Authors: Hu Ding, Kai Liu, Guoan Tang
Abstract:
The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Detection of gully features, including the gully-affected area and its two-dimensional parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study aims at gully-affected area detection in three catchments of the Chinese Loess Plateau, selected in Changwu, Ansai, and Suide, using an unmanned aerial vehicle (UAV). The methodology consists of a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influences of the segmentation strategy and feature selection. Results showed that the vertical and horizontal root-mean-square errors were below 0.5 and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which considers topographic information, together with an optimal parameter combination, can improve the segmentation results. Moreover, the overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, indicating that the proposed method for detecting gully-affected areas is more objective and effective than traditional methods. This study demonstrated that UAVs can bridge the gap between field measurement and satellite-based remote sensing, striking a balance between resolution and efficiency for catchment-scale gully erosion research.
Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest
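The random forest stage of the pipeline described above maps to a few lines of code. Below is a minimal sketch, assuming per-segment features (e.g., mean slope, relief, spectral statistics) have already been computed from the segmented UAV imagery; the feature count, labels, and 80/20 split are illustrative assumptions, not the authors' exact setup:

```python
# Sketch of segment-level random forest classification for gully detection.
# X is a stand-in for per-segment features; y marks gully-affected segments.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # assumed: 6 features per segment
y = rng.integers(0, 2, size=500)     # 1 = gully-affected, 0 = other

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```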
Procedia PDF Downloads 219
3144 Key Success Factors and Enterprise Resource Planning Implementation in Higher Education Institutions: Multiple Case Studies of Jordanian Universities
Authors: Abdallah Abu Madi, Dongmei Cao, Alexeis Garcia-Perez, Qile He
Abstract:
The failure rate of Enterprise Resource Planning (ERP) implementation in higher education institutions (HEIs) worldwide is much higher than in other sectors, such as banking or manufacturing, yet to our knowledge limited research has been conducted on this issue. To date, prior literature has identified some key success factors (KSFs), mostly either in the domain of information systems (IS) or in industrial contexts. However, evidence on ERP implementation in the higher education sector has received little attention in the extant literature. Hence, this paper identifies and categorizes KSFs of ERP implementation in HEIs. Semi-structured face-to-face interviews were conducted with technicians and managers from three Jordanian HEIs. From these case studies, three new sector- and context-specific KSFs were identified and categorized along two dimensions: organizational and technical. The first new KSF is the selection of the ERP system, an influential factor in the organizational dimension. Results show that an ERP solution that is suitable in one context may be disastrous in another. The second new KSF, which falls under the technical dimension, is the relationship between vendors and HEIs. This must be fair and impartial to enable successful decision-making and thus the achievement of pre-defined goals. Also within the technical dimension is the third factor: in-house maintenance. Once an appropriate system is selected and a strong relationship is established, the ERP system requires continuous maintenance for effective operation. HEIs should ensure that qualified IT support is in place in-house to avoid excessive running expenses.
Keywords: Enterprise Resource Planning (ERP) implementation, key success factors, higher education institutions, Jordanian higher education
Procedia PDF Downloads 211
3143 Municipal Solid Waste Management Using Life Cycle Assessment Approach: Case Study of Maku City, Iran
Authors: L. Heidari, M. Jalili Ghazizade
Abstract:
This paper aims to determine the best environmental and economic scenario for municipal solid waste (MSW) management in the city of Maku by using a Life Cycle Assessment (LCA) approach. The functional elements of this study are the collection, transportation, and disposal of MSW in Maku. Waste composition and density, as two key parameters of MSW, were determined by field sampling, and then other important specifications of the MSW, such as its chemical formula, thermal energy, and water content, were calculated. These data, alongside other information related to collection and disposal facilities, are used as a reliable data source to assess the environmental impacts of different waste management options, including landfilling, composting, recycling, and energy recovery. The environmental impact of the MSW management options was investigated in 15 different scenarios using the Integrated Waste Management (IWM) software. The photochemical smog, greenhouse gases, acid gases, toxic emissions, and energy consumption of each scenario are measured, and the environmental indices of each scenario are then specified by weighting these parameters. The economic costs of the scenarios were also compared with each other based on the literature. As a final result, since organic materials make up more than 80% of the waste, composting can be a suitable method. Although the major part of the remaining 20% of the waste can be recycled, the landfill option has been suggested due to the high cost of the necessary equipment. Therefore, the scenario with 80% composting and 20% landfilling is selected as the superior environmental and economic scenario. This study shows that, to select a scenario with practical applications, the environmental and economic aspects of different scenarios must be considered simultaneously.
Keywords: IWM software, life cycle assessment, Maku, municipal solid waste management
Procedia PDF Downloads 241
3142 The Study on Corpse Floating Time in the Shanghai Region of China
Authors: Hang Meng, Wen-Bin Liu, Bi Xiao, Kai-Jun Ma, Jian-Hui Xie, Geng Fei, Tian-Ye Zhang, Lu-Yi Xu, Dong-Chuan Zhang
Abstract:
The bodies of drowning victims are often found in coastal regions, along rivers, or in regions with lakes. In China, the examination of the bodies of victims found in water is conducted by forensic doctors working in the public security bureau. Because the time of entry into the water is unclear for most victims, and surveillance images and other information are often lacking, determining when a victim's corpse entered the water is very difficult. After the corpse of a victim enters the water, it sinks first; gases of decomposition are then produced, which make the density of the corpse less than that of water, so the body rises again. The factor that determines the corpse floating time is therefore temperature. This study is based on temperature data obtained for the Shanghai region of China (Shanghai has a north subtropical marine monsoon climate, with an average annual temperature of about 17.1℃; the hottest month is July, with an average monthly temperature of 28.6℃, and the coldest month is January, with an average monthly temperature of 4.8℃). The study selected about 100 cases with definite times of entry into the water and corpse floating times, analyzed the cases, and obtained an empirical law for the corpse floating time. For example, in the Shanghai region, around June 15th and October 15th, the corpse floating time is about 1.5 days. Bodies that enter the water in early December surface around January 1st of the following year, and bodies that enter the water in late December float in March of the next year. The results of this study can be used to roughly estimate the time of entry into the water of victims in Shanghai. Forensic doctors around the world can also draw on these results to infer when the corpses of victims in the water will surface.
Keywords: corpse enter water time, corpse floating time, drowning, forensic pathology, victims in the water
Procedia PDF Downloads 198
3141 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images
Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn
Abstract:
The detection and segmentation of mitochondria from fluorescence microscopy images are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the task of detection and segmentation challenging. In the literature, a number of open-source software tools and artificial intelligence (AI) methods have been described for analyzing mitochondrial images, achieving remarkable classification and quantitation results. However, the combined expertise in the medical field and AI required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV libraries, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using a mitochondrial fluorescence dataset. Ground-truth labels generated using Labkit were also used to evaluate the performance of our detection and segmentation model. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion of the methods and future perspectives of fully automated frameworks concludes the paper.
Keywords: 2D, binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation
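The three-stage pipeline named above (pre-processing, binarization, coarse-to-fine segmentation) can be illustrated with OpenCV, which the abstract cites. A minimal sketch; the file name, CLAHE settings, choice of Otsu thresholding, and area filter are illustrative assumptions rather than the published implementation:

```python
# Sketch of the three-stage pipeline on a grayscale fluorescence image.
import cv2

img = cv2.imread("mitochondria.png", cv2.IMREAD_GRAYSCALE)

# Stage 1: pre-processing with CLAHE (named in the keywords)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Stage 2: image binarization (Otsu threshold as one plausible choice)
_, binary = cv2.threshold(enhanced, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Stage 3: coarse segmentation via connected contours, filtered by area
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
mitochondria = [c for c in contours if cv2.contourArea(c) > 20]
print(f"detected {len(mitochondria)} candidate mitochondria")
```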
Procedia PDF Downloads 359
3140 Biosignal Recognition for Personal Identification
Authors: Hadri Hussain, M. Nasir Ibrahim, Chee-Ming Ting, Mariani Idroas, Fuad Numan, Alias Mohd Noor
Abstract:
Biometric security systems have become an important application in client identification and verification. A conventional biometric system is normally based on a unimodal biometric that depends on either behavioural or physiological information for authentication purposes. Behavioural biometrics depend on human body signals such as speech, while biosignal biometrics include the electrocardiogram (ECG) and the phonocardiogram or heart sound (HS). The speech signal is commonly used in biometric recognition systems, while the ECG and the HS have been used to identify a person's diseases uniquely related to their cluster. However, conventional biometric systems are liable to spoof attacks, which affect the performance of the system. Therefore, a multimodal biometric security system was developed, based on the biometric signals of ECG, HS, and speech. The biosignal data involved in the biometric system are initially segmented, and for each segment the Mel Frequency Cepstral Coefficient (MFCC) method is used to extract features. A Hidden Markov Model (HMM) is used to model each client and to classify the unknown input with respect to the models. The recognition system involves training and testing sessions, in what is known as client identification (CID). In this project, twenty clients were tested with the developed system. The best overall performance at 44 kHz was 93.92% for ECG, and the worst overall performance was 88.47%, also for ECG. These results were compared with those obtained at 44 kHz when the number of clients was increased beyond the 20-client baseline, where the best overall performance was 90.00% for HS and the worst fell to 79.91% for ECG. It can be concluded that the choice of modality in a multimodal biometric system has a substantial effect on performance, and that with more data, even at a higher sampling frequency, the performance still decreased slightly, as predicted.
Keywords: electrocardiogram, phonocardiogram, hidden Markov model, mel frequency cepstral coefficients, client identification
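The segment-MFCC-HMM scheme described above can be sketched with common open-source libraries. A minimal illustration, assuming librosa for MFCC extraction and hmmlearn for per-client Gaussian HMMs; the component counts and all other parameters are assumptions, not the authors' configuration:

```python
# Sketch: one Gaussian HMM per client trained on MFCC features; the unknown
# input is assigned to the client whose model yields the highest likelihood.
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(signal, sr):
    # 13 MFCCs per frame, transposed to shape (n_frames, n_features)
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T

def train_client_model(training_signals, sr):
    feats = [mfcc_features(s, sr) for s in training_signals]
    X = np.vstack(feats)
    lengths = [len(f) for f in feats]
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
    return model.fit(X, lengths)

def identify(models, signal, sr):
    # models: dict mapping client name -> trained GaussianHMM
    X = mfcc_features(signal, sr)
    return max(models, key=lambda client: models[client].score(X))
```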
Procedia PDF Downloads 280
3139 Feasibility Study of MongoDB and Radio Frequency Identification Technology in an Asset Tracking System
Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi
Abstract:
In real-world settings, higher academic institutions, small, medium, and large companies, and organizations in the public, private, and remaining sectors experience inventory or asset shrinkage due to theft, loss, or inventory tracking errors. This happens because of absent or poor security systems and measures in these organizations. Henceforth, integrating Radio Frequency Identification (RFID) technology into any manual or existing web-based system or web application can deter, and will eventually solve, certain major issues, enabling better data retrieval and data access. Moreover, such a manual or existing system can be enhanced into a mobile-based system or application, and the availability of internet connections can support better services from the system. This involvement of various technologies brings various benefits to individuals and organizations in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information, and security. This paper looks deeper into the integration of mobile devices with RFID technologies for the purpose of asset tracking and control. It then covers the development and utilization of MongoDB as the main database to store data and its association with RFID technology, and finally the development of a web-based system that can be viewed in a mobile-based format with the aid of Hypertext Preprocessor (PHP), MongoDB, Hyper-Text Markup Language 5 (HTML5), Android, JavaScript, and AJAX.
Keywords: RFID, asset tracking system, MongoDB, NoSQL
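As an illustration of MongoDB's role as the main data store, the sketch below shows how an RFID read event might be recorded (here via Python's pymongo rather than PHP, for brevity). The collection layout and field names are assumptions, not the authors' schema:

```python
# Sketch: upsert an asset document per RFID tag and append each read event
# to the asset's movement history.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
assets = client["asset_tracking"]["assets"]

def record_rfid_read(tag_id: str, reader_location: str) -> None:
    now = datetime.now(timezone.utc)
    assets.update_one(
        {"tag_id": tag_id},
        {
            "$set": {"last_seen": now, "location": reader_location},
            "$push": {"history": {"location": reader_location, "time": now}},
        },
        upsert=True,
    )

record_rfid_read("E200-3412-DC03", "Store Room B")
```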
Procedia PDF Downloads 307
3138 Perceived Structural Empowerment and Work Commitment among Intensive Care Nurses in SMC
Authors: Ridha Abdulla Al Hammam
Abstract:
Purpose: to measure the extent of perceived structural empowerment and work commitment that intensive care unit nurses in SMC have in their workplace. Background: nurses' access to power structures (information, resources, opportunity, and support) directly influences their productivity, retention, and job satisfaction. Exploring nurses' levels and sources of work commitment (affective, normative, and continuance) is essential to guide nursing leaders in making decisions that improve the work environment and facilitate effective nursing care. Neither concept (structural empowerment or work commitment) had previously been investigated in our critical care unit. Methods: a sample of 50 nurses was attained from the (adult) Intensive Care Unit. The Conditions for Workplace Effectiveness Questionnaire and the Three-Component Model Employee Commitment Survey were used to measure the two concepts, respectively. The study is quantitative, descriptive, and correlational in design. Results: the participants reported moderate structural empowerment in their workplace (M=15 out of 20). The sample perceived high access to opportunity, mainly through gaining more skills (M=4.45 out of 5), while the remaining power structures were perceived with moderate accessibility. The participants' affective commitment (M=5.6 out of 7) to working in the ICU outweighed their normative and continuance commitment (M=5.1 and M=4.9 out of 7), implying a stronger emotional connection with their unit. Strong, positive, and significant correlations were observed between the participants' structural empowerment scores and all sources of work commitment. Conclusion: these results provide insight into aspects of the work environment that need to be fostered and improved in our intensive care unit, which are directly linked to nurses' work commitment and, potentially, to the quality of care they provide.
Keywords: structural empowerment, commitment, intensive care, nurses
Procedia PDF Downloads 289
3137 Impact of Climate Change on Sea Level Rise along the Coastline of Mumbai City, India
Authors: Chakraborty Sudipta, A. R. Kambekar, Sarma Arnab
Abstract:
Sea-level rise is one of the most important impacts of anthropogenic climate change, resulting from global warming and the melting of ice at the Arctic and Antarctic. The investigations carried out by various researchers during the last decade, both on the Indian coast and elsewhere, are reviewed in this paper. The paper aims to ascertain the consistency of different suggested methods for predicting, with near accuracy, future sea level rise along the coast of Mumbai. Case studies on the east coast, the southern tip, and the west and south-west coasts of India are reviewed. The Coastal Vulnerability Index of several important international locations is compared and found to match Intergovernmental Panel on Climate Change forecasts. The reviewed work includes applications of Geographic Information System mapping and remote sensing technology, with both Multi Spectral Scanner and Thematic Mapper data from Landsat classified through the Iterative Self-Organizing Data Analysis Technique to arrive at high, moderate, and low Coastal Vulnerability Index values for various important coastal cities. Instead of purely data-driven, hindcast-based forecasts of Significant Wave Height, incorporating the additional impact of sea level rise has been suggested. The efficacy and limitations of numerical methods vis-à-vis Artificial Neural Networks are assessed, and the importance of the Root Mean Square error of numerical results is noted. Comparing the various computerized methods, forecast results obtained from MIKE 21 are considered more reliable than those from the Delft3D model.
Keywords: climate change, Coastal Vulnerability Index, global warming, sea level rise
Procedia PDF Downloads 133
3136 Reducing Defects through Organizational Learning within a Housing Association Environment
Authors: T. Hopkin, S. Lu, P. Rogers, M. Sexton
Abstract:
Housing Associations (HAs) contribute circa 20% of the UK's housing supply. HAs are, however, under increasing pressure as a result of funding cuts and rent reductions. Due to this increased pressure, a number of processes are currently being reviewed by HAs, especially how they manage and learn from defects. Learning from defects is considered a useful approach to achieving defect reduction within the UK housebuilding industry. This paper contributes to our understanding of how HAs learn from defects by undertaking an initial round-table discussion with key HA stakeholders, as part of an ongoing collaborative research project with the National House Building Council (NHBC), to better understand how house builders and HAs learn from defects to reduce their prevalence. The initial discussion shows that defect information passes through a number of groups, both internal and external to a HA, during both the defects management process and the organizational learning (OL) process. Furthermore, HAs are reliant on capturing and recording defect data as the foundation for the OL process. During the OL process, defect data analysis is the primary enabler for recognizing a need to change organizational routines. When a need for change has been recognized, new options are typically pursued to design out defects via updates to a HA's Employer's Requirements. Proposed solutions are selected by a review board and committed to organizational routine. After implementing a change, both structured and unstructured feedback is sought to establish the change's success. The findings from the HA discussion demonstrate that OL can achieve defect reduction within the house building sector in the UK. The paper concludes by outlining a potential 'learning from defects' model for the housebuilding industry as well as describing future work.
Keywords: defects, new homes, housing association, organizational learning
Procedia PDF Downloads 317
3135 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KGs) and their relation to Graph Embeddings (GEs) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the performance this model achieves on the WN18 benchmark. The model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
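For reference, the TransE baseline mentioned above scores a triple (h, r, t) by treating the relation as a translation in embedding space, so that h + r ≈ t for plausible triples. A minimal sketch, with random vectors standing in for learned embeddings:

```python
# Sketch of the TransE scoring function: lower distance = more plausible.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
head, relation, tail = (rng.normal(size=dim) for _ in range(3))

def transe_score(h, r, t, norm=1):
    # The L1 (or L2) distance ||h + r - t||
    return np.linalg.norm(h + r - t, ord=norm)

print("triple score:", transe_score(head, relation, tail))
```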
Procedia PDF Downloads 69
3134 Architecture for QoS Based Service Selection Using Local Approach
Authors: Gopinath Ganapathy, Chellammal Surianarayanan
Abstract:
Services are growing rapidly, and they are generally aggregated into composite services to accomplish complex business processes. There may be several services that offer the same required function for a particular task in a composite service, so a choice has to be made to select suitable services from among alternative, functionally similar services. Quality of Service (QoS) serves as the discriminating factor in deciding which component services should be selected to satisfy the quality requirements of a user during service composition. There are two categories of approaches for QoS-based service selection, namely global and local approaches. Global approaches are known to be NP-hard in time and offer poor scalability in large-scale composition. As an alternative to global methods, local selection methods are emerging; these reduce the search space by breaking up the large, complex problem of selecting services for the workflow into independent sub-problems of selecting services for individual tasks. In this paper, a distributed architecture for selecting services based on QoS using local selection is presented, with an overview of the local selection methodology. The architecture describes the core components needed to implement the local approach, namely the selection manager and the QoS manager, and their functions. The selection manager consists of two components: a constraint decomposer, which decomposes the given global or workflow-level constraints into local or task-level constraints, and a service selector, which selects the appropriate service for each task with maximum utility while satisfying the corresponding local constraints. The QoS manager manages QoS information at two levels, namely the service class level and the individual service level. The architecture serves as an implementation model for local selection.
Keywords: architecture of service selection, local method for service selection, QoS based service selection, approaches for QoS based service selection
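The service selector's task-level step can be sketched directly: filter the candidates by the local constraints produced by the constraint decomposer, then pick the feasible candidate with maximum utility. The QoS fields and utility weights below are illustrative assumptions:

```python
# Sketch of per-task local selection under task-level QoS constraints.
def select_service(candidates, max_latency_ms, min_availability):
    # Keep only candidates that satisfy the local constraints
    feasible = [s for s in candidates
                if s["latency_ms"] <= max_latency_ms
                and s["availability"] >= min_availability]

    def utility(s):
        # Weighted utility over normalised QoS attributes (weights assumed)
        return (0.5 * s["availability"]
                - 0.3 * s["latency_ms"] / max_latency_ms
                - 0.2 * s["cost"] / 10.0)

    return max(feasible, key=utility, default=None)

candidates = [
    {"name": "A", "latency_ms": 120, "availability": 0.99, "cost": 4.0},
    {"name": "B", "latency_ms": 80, "availability": 0.97, "cost": 5.5},
]
print(select_service(candidates, max_latency_ms=100, min_availability=0.95))
```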
Procedia PDF Downloads 427
3133 Numerical Simulation of Flow and Heat Transfer Characteristics with Various Working Conditions inside a Reactor of Wet Scrubber
Authors: Jonghyuk Yoon, Hyoungwoon Song, Youngbae Kim, Eunju Kim
Abstract:
Recently, with the rapid growth of the semiconductor industry, much interest has been focused on after-treatment systems that remove the polluted gases produced by semiconductor manufacturing processes, and the wet scrubber is one of the most widely used systems. Regarding the removal mechanism, the polluted gas is removed first by chemical reaction in a reactor part; after that, the polluted gas stream is brought into contact with the scrubbing liquid by spraying. Effective design of the reactor part inside the wet scrubber is highly important, since the removal performance of the polluted gas in the reactor plays an important role in overall performance and stability. In the present study, a CFD (Computational Fluid Dynamics) analysis was performed to determine the thermal and flow characteristics inside a unit reactor of a wet scrubber. To verify the numerical results, the temperature distribution at various monitoring points was compared with experimental results; the average error rates between them (12~15%) were obtained, and the numerical temperature distribution was in good agreement with the experimental data. Using the validated numerical method, the effect of the reactor geometry on the heat transfer rate was also examined, and the uniformity of the temperature distribution was improved by about 15%. Overall, the results of the present study provide useful information for identifying the fluid behavior and thermal performance of various scrubber systems. This project is supported by the 'R&D Center for the Reduction of Non-CO₂ Greenhouse Gases (RE201706054)' funded by the Korea Ministry of Environment (MOE) as a Global Top Environment R&D Program.
Keywords: semiconductor, polluted gas, CFD (Computational Fluid Dynamics), wet scrubber, reactor
Procedia PDF Downloads 146
3132 Mapping Soils from Terrain Features: The Case of Nech Sar National Park of Ethiopia
Authors: Shetie Gatew
Abstract:
Current soil maps of Ethiopia do not accurately represent the soils of Nech Sar National Park. In the framework of studies on the ecology of the park, we prepared a soil map based on field observations and a digital terrain model derived from SRTM data with a 30-m resolution. The landscape comprises volcanic cones, lava and basalt outflows, undulating plains, horsts, alluvial plains, and river deltas. SOTER-like terrain mapping units were identified. First, the DTM was classified into 128 terrain classes defined by slope gradient (4 classes), relief intensity (4 classes), potential drainage density (2 classes), and hypsometry (4 classes). A soil-landscape relation between the terrain mapping units and WRB soil units was then established based on 34 soil profile pits. Based on this relation, the terrain mapping units were either merged or split to produce a comprehensive soil and terrain map. The soil map indicates that Leptosols (30%), Cambisols (26%), Andosols (21%), Fluvisols (12%), and Vertisols (9%) are the most widespread Reference Soil Groups of the park. In contrast, the harmonized soil map of Africa, derived from the FAO soil map of the world, indicates that Luvisols (70%), Vertisols (14%), and Fluvisols (16%) would be the most common Reference Soil Groups. However, these latter mapping units are not consistent with the topography, nor did we find such extensive areas occupied by Luvisols during the field survey. This case study shows that, with the now freely available SRTM data, it is possible to improve current soil information layers with relatively limited resources, even in complex terrain like that of Nech Sar National Park.
Keywords: andosols, cambisols, digital elevation model, leptosols, soil-landscape relation
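The 4 × 4 × 2 × 4 classification yields exactly 128 terrain classes, which can be encoded per DTM cell as in the sketch below. Only the combinatorial structure follows the abstract; the class breakpoints are illustrative assumptions:

```python
# Sketch: encode each DTM cell into one of 128 terrain classes.
import numpy as np

def classify(value, breaks):
    # Returns 0..len(breaks) depending on the interval the value falls in
    return int(np.searchsorted(breaks, value))

def terrain_class(slope_deg, relief_m, drainage_density, elevation_m):
    s = classify(slope_deg, [2, 8, 15])           # 4 slope-gradient classes
    r = classify(relief_m, [50, 100, 200])        # 4 relief-intensity classes
    d = classify(drainage_density, [1.5])         # 2 drainage-density classes
    h = classify(elevation_m, [500, 1000, 1500])  # 4 hypsometry classes
    # Unique code in 0..127 (4 * 4 * 2 * 4 = 128 combinations)
    return ((s * 4 + r) * 2 + d) * 4 + h

print(terrain_class(slope_deg=5, relief_m=120,
                    drainage_density=2.0, elevation_m=1200))
```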
Procedia PDF Downloads 108
3131 Design of Microwave Building Block by Using Numerical Search Algorithm
Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Qing Fang, Mingbin Yu, Guoqiang Lo
Abstract:
With the development of technology, countries have gradually allocated more and more frequency spectrum for civil and commercial usage, especially the high radio frequency bands that offer high information capacity. Field effects become more and more prominent in microwave components as frequency increases, which invalidates transmission line theory and complicates the design of microwave components. Here, a modeling approach based on a numerical search algorithm is proposed to design various building blocks for microwave circuits, avoiding complicated impedance matching and equivalent electrical circuit approximation. Concretely, a microwave component is discretized into a set of segments along the microwave propagation path. Each segment is initialized with random dimensions, which constructs a multi-dimensional parameter space. Numerical search algorithms (e.g., the pattern search algorithm) are then used to find the ideal geometrical parameters. The optimal parameter set is achieved by evaluating the fitness of the S-parameters over a number of iterations. We have adopted this approach in our current projects and designed many microwave components, including sharp bends, T-branches, Y-branches, microstrip-to-stripline converters, etc. For example, a stripline 90° bend was designed in a 2.54 mm x 2.54 mm space for dual-band operation (Ka band and Ku band) with < 0.18 dB insertion loss and < -55 dB reflection. We expect that this approach can enrich the toolkit of microwave designers.
Keywords: microwave component, microstrip and stripline, bend, power division, numerical search algorithm
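The optimization loop described above can be illustrated with a basic pattern (compass) search: perturb each segment dimension along coordinate directions, accept improvements, and contract the step when none is found. The fitness function here is a toy stand-in for an S-parameter evaluation from a field solver:

```python
# Sketch of pattern search over segment dimensions of a discretized component.
import numpy as np

def fitness(dims):
    # Placeholder: in practice this would run an EM simulation and score
    # insertion loss / reflection of the discretized component
    return np.sum((dims - 0.7) ** 2)

def pattern_search(x0, step=0.2, tol=1e-4, max_iter=1000):
    x = np.array(x0, dtype=float)
    fx = fitness(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = fitness(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0              # contract the mesh
            if step < tol:
                break
    return x, fx

dims0 = np.random.default_rng(1).uniform(0.1, 1.0, size=6)  # random init
print(pattern_search(dims0))
```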
Procedia PDF Downloads 382
3130 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique
Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu
Abstract:
Recently, medical imaging, and specifically medical image processing, has become one of the most dynamically developing areas of medical science, leading to the emergence of new approaches to the prevention, diagnosis, and treatment of various diseases. In the process of diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. The identification and demarcation of masses, in terms of detecting cancer within lung tissue, are critical challenges in diagnosis. In this work, a segmentation system based on image processing techniques has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm have been presented through simulation, performed on CT images using multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features carrying useful information are selected after feature extraction. Eventually, the output image of lung cancer is obtained with 96.3% accuracy and 87.25%. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, including further details available to machine vision systems to recognise objects in lung CT scan images.
Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing
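The multilevel-thresholding segmentation step can be illustrated with scikit-image's multi-Otsu implementation. A minimal sketch; the three-class split and the file name are illustrative assumptions, not the authors' exact algorithm:

```python
# Sketch: multilevel thresholding of a CT slice into three intensity classes.
import numpy as np
from skimage import io
from skimage.filters import threshold_multiotsu

ct_slice = io.imread("lung_ct_slice.png", as_gray=True)

# Split intensities into 3 classes (e.g., background, parenchyma, lesion)
thresholds = threshold_multiotsu(ct_slice, classes=3)
regions = np.digitize(ct_slice, bins=thresholds)

# Candidate lesion mask = brightest class; feature extraction would follow
lesion_mask = regions == 2
print("candidate lesion pixels:", int(lesion_mask.sum()))
```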
Procedia PDF Downloads 101
3129 Analyzing the Permissibility of Demonstration in Islamic Perspective: Case Study of Former Governor of Jakarta Basuki Tjahaja Purnama
Authors: Ahmad Syauqi
Abstract:
This paper analyzes, from an Islamic point of view, the permissibility of demonstrations against a leader's decisions, policies, and statements that run counter to Islamic values. At the end of 2016, a large demonstration in Jakarta involving many people, mostly from Muslim society, against the former Governor of Jakarta, Basuki Tjahaja Purnama, was considered a form of harm to the harmony and unity of religious communities in Indonesia. Hence, this paper aims to answer the question that became a tough discussion and a long debate among Indonesian Muslims after the immense demonstration known as the 212 movement: how exactly does Islam view such acts of demonstration? Is there any particular historical source in Islamic history that provides information related to demonstrations? A phenomenological qualitative method was implemented throughout this research to study the perspectives of various Muslim scholars by reviewing and comparing their opinions through classical sources of Islamic history and the Hadith literature. One of the main roots of this extensive debate is the position of extremist groups, which ban all forms of demonstration, assuming that such acts came from the West and are a culture unknown in Islamic history; they also claim that all demonstrators are Bughat. Meanwhile, some other groups freely declare that demonstrations can be held anytime and anywhere, without any specific associated terms and regulations. The findings of this research illustrate that protests of the kind we now know as demonstrations have existed since ancient times, even from the time of the Prophet Muhammad (peace be upon him). This paper reveals strong evidence that demonstration is justified in Islamic law and has historical roots, which can therefore serve as a proposition for its permissibility. However, there are still a number of things one has to be aware of when it comes to demonstrations, and clearly, not all demonstrations are legal from the Islamic perspective.
Keywords: Basuki Tjahaja Purnama, demonstration, Muslim scholars, protest
Procedia PDF Downloads 131
3128 Neuroecological Approach for Anthropological Studies in Archaeology
Authors: Kalangi Rodrigo
Abstract:
The term neuroecology denotes the study of adaptive variation in cognition and the brain. The subject was born in the 1980s, when researchers began to apply the methods of comparative evolutionary biology to cognitive processes and the underlying neural mechanisms of cognition. In archaeology and anthropology, we observe behaviors such as social learning skills, innovative feeding and foraging, tool use, and social manipulation to determine the cognitive processes of ancient humankind. Brainstem size was used as a control variable, and phylogeny was controlled for using independent contrasts. Both disciplines need to be enriched with comparative literature and with neurological, experimental, and behavioral studies among tribal peoples as well as primate groups, which will lead the research to a fruitful end. Neuroecology examines the relations between ecological selection pressure and species or sex differences in cognition and the brain. The goal of neuroecology is to understand how natural selection acts on perception and its neural apparatus. Furthermore, neuroecology will eventually lead both principal disciplines to ethology, where human behaviors and social organization are studied from a biological perspective, whether ethnoarchaeological or prehistoric. Archaeology should adopt the general approach of neuroecology: phylogenetic comparative methods can be used in the field, alongside new findings on the cognitive mechanisms and brain structures involved in mating systems, social organization, communication, and foraging. The contribution of neuroecology to archaeology and anthropology is the information it provides on the selective pressures that have influenced the evolution of human cognition and brain structure. It will shed new light on the path of evolutionary studies, including behavioral ecology, primate archaeology, and cognitive archaeology.
Keywords: neuroecology, archaeology, brain evolution, cognitive archaeology
Procedia PDF Downloads 122
3127 Influence of Travel Time Reliability on Elderly Drivers' Crash Severity
Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig
Abstract:
Although older drivers (defined as those aged 65 and above) are less involved in speeding, alcohol use, and night driving, they are more vulnerable to severe crashes, with the major contributing factors being frailty and medical complications. Several studies have evaluated the factors contributing to crash severity. However, few studies have established the impact of travel time reliability (TTR) on road safety. In particular, the impact of TTR on senior adults, who face several challenges including hearing difficulties, declining processing skills, and cognitive problems in driving, is not well established. Therefore, this study focuses on determining the possible impacts of TTR on traffic safety, with a focus on elderly drivers. Historical travel speed data from freeway links in the study area were used to calculate travel time and the associated TTR metrics, that is, the planning time index, the buffer index, the standard deviation of travel time, and the probability of congestion. Four years of information on crashes occurring on these freeway links was acquired. A binary logit model, estimated using the Markov Chain Monte Carlo (MCMC) sampling technique, was used to evaluate the variables that could influence elderly crash severity. Preliminary results of the analysis suggest that TTR is statistically significant in affecting the severity of a crash involving an elderly driver. The results suggest that a one-unit increase in the probability of congestion reduces the likelihood of a severe elderly crash by nearly 22%. These findings will enhance the understanding of TTR and its impact on elderly crash severity.
Keywords: highway safety, travel time reliability, elderly drivers, traffic modeling
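The form of the model can be illustrated with a binary logit on simulated data (fitted here by maximum likelihood rather than MCMC, for brevity). The covariate names echo the TTR metrics above; all coefficient values and data are illustrative assumptions:

```python
# Sketch: binary logit relating crash severity to TTR covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),     # probability of congestion
    rng.uniform(1, 2, n),     # planning time index
    rng.normal(5, 2, n),      # std. dev. of travel time (min)
])
logits = -0.5 - 1.2 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)  # 1 = severe

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print("coefficients:", model.params)
# Odds ratios: multiplicative effect of a one-unit covariate increase
print("odds ratios:", np.exp(model.params))
```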
Procedia PDF Downloads 495
3126 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem
Authors: Bidzina Matsaberidze
Abstract:
It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when an expert and their knowledge are involved in the estimation of the MAGDM parameters. We consider an emergency decision-making model where expert assessments of humanitarian aid distribution centers (HADCs) are represented as q-rung orthopair fuzzy numbers, and the data structure is described within the body-of-evidence framework. Based on focal probability construction and the experts' evaluations, an objective function, a distribution centers' selection ranking index, is constructed. Our approach for solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers; some constraints are also taken into consideration while generating the matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings, that is, the allocations of the HADCs to the centers, which correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions
Procedia PDF Downloads 94
3125 Understanding Retail Benefit Trade-offs of Dynamic Expiration Dates (DED) Associated with Food Waste
Authors: Junzhang Wu, Yifeng Zou, Alessandro Manzardo, Antonio Scipioni
Abstract:
Dynamic expiration dates (DEDs) play an essential role in reducing food waste in the context of sustainable cold chains and food systems. However, the trade-offs in retail benefits when setting expiration dates on fresh food products are not known. This study aims to develop a multi-dimensional decision-making model that integrates DEDs with food waste, based on wireless sensor network technology. The model considers the initial quality of the fresh food and the rate of change of food quality with storage temperature as cross-independent variables, in order to identify the potential impacts on food waste in retail of applying a DED system. The results show that the retail benefits of a DED system depend on the specific scenario, despite its advanced technology. Under DEDs, the storage temperature of the retail shelf has the strongest influence on the food waste rate, followed by the rate of change of food quality and the initial quality of the food products. We found that a DED system can reduce food waste when food products are stored in lower-temperature areas. Besides, the potential for food savings over an extended replenishment cycle is significantly more advantageous than under fixed expiration dates (FEDs). On the other hand, the information-sharing approach of a DED system is relatively limited in improving the sustainability performance of food waste in retail and can even mislead consumers' choices. The research provides a comprehensive understanding to support techno-economic choices regarding DEDs associated with food waste in retail.
Keywords: dynamic expiration dates (DEDs), food waste, retail benefits, fixed expiration dates (FEDs)
Procedia PDF Downloads 116
3124 Ferrites of the MeFe2O4 System (Me – Zn, Cu, Cd) and Their Two Faces
Authors: B. S. Boyanov, A. B. Peltekov, K. I. Ivanov
Abstract:
The ferrites of Zn, Cd, and Cu, and mixed ferrites with NiO, MnO, MgO, CoO, ZnO, and BaO, combine the properties of dielectrics, semiconductors, ferromagnets, catalysts, etc., and are used in an impressive range of applications due to their remarkable properties. A specific disadvantage of ferrites is that they are undesirably formed in many processes connected with metal production, and they are very stable, poorly soluble compounds. The ZnFe2O4 formed in zinc production, binding about 15% of the total zinc, remains practically insoluble in dilute solutions of sulfuric acid. This decreases the degree of recovery of zinc and necessitates further processing of the zinc-containing cake. In this context, the ferrites ZnFe2O4, CdFe2O4, and CuFe2O4 were synthesized under laboratory conditions using ceramic technology. Their homogeneity and structure were proven by X-ray diffraction analysis and Mössbauer spectroscopy. The synthesized ferrites were subjected to strong acid, high-temperature leaching with solutions of H2SO4, HCl, and HNO3 (7, 10, and 15%). The results indicate that the highest degree of leaching of Zn, Cd, and Cu from the ferrites is achieved with HCl. The resulting values for the degree of leaching of the metals using H2SO4 are lower, but still remain significantly higher, under all experimental conditions, than the values obtained using HNO3. Five zinc sulfide concentrates were characterized for iron content by chemical analysis and a web-based information system, and for iron phases by Mössbauer spectroscopy. The charge was optimized using the criterion of the minimal amount of zinc ferrite produced when roasting the concentrates in a fluidized bed. The results obtained are interpreted in terms of hydrometallurgical zinc production and the maximum recovery of zinc, copper, and cadmium from the initial zinc sulfide concentrates after their roasting.
Keywords: hydrometallurgy, inorganic acids, solubility, zinc ferrite
Procedia PDF Downloads 437
3123 When Digital Innovation Augments Cultural Heritage: An Innovation from Tradition Story
Authors: Danilo Pesce, Emilio Paolucci, Mariolina Affatato
Abstract:
Looking to the future and the post-digital era, innovations commonly tend to dismiss the old and replace it with the new. The aim of this research is to study the role that digital innovation can play along the information chain within traditional sectors and the subsequent value creation opportunities that actors and stakeholders can exploit. Drawing on a wide body of literature on innovation and strategic management, and conducting a case study in the cultural heritage industry, namely Google Arts & Culture, this study shows that technology augments, complements, and amplifies the way people experience their cultural interests. Furthermore, the study shows a process of democratization of art, since museums can exploit new digital and virtual ways to distribute art globally. Moreover, new needs arose from the 2020 pandemic, which hit and forced the world into a state of cultural fasting and caused a radical transformation of the online-versus-onsite paradigm. Finally, the study highlights the capabilities that are emerging at different stages of the value chain, owing to the technological innovation available in the market. In essence, this research underlines the role of Google in allowing museums to reach users worldwide, thus unlocking new mechanisms of value creation in the cultural heritage industry. Likewise, this study points out how Google provides value to users by increasing the provision of artworks, improving audience engagement and the virtual experience, and providing new ways to access online content. The paper ends with a discussion of managerial and policy-making implications.
Keywords: big data, digital platforms, digital transformation, digitization, Google Arts and Culture, stakeholders' interests
Procedia PDF Downloads 159
3122 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime distributions as special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed 'bathtub'-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set, where it is compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provides a better fit than all of the competing distributions based on goodness-of-fit tests, log-likelihood, and information criterion values. Finally, a Bayesian analysis is carried out and the performance of Gibbs sampling for the data set is assessed.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
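To make the estimation step concrete, the sketch below fits the two-parameter log-logistic sub-model by maximum likelihood on simulated data; the generalized form adds an extra shape parameter whose exact parameterization is not given in the abstract, so the classical sub-model is used here:

```python
# Sketch: maximum-likelihood fit of the log-logistic distribution
# with scale alpha and shape beta, using simulated survival times.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Inverse-transform sampling: T = alpha * (U / (1 - U))**(1 / beta)
a_true, b_true = 2.0, 1.5
u = rng.uniform(size=500)
t = a_true * (u / (1 - u)) ** (1 / b_true)

def neg_loglik(params):
    a, b = np.exp(params)                  # enforce positivity
    z = (t / a) ** b
    # pdf: f(t) = (b/a) * (t/a)**(b-1) / (1 + (t/a)**b)**2
    return -np.sum(np.log(b / a) + (b - 1) * np.log(t / a) - 2 * np.log1p(z))

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("estimated alpha, beta:", np.exp(fit.x))
```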
Procedia PDF Downloads 203
3121 Flood Devastation Assessment through Mapping in Nigeria 2022 Using Geospatial Techniques
Authors: Hafiz Muhammad Tayyab Bhatti, Munazza Usmani
Abstract:
Floods, among nature's most destructive occurrences, do immense damage to communities and cause great economic losses. Nigeria, and specifically southern Nigeria, is known to be prone to flooding. Even though periodic flooding occurs frequently in Nigeria, the floods of 2022 were the worst since those of 2012. Flood vulnerability analysis and mapping are still lacking in this region due to the very limited historical hydrological measurements and surveys on the effects of floods, which makes it difficult to develop and implement efficient flood protection measures. Remote sensing and Geographic Information Systems (GIS) are useful approaches for detecting, determining, and estimating flood extent and its impacts. In this study, NOAA VIIRS data were used to extract the flood extent from flood water fraction data, afterwards fused with GIS data for zonal statistical analysis. The estimated flooded areas were validated using satellite imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS). The goal was to map and study flood extent, flood hazards, and their effects on the population, schools, and health facilities for each state of Nigeria. The resulting flood hazard maps clearly show areas with high risk levels and serve as an important reference for planning and implementing future flood mitigation and control strategies. Overall, the study demonstrated the viability of using the chosen GIS and remote sensing approaches to detect at-risk regions in order to secure local populations and enhance disaster response capabilities during natural disasters.
Keywords: flood hazards, remote sensing, damage assessment, GIS, geospatial analysis
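The zonal statistical analysis step can be illustrated with the rasterstats package, summarising a flood-water-fraction raster over state polygons. The file names and the chosen statistics are illustrative assumptions:

```python
# Sketch: per-state zonal statistics over a flood-water-fraction raster.
from rasterstats import zonal_stats

stats = zonal_stats(
    "nigeria_states.shp",          # state boundary polygons (assumed file)
    "viirs_flood_fraction.tif",    # flood water fraction raster (assumed file)
    stats=["mean", "max", "count"],
)
for state_stats in stats[:3]:
    print(state_stats)
```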
Procedia PDF Downloads 140
3120 Mapping of Geological Structures Using Aerial Photography
Authors: Ankit Sharma, Mudit Sachan, Anurag Prakash
Abstract:
Rapid growth in data acquisition technologies through drones has led to advances in, and interest in, collecting high-resolution images of geological fields. While drones are advantageous in capturing a high volume of data in short flights, a number of challenges have to be overcome for efficient analysis of this data, especially during data acquisition, image interpretation, and processing. We introduce a method that allows effective mapping of geological fields using photogrammetric data of surfaces, drainage areas, water bodies, etc., captured by airborne vehicles such as UAVs. We do not use satellite images because of their inadequate resolution, capture dates that may be up to a year old, availability problems, the difficulty of capturing the exact scene, and limitations such as night vision. The method combines advanced automated image interpretation technology with human data interaction to model structures. First, geological structures are detected from the primary photographic dataset, and the equivalent three-dimensional structures are then identified using a digital elevation model; dip and dip direction can be calculated from this information. The structural map is generated by following a specified methodology: choosing the appropriate camera and camera mounting system, designing the UAV (based on the area and application), handling challenges in airborne systems such as errors in image orientation, payload limits, mosaicking, georeferencing, and registering of different images, and finally applying the DEM. The paper shows the potential of using our method for accurate and efficient modeling of geological structures, captured particularly from remote, inaccessible, and hazardous sites.
Keywords: digital elevation model, mapping, photogrammetric data analysis, geological structures
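Dip and dip direction can be derived from DEM gradients, as noted above. A minimal sketch, assuming a north-up grid whose row index increases northward; the cell size is an illustrative assumption:

```python
# Sketch: dip (slope) and dip direction (downslope azimuth) from a DEM grid.
import numpy as np

def dip_and_direction(dem, cell_size=10.0):
    # Gradient components: axis 0 = northing (rows), axis 1 = easting (cols)
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    dip = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))       # 0..90 degrees
    # Downslope vector is (-dz_dx, -dz_dy); azimuth clockwise from north
    dip_direction = (np.degrees(np.arctan2(-dz_dx, -dz_dy)) + 360) % 360
    return dip, dip_direction

dem = np.array([[100.0, 101.0, 102.0],
                [101.0, 102.0, 103.0],
                [102.0, 103.0, 104.0]])   # rises to the north-east
dip, ddir = dip_and_direction(dem)
print(dip[1, 1], ddir[1, 1])              # ~8 degrees dipping toward ~225
```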
Procedia PDF Downloads 688
3119 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies
Authors: Jaballah Jamil
Abstract:
KLD (Peter Kinder, Steve Lydenberg, and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firms, which suggests that KLD ratings may not include private information. Moreover, KLD's failure to accurately predict the extra-financial rating of Enron casts doubt on the reliability of its ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates a number of equally weighted portfolio returns, excess stock returns, and the book-to-market ratio to different dimensions of KLD social responsibility ratings. We first find that over the last two decades KLD rating changes have influenced, significantly and negatively, the stock returns and book-to-market ratios of rated firms. This finding suggests that a rise in corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal annihilated the effect of KLD ratings on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
Keywords: KLD social rating agency, investors' perception, investment decision, financial performance
Procedia PDF Downloads 441