Search results for: explanatory logic

47 Diagnosis of Intermittent High Vibration Peaks in Industrial Gas Turbine Using Advanced Vibrations Analysis

Authors: Abubakar Rashid, Muhammad Saad, Faheem Ahmed

Abstract:

This paper provides a comprehensive study of the diagnosis of intermittent high vibrations on an industrial gas turbine using detailed vibration analysis, followed by its rectification. Engro Polymer & Chemicals Limited, a chlor-vinyl complex located in Pakistan, has a captive combined cycle power plant with two 28 MW gas turbines (made by Hitachi) and one 15 MW steam turbine. In 2018, the organization faced an issue of high vibrations on one of the gas turbines. These high vibration peaks appeared intermittently on both the compressor's drive end (DE) and the turbine's non-drive end (NDE) bearings. The amplitude of the high vibration peaks was 150-170% of the baseline value on the DE bearing and 200-300% on the NDE bearing. In one of these episodes, the gas turbine tripped on the "High Vibrations Trip" logic actuated at 155 µm. Limited instrumentation is available on the machine, which is monitored with a GE Bently Nevada 3300 system having two proximity probes installed at the turbine NDE, compressor DE, and generator DE and NDE bearings. The machine's transient ramp-up and steady-state data were collected using ADRE SXP and DSPI 408. Since only one keyphasor is installed on the turbine's high-speed shaft, a derived drive keyphasor was configured in ADRE to obtain the low-speed shaft rpm required for data analysis. By analyzing the Bode plots, shaft centerline plot, polar plot, and orbit plots, rubbing was evident on the turbine's NDE, along with increased bearing clearance of the turbine's NDE radial bearing. The subject bearing was then inspected, and heavy deposition of carbonized coke was found on the labyrinth seals of the bearing housing, with clear rubbing marks on the shaft and housing covering 20-25 degrees on the inner radius of the labyrinth seals. The collected coke sample was tested in a laboratory and found to be the residue of lube oil in the bearing housing. After detailed inspection and cleaning of the shaft journal area and bearing housing, a new radial bearing was installed. Before assembling the bearing housing, the bearing cooling and sealing air lines were also cleaned, as inadequate flow of cooling and sealing air can accelerate coke formation in the bearing housing. The machine was then taken back online, and data were collected again using ADRE SXP and DSPI 408 for health analysis. The vibrations were found to be in the acceptable zone per ISO 7919-3, while all other parameters were also within the vendor-defined range. Based on the lessons learned from this case, a revised operating and maintenance regime has also been proposed to enhance the machine's reliability.
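The monitoring logic described here amounts to comparing measured amplitudes against multiples of a baseline value and against the 155 µm trip setpoint quoted in the abstract. A minimal sketch of that comparison follows; the baseline value and status bands are illustrative assumptions, not plant data.

```python
# Minimal sketch (not the plant's actual protection logic): flag vibration readings
# against baseline multiples and the 155 µm trip level quoted in the abstract.

TRIP_LEVEL_UM = 155.0  # "High Vibrations Trip" setpoint from the abstract

def classify_reading(amplitude_um, baseline_um):
    """Return a simple status string for one bearing reading."""
    ratio = amplitude_um / baseline_um  # multiple of the baseline value
    if amplitude_um >= TRIP_LEVEL_UM:
        return "TRIP"
    if ratio >= 1.5:          # 150%+ of baseline: the intermittent peaks reported
        return "ALERT"
    return "NORMAL"

# Illustrative readings (µm): baseline 40, peaks at 1.5-3x baseline as reported
for amp in (42.0, 65.0, 120.0, 160.0):
    print(amp, classify_reading(amp, baseline_um=40.0))
```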

Keywords: ADRE, bearing, gas turbine, GE Bently Nevada, Hitachi, vibration

Procedia PDF Downloads 145
46 Evaluating Urban City Indices: A Study for Investigating Functional Domains, Indicators and Integration Methods

Authors: Fatih Gundogan, Fatih Kafali, Abdullah Karadag, Alper Baloglu, Ersoy Pehlivan, Mustafa Eruyar, Osman Bayram, Orhan Karademiroglu, Wasim Shoman

Abstract:

Nowadays, many cities around the world are investing effort and resources to facilitate their citizens' lives and make cities more livable and sustainable by implementing the newly emerged phenomenon of the smart city. For this purpose, research institutions prepare and publish smart city indices or benchmarking reports aiming to measure a city's current 'smartness' status. Several functional domains and various indicators, along with different selection and calculation methods, are found within such indices and reports. The selection criteria vary between institutions, resulting in inconsistency in ranking and evaluation. This research aims to evaluate the impact of selecting such functional domains, indicators, and calculation methods, which may cause changes in the rank. For that, six functional domains, i.e., Environment, Mobility, Economy, People, Living, and Governance, were selected, covering 19 focus areas and 41 sub-focus (variable) areas. 60 out of 191 indicators were also selected according to several criteria. These were identified as a result of an extensive literature review of 13 well-known global indices and reports and the ISO 37120 standard for sustainable development of communities. The values of the identified indicators were obtained from reliable sources for ten cities. The values of each indicator for the selected cities were normalized and standardized to objectively investigate the impact of the chosen indicators. Moreover, the effect of choosing an integration method to represent the values of indicators for each city was investigated by comparing the results of two of the most used methods, i.e., geometric aggregation and fuzzy logic. The essence of these methods is assigning each indicator a weight reflecting its relative significance. However, both methods resulted in different weights for the same indicator. As a result of this study, the alternation in city ranking resulting from each method was investigated and discussed separately. Generally, each method produced a different ranking for the selected cities. However, it was observed that within certain functional areas the rank remained unchanged under both integration methods. Based on the results of the study, it is recommended to utilize a common platform and method to objectively evaluate cities around the world. The common method should provide policymakers with proper tools to evaluate their decisions and investments relative to other cities. Moreover, at least 481 different indicators were found across smart city indices, which is an immense number of indicators to consider, especially for a single smart city index. Further work should be devoted to finding mutual indicators representing the index purpose globally and objectively.
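To make the normalization and aggregation steps concrete, the sketch below applies min-max normalization followed by a weighted geometric aggregation to a toy set of indicators; the indicator names, values, and weights are illustrative assumptions, not the study's data.

```python
# Minimal sketch of the two integration steps compared in the abstract: min-max
# normalisation of indicator values, then a weighted geometric aggregation.
import math

cities = {
    "CityA": {"pm25": 12.0, "transit_share": 0.45, "gdp_per_capita": 38000},
    "CityB": {"pm25": 25.0, "transit_share": 0.30, "gdp_per_capita": 52000},
}
lower_is_better = {"pm25"}  # indicators where a lower raw value is better
weights = {"pm25": 0.4, "transit_share": 0.3, "gdp_per_capita": 0.3}

def min_max_normalise(values, invert=False):
    """Scale a {city: value} column to [0, 1]; invert when lower is better."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    norm = {c: (v - lo) / span for c, v in values.items()}
    return {c: 1.0 - s for c, s in norm.items()} if invert else norm

# Normalise each indicator across cities
normalised = {c: {} for c in cities}
for ind in weights:
    column = {c: cities[c][ind] for c in cities}
    scored = min_max_normalise(column, invert=ind in lower_is_better)
    for c, s in scored.items():
        normalised[c][ind] = s

def geometric_index(scores, weights, eps=1e-6):
    """Weighted geometric aggregation; eps avoids log(0) for zero scores."""
    return math.exp(sum(w * math.log(scores[i] + eps) for i, w in weights.items()))

for city in cities:
    print(city, round(geometric_index(normalised[city], weights), 3))
```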

Keywords: functional domain, urban city index, indicator, smart city

Procedia PDF Downloads 147
45 Quantum Mechanics as a Limiting Case of Relativistic Mechanics

Authors: Ahmad Almajid

Abstract:

The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics offers only two paths: Einstein's path, which is mainly based on particle mechanics, and the path of Paul Dirac and others, which is based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in the initial assumptions and in the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: (1) in quantum mechanics, despite its success, the measurement problem and the interpretation of the wave function remain obscure; (2) in special relativity, despite the success of the equivalence of rest mass and energy, the energy becoming infinite at the speed of light is contrary to logic, because the speed of light is not infinite and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light. In order to solve these problems, one must understand how to move from relativistic mechanics to quantum mechanics, or rather how to unify them in a way different from Dirac's method, in order to go along with God or Nature since, as Einstein said, "God doesn't play dice." From de Broglie's hypothesis of wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, new equations in quantum mechanics were derived. In this paper, the two problems in modern physics mentioned above are addressed; it can be said that this new approach to quantum mechanics may enable its unification with general relativity quite simply. If experiments prove the validity of the results of this research, it may become possible in the future to transport matter at speeds close to the speed of light. This research yielded three main results: (1) the Lorentz quantum factor; (2) Planck energy as a limiting case of Einstein energy; (3) real quantum mechanics, with new equations that match and go beyond Dirac's equations, reached in a completely different way from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax; while Bohr suggested that unobserved particles are in a probabilistic state, Einstein made his famous claim that "God does not play dice." Thus, Einstein was right, especially when he did not accept the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of this research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the equations above. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and unclear, so he resorted to the probabilistic interpretation.
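For reference, the standard relations the abstract starts from, de Broglie's wave-particle duality and the Lorentz factor of special relativity, are written below; the author's "quantum Lorentz factor" is not specified in the abstract and is not reproduced here.

\[
\lambda = \frac{h}{p}, \qquad E = \hbar\omega, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad E = \gamma\, m_{0} c^{2}
\]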

Keywords: lorentz quantum factor, new, planck’s energy as a limiting case of einstein’s energy, real quantum mechanics, new equations for quantum mechanics

Procedia PDF Downloads 76
44 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to identify the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to ensure that a sufficient number of technicians are available. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software agent technologies, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
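As a sketch of the traffic-light logic described above (written in Python for brevity, although the authors plan a JavaScript implementation), one simple approach is to map the specimen counts at each stage to a colour; the thresholds below are illustrative assumptions, not the paper's.

```python
# Minimal sketch of the traffic-light colour coding for one workflow stage.
# Thresholds (fractions of late specimens) are illustrative assumptions.

def stage_status(on_time, potentially_late, late_or_error):
    """Map specimen counts at one stage to a traffic-light colour."""
    total = on_time + potentially_late + late_or_error
    if total == 0:
        return "green"
    late_fraction = (potentially_late + late_or_error) / total
    if late_or_error > 0 or late_fraction > 0.25:
        return "red"      # critical flow
    if late_fraction > 0.10:
        return "orange"   # slow flow
    return "green"        # normal flow

stages = {"reception": (120, 4, 0), "testing": (80, 15, 2), "validation": (60, 3, 0)}
for name, counts in stages.items():
    print(name, stage_status(*counts))
```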

Keywords: laboratory-process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 285
43 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with the increase of the data, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing the redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for fetching as well as processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: i) it generates the superreduct instead of the reduct, and ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called the 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and in software were compared. The results show an increase in data processing speed.
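A minimal software sketch of the two-stage idea follows: stage one derives the core from the singleton entries of the discernibility matrix, and stage two greedily adds the most frequent remaining attributes until every entry is covered, yielding a superreduct. The toy decision table is illustrative, not the paper's data, and the code mirrors the algorithm's structure rather than the FPGA blocks.

```python
# Two-stage sketch: core from singleton discernibility entries, then greedy
# enrichment by attribute frequency until every entry is covered (superreduct).

# Each row: (condition attribute values, decision) -- illustrative toy table
table = [
    ((0, 1, 0), 0),
    ((1, 1, 0), 1),
    ((0, 0, 1), 0),
    ((1, 0, 1), 1),
]
n_attrs = 3

def discernibility_entries(table):
    """Sets of attributes discerning each pair of objects with different decisions."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            (ci, di), (cj, dj) = table[i], table[j]
            if di != dj:
                diff = {a for a in range(n_attrs) if ci[a] != cj[a]}
                if diff:
                    entries.append(diff)
    return entries

entries = discernibility_entries(table)

# Stage 1: core = attributes that appear as singleton entries
core = {next(iter(e)) for e in entries if len(e) == 1}

# Stage 2: greedily enrich the core until all entries are covered
selected = set(core)
uncovered = [e for e in entries if not (e & selected)]
while uncovered:
    counts = {}
    for e in uncovered:
        for a in e:
            counts[a] = counts.get(a, 0) + 1
    best = max(counts, key=counts.get)   # most frequent remaining attribute
    selected.add(best)
    uncovered = [e for e in uncovered if not (e & selected)]

print("core:", core, "superreduct:", selected)
```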

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 219
42 Nature of Cities: Ontological Dimension of the Urban

Authors: Ana Cristina García-Luna Romero

Abstract:

This document seeks to reflect on the urban project from its conceptual identity root. In the first instance, a proposal is made on how the city project is sustained from the conceptual root, from the logos: it opens a way to assimilate the imagination; what we imagine becomes a reality. In this way, firstly, the need to use language as a vehicle for transmitting the stories that sustain us as humanity can be seen as an important social factor that enables social behavior. Secondly, the need to attend to written language as a mechanism of power, as a means to consolidate a dominant ideology or a political position, is raised; as it served to carry out the modernization project, the differences between the real city and the lettered city are therefore addressed. Thus, the consolidated urban-architectural project is based on logos, the project, and planning. Considering the importance of materiality and its relation to subjective well-being contextualized from a socio-urban approach, we ask how we can look at something that is doubtful. From a philosophical perspective, truth is considered to be nothing more than a matter of correspondence between the observer and the observed. To understand beyond the relativity of the gaze, it is necessary to expose different perspectives, since understanding depends on what is observed and how it is critically analyzed. Therefore, the analysis of materiality, as a political field, draws on a proposal based in this research on the principles in transgenesis: the principles of communication, representativeness, security, health, malleability, availability of potentiality or development, conservation, sustainability, economy, harmony, stability, accessibility, justice, legibility, significance, consistency, joint responsibility, connectivity, and beauty, among others. The (urban) human being acts because he wants to live in a certain way: in a community, in a fair way, with opportunity for development, with the possibility of managing the environment according to his needs, etc. In order to comply with this principle, it is necessary to design strategies based on the principles in transgenesis, which must be named, defined, understood, and socialized by urban beings, by companies, and among themselves. In this way, the technical status of the city in the neoliberal present creates extraordinary conditions for reflecting on an almost emergency scenario created by the impact of cities that, far from being limited to resilient proposals, must aim at reflection on the urban process that the present social model has generated. Therefore, can we rethink the paradigm of the perception of quality of life in the current neoliberal model in the production of the character of public space related to the practices of being urban? What this document attempts is to build a framework to study under what logic the practices of the social system that give meaning to public space are developed, what the implications of the phenomena of the inscription of action and materialization (and its results on political action between the social and the technical system) are, and finally, how we can improve the quality of life of individuals from the urban space.

Keywords: cities, nature, society, urban quality of life

Procedia PDF Downloads 124
41 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application

Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu

Abstract:

Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their 2D ultrathin atomic layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (PFETs and NFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into its inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have also been found to achieve the desired property modifications of MoS₂. In this work, we demonstrate p-type multilayer MoS₂ enabled by selective-area doping using CHF₃ plasma treatment. Compared with single-layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower bandgap and multiple conduction channels. Moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate is carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process is conducted to synthesize the MoS₂ films. In the first step, the temperature is set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr to transform (NH₄)₂MoS₄ into MoS₃. To further reduce MoS₃ into MoS₂, a second annealing step is performed at 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr. The grown MoS₂ films are subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC NLD-570). The radio-frequency power of the dry-etching system is set to 100 W and the pressure to 7.5 mTorr, and the final thickness of the treated samples is obtained by etching for 30 s. Back-gated MoS₂ PFETs exhibited an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms, and the generated photocurrent could be modulated by the back-gate voltage. This work suggests the potential application of the mild-plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.

Keywords: photodetection, p-type doping, multilayers, MoS₂

Procedia PDF Downloads 103
40 The Importance of SEEQ in Teaching Evaluation of Undergraduate Engineering Education in India

Authors: Aabha Chaubey, Bani Bhattacharya

Abstract:

Evaluation of the quality of teaching in engineering education in India needs to be conducted on a continuous basis to achieve the best teaching quality in technical education. Quality teaching is an influential factor in technical education and has a large impact on students' learning outcomes. The present study is not exclusively theory-driven; rather, it draws on various specific concepts and constructs in the domain of technical education, including teaching and learning in higher education, teacher effectiveness, and teacher evaluation and performance management in higher education. The Student Evaluation of Educational Quality (SEEQ) questionnaire was proposed as one of the instruments for evaluating the quality of teaching in engineering education. SEEQ is a popular, standard instrument widely utilized all over the world, with established validity and reliability in the educational world. The present study was designed to evaluate teaching quality through SEEQ in the context of technical education in India, including its validity and reliability based on the collected data. The multidimensionality of SEEQ, reflecting dimensions present in every teaching and learning process, makes it suitable for collecting students' feedback regarding the quality of instruction and the instructor. SEEQ comprises nine constructs, i.e., learning value, teacher enthusiasm, organization, group interaction, individual rapport, breadth of coverage, assessment, assignments, and an overall rating of the particular course and instructor, with a total of 33 items. In the present study, a sample of 350 first-year undergraduate students from the Indian Institute of Technology Kharagpur (IIT Kharagpur, India), belonging to four different courses across different streams of engineering, was included in the evaluation of SEEQ. The validity and reliability of SEEQ were examined based on the collected data using Confirmatory Factor Analysis (CFA) performed in AMOS (Analysis of Moment Structures), with Cronbach's alpha computed in SPSS to examine internal consistency. The effectiveness of SEEQ in the CFA was evaluated on the basis of fit indices such as CMIN/df, CFI, GFI, AGFI, and RMSEA. The major findings of this study showed fit indices of ChiSq = 993.664, df = 390, ChiSq/df = 2.548, GFI = 0.782, AGFI = 0.736, CFI = 0.848, RMSEA = 0.062, TLI = 0.945, RMR = 0.029, and PCLOSE = 0.006. The analysis of the fit indices indicated positive construct validity and stability, while a high reliability was also observed, indicating internal consistency. Thus, the study suggests the effectiveness of SEEQ as a quality evaluation instrument for the teaching-learning process in engineering education in India. It is therefore expected that the continuation of this research in engineering education may contribute to the betterment of the quality of technical education in India. It is also expected that this study will provide an empirical and theoretical logic for locating a construct or factor related to teaching that has the greatest impact on the teaching and learning process in a particular course or stream in engineering education.
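As a small illustration of the internal-consistency check mentioned above, the sketch below computes Cronbach's alpha from an items-by-respondents matrix; the study itself used SPSS and AMOS, and the response data here are illustrative.

```python
# Minimal sketch of Cronbach's alpha for a set of Likert-scale items.
# The response matrix is illustrative, not the study's data.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = items."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                       # number of items
    item_vars = item_scores.var(axis=0, ddof=1)    # sample variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of respondent totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses for 6 respondents and 4 items
responses = [
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
]
print(round(cronbach_alpha(responses), 3))
```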

Keywords: confirmatory factor analysis, engineering education, SEEQ, teaching and learning process

Procedia PDF Downloads 421
39 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress

Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang

Abstract:

The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packages and interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have a great impact on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution that can improve the stability of circuits and allow GaN-based devices to achieve more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as gate drivers. E-mode p-HFETs with a recessed gate have attracted increasing interest because of their low leakage current and large gate swing. However, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, the inverter is always operated with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress has been applied to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with a rise/fall time of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out, and the stress was briefly interrupted to measure the linear and saturation IDS-VGS characteristics. As VGS switches from -5 V to 0 V with VDS = 0 V, devices are under a negative-bias-instability (NBI) condition. Holes are trapped at the interface of the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is severe during the first 10 s and then changes only slightly with further stress time. However, a different phenomenon is observed when VDS is reduced to -5 V: VTH shifts negatively during stress, and the variation in VTH increases with time, which differs from the behavior at VDS = 0 V. Two mechanisms exist in this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so the trapping behavior of holes in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier stress (HCS) degradation with time. The poor-quality interface between the oxide layer and the GaN channel layer in the gate region makes a major contribution to the high density of interface traps, which greatly influences the reliability of the devices. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.
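The stress stimulus is fully specified above; as an illustration, the sketch below reconstructs that gate waveform (100 kHz, 100 ns edges, 50% duty ratio, switching between -5 V and 0 V). It only reproduces the stimulus, not the device measurements.

```python
# Minimal sketch of the gate-stress waveform described in the abstract.
import numpy as np

PERIOD = 10e-6        # 100 kHz -> 10 µs period
EDGE = 100e-9         # rise/fall time
V_LOW, V_HIGH = -5.0, 0.0

def vgs_at(t):
    """Piecewise-linear VGS value at time t (seconds) within the periodic stress."""
    tau = t % PERIOD
    half = PERIOD / 2
    if tau < EDGE:                       # rising edge: low -> high
        return V_LOW + (V_HIGH - V_LOW) * tau / EDGE
    if tau < half:                       # high plateau (50% duty ratio)
        return V_HIGH
    if tau < half + EDGE:                # falling edge: high -> low
        return V_HIGH + (V_LOW - V_HIGH) * (tau - half) / EDGE
    return V_LOW                         # low plateau

t = np.linspace(0, 2 * PERIOD, 2001)     # two periods of the waveform
vgs = np.array([vgs_at(ti) for ti in t])
print(vgs.min(), vgs.max())              # -5.0 0.0
```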

Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology

Procedia PDF Downloads 89
38 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, since these processes can be realized through algorithms for counting, positioning, and recognition of objects measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators for moving pieces from one location to another in their production. These devices must be programmed in advance for good performance and must have a programmed logic routine. Nowadays, production is the main target of every industry, together with quality and the fast execution of the different stages and processes in the production chain of any product or service offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that organize the figures into cubicles, which also differ from one another by color. This project is based on artificial vision; therefore, the methodology needed to develop it must be strict and is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator linked with the OpenCV libraries; together, these tools are used to build the program that identifies colors and shapes directly from the camera on the computer. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or by a specialized camera. 1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes, it is necessary to segment the images: the first step is converting the image from RGB to grayscale in order to work with the dark tones of the image; then the image is binarized, which means the figure appears in white on a black background. Finally, the contours of the figure in the image are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: the OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics that can then be processed. With the program developed in this project, any type of assembly line can be optimized, because images of the environment can be obtained and the process becomes more accurate.
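A minimal sketch of the pipeline in steps 1.3-1.4, written with OpenCV's Python bindings for brevity (the project itself uses Qt Creator and OpenCV in C++): grayscale conversion, binarization, contour extraction, vertex counting for the shape, and a mean-colour check inside each contour. The thresholds and the image path are illustrative.

```python
# Minimal sketch: shape and colour classification with OpenCV (Python bindings,
# OpenCV 4.x return signatures). Thresholds and the image path are illustrative.
import cv2
import numpy as np

def classify_figures(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Binarise: figure in white over a black background, as described in step 1.4
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    results = []
    for cnt in contours:
        if cv2.contourArea(cnt) < 200:           # skip small noise blobs
            continue
        approx = cv2.approxPolyDP(cnt, 0.04 * cv2.arcLength(cnt, True), True)
        shape = {3: "triangle", 4: "square"}.get(len(approx), "circle")

        mask = np.zeros(gray.shape, dtype=np.uint8)
        cv2.drawContours(mask, [cnt], -1, 255, -1)
        b, g, r = cv2.mean(img, mask=mask)[:3]   # mean BGR inside the figure
        colour = ("blue", "green", "red")[int(np.argmax([b, g, r]))]
        results.append((shape, colour))
    return results

# print(classify_figures("figures.jpg"))  # e.g. [('triangle', 'red'), ...]
```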

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 377
37 Journey to Inclusive School: Description of Crucial Sensitive Concepts in the Context of Situational Analysis

Authors: Denisa Denglerova, Radim Sip

Abstract:

Academic sources as well as international agreements and national documents define inclusion in terms of several criteria: equal opportunities, fulfilling individual needs, development of human resources, and community participation. In order for these criteria to be met, the community must be cohesive. Community cohesion, which is a relatively new concept, is not determined by homogeneity but by the acceptance of diversity among the community members and the utilisation of its positive potential. This brings us to a central category of inclusion: appreciating diversity and using it to positive effect. School diversity is a real phenomenon which schools need to tackle more and more often, as indicated by the number of publications focused on diversity in schools. These sources present recent analyses of the use of identity as a tool for coping with the demands of a diversified society. The aim of this study is to identify and describe in detail the processes taking place in selected schools which contribute to their pro-inclusive character. The research is designed around a multiple case study of three pro-inclusive schools. Paradigmatically speaking, the research is rooted in situational epistemology. This is also related to the overall framework of interpretation, for which we use innovative methods of situational analysis. In terms of specific research outcomes, this manifests itself in replacing the idea of an “objective theory” with the idea of a “detailed cartography of a social world”. The cartographic approach directs both the logic of data collection and the choice of methods for their analysis and interpretation. The research results include the detection of the following sensitive concepts: Key persons. All participants can contribute to promoting an inclusion-friendly environment; however, some do so with greater motivation than others. These could include school management, teachers with a strong vision of equality, or school counsellors. They have a significant effect on the transformation of the school and are themselves deeply convinced that inclusion is necessary. Accordingly, they select suitable co-workers; they also inspire some of their other co-workers to make changes, leading by example. Employees with strongly opposing views gradually leave the school, and new members of staff are introduced to the concept of inclusion and openness from the beginning. Manifestations of school openness in working with diversity on all important levels. By this we mean working positively with diversity both in the relationships between “traditional” school participants (directors, teachers, pupils) and in school-parent relationships, or relationships between schools and the broader community, in terms of teaching methods as well as the ways in which the school culture affects the school environment. Other important detected concepts significantly helping to form a pro-inclusive environment in the school are individual and parallel classes; the freedom and responsibility of both pupils and teachers, manifested on the didactic level by tendencies towards an open curriculum; and ways of asserting discipline in the school environment.

Keywords: inclusion, diversity, education, sensitive concept, situational analysis

Procedia PDF Downloads 196
36 A Case Study on Problems Originated from Critical Path Method Application in a Governmental Construction Project

Authors: Mohammad Lemar Zalmai, Osman Hurol Turkakin, Cemil Akcay, Ekrem Manisali

Abstract:

In public construction projects, determining the contract period in the award phase is one of the most important factors. The contract period establishes the baseline for creating the cash flow curve and progress payment planning in the post-award phase. An overestimated project duration causes losses for both the owner and the contractor. Therefore, it is essential to base construction project duration on reliable forecasting. In Turkey, schedules are usually built using bar chart (Gantt) schedules, especially by governmental construction agencies, and their usage is largely limited to bidding purposes. Although the bar chart schedule is useful in some cases, it lacks logical connections between activities, and it is harder to identify the activities that have a greater effect than others on the project's total duration, especially in large complex projects. In this study, a construction schedule is prepared with the Critical Path Method (CPM), which addresses the above-mentioned shortcomings. CPM is a simple and effective method that displays the project duration and critical paths, showing the results of forward and backward calculations while considering the logical relationships between activities; it is a powerful tool for planning and managing all kinds of construction projects and a very convenient method for the construction industry. CPM provides a much more useful and precise approach than the traditional bar chart diagrams that form the basis of construction planning and control. CPM has two main applications in the construction field. The first is obtaining the project duration from the as-planned schedule, which includes as-planned activity durations and the relationships between subsequent activities. The second applies during project execution: each activity is tracked and its duration recorded in order to obtain the as-built schedule, which serves as a black box of the project. The latter is more useful for delay analysis and conflict resolution. These features of CPM are popular around the world; however, CPM has not yet been extensively used in Turkey. In this study, a real construction project is investigated as a case study, and CPM-based scheduling is used to establish both the as-planned and as-built schedules. Problems that emerged during the construction phase are identified and categorized, and solutions are suggested. Two scenarios were considered. In the first scenario, CPM was used to track and manage project progress based on real-time data. In the second scenario, project progress was assumed to be tracked using only a Gantt chart. The S-curves of the two scenarios are plotted and interpreted. Comparing the results, possible faults of the latter scenario are highlighted and solutions are suggested. The importance of CPM implementation is emphasized, and it is proposed that the preparation of CPM-based construction schedules be made mandatory for public construction project contracts.
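To make the forward and backward calculations concrete, the sketch below computes earliest/latest start and finish times, total float, and the critical path for a small illustrative activity network (not the case-study project).

```python
# Minimal CPM sketch: forward pass (ES/EF), backward pass (LS/LF), critical path.
# The activity network below is illustrative, not the case-study project.

# activity: (duration, list of predecessors); predecessors are listed before successors
activities = {
    "A": (3, []), "B": (5, ["A"]), "C": (2, ["A"]),
    "D": (4, ["B", "C"]), "E": (6, ["C"]), "F": (1, ["D", "E"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF)
es, ef = {}, {}
for act, (dur, preds) in activities.items():
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

project_duration = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS)
lf, ls = {}, {}
for act in reversed(list(activities)):
    dur, _ = activities[act]
    successors = [s for s, (_, preds) in activities.items() if act in preds]
    lf[act] = min((ls[s] for s in successors), default=project_duration)
    ls[act] = lf[act] - dur

critical_path = [a for a in activities if ls[a] - es[a] == 0]   # zero total float
print("duration:", project_duration, "critical path:", critical_path)
```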

Keywords: as-built, case-study, critical path method, Turkish government sector projects

Procedia PDF Downloads 118
35 Women Soldiers in the Israel Defence Forces: Changing Trends of Gender Equality and Military Service

Authors: Dipanwita Chakravortty

Abstract:

Officially, the Israel Defence Forces (IDF) follows a policy of 'gender equality and partnership', which institutionalises norms regarding equal duty towards the nation. It reiterates equality in unbiased opportunities and resources for Jewish men and women to participate in the military as equal citizens. At the same time, as a military institution, the IDF supports gender biases and crystallises them through various interactions among women soldiers, male soldiers, and the institution. These biases are expressed through various stages and processes in the military institution, such as biased training, discriminatory postings of women soldiers, lack of combat training, and acceptance of sexual harassment. The gender-military debates in Israel are largely devoted to female emancipation and to bringing militarised women's experiences into mainstream debates. This critical scholarship, largely female-authored and located in Israel, has been consistently critical of the structural policies of the IDF that have led to continued discriminatory practices against women soldiers. This has compelled the military to increase its intake of women soldiers and make its structural policies more gender-friendly. Nonetheless, the continued persistence of gender discrimination in the IDF has led scholars to look more deeply into the failure of these policies to bring about change. This article pursues two research objectives: firstly, to analyse the existing gender relations in the IDF which shape the practices and prejudices in the institution, and secondly, to look beyond structural discrimination as part of the gender debates in the IDF. The proposed research uses the structural-functional model as a framework to study the discourses and norms emerging out of the interaction between gender and the military as two distinct social institutions. Changing gender-military debates are discussed in detail to understand the in-depth relationship between Israeli society and the military arising from the conscription model. The main arguments of the paper deal with the functional aspect of military service rather than the structural component of the institution. Traditional stereotypes of military institutions, along with cultural notions of the female body, restrict the complete integration of women soldiers despite favourable legislation and policies. These result in functional discrimination such as uneven promotion, sexual violence, the restructuring of gender identities, and the creation of militarised bodies. The existing prejudices encourage younger women recruits to choose from within the accepted pink-collar jobs in the military rather than 'breaking the barriers'. Some women recruits do try to explore new avenues and make a mark for themselves; most face stiff discrimination but accept it as part of military life. The cyclical logic, in which structural norms lead to functional discrimination, which then reinforces traditional stereotypes and hampers change in institutional norms, means that the IDF continues to strive towards gender equality within the institution without practical realisation.

Keywords: women soldiers, Israel Defence Forces, gender-military debates, security studies

Procedia PDF Downloads 170
34 Food Sovereignty as Local Resistance to Unequal Access to Food and Natural Resources in Latin America: A Gender Perspective

Authors: Ana Alvarenga De Castro

Abstract:

Food sovereignty has been put forward by the international peasants' movement, La Via Campesina, as a precondition for food security, referring to the right of each nation to maintain its own food supply while respecting cultural and sustainable practices and productive diversity. The contemporary political conceptualization goes further, saying that this term is about achieving the right of farmers to control food systems according to local specificities, and about equality in access to natural resources and quality food. The current feminization of agroecosystems and of food insecurity, identified by researchers and recognized by international agencies like the UN and FAO, has strengthened the feminist discourse within the food sovereignty movement, considering the historical inequalities that place women farmers in subaltern positions inside families and rural communities. The current tendency in many rural areas, in which more women take responsibility for food production while still facing a lack of access to natural resources, takes on particular aspects in Latin America due to the global economic logic that places the Global South in the position of raw material supplier for the industrialized North, combined with regional characteristics. In this context, Latin American countries play the role of commodity exporters in the international division of labor, exporting items such as grains, soybean paste, and ores, at the expense of local food chains which provide a domestic quality food supply under more sustainable practices. The connections between gender inequalities and global territorial inequalities related to the access and control of food and natural resources are pointed out by feminist political ecology (FPE) authors and are linked in this article to the potentialities and limitations of women farmers in reproducing diversified agroecosystems in tropical environments. The work highlights the importance of local practices carried out by women farmers, which are crucial for maintaining sustainable agricultural systems, and their results for seed, soil, biodiversity, and water conservation. This work presents an analysis of documents, releases, videos, and other publicized experiences produced by peasants' organizations in Latin America which evidence the different technical and political answers to food sovereignty from peasants' groups that are attributed to women farmers. These are combined with articles presenting empirical analyses of women farmers' practices in Latin America. This combination led to a discussion of the benefits of peasants' conceptions of food systems and their connections with local realities, and of the gender issues linked to the conceptualization of food sovereignty. The conclusion is that reality in the field cannot reach the ideal of food sovereignty homogeneously, and that sustainable agricultural practices depend on the achievement of rights and the eradication of social inequalities.

Keywords: food sovereignty, gender, diversified agricultural systems, access to natural resources

Procedia PDF Downloads 246
33 Data Science/Artificial Intelligence: A Possible Panacea for Refugee Crisis

Authors: Avi Shrivastava

Abstract:

In 2021, two heart-wrenching scenes, shown live on television screens across countries, painted a grim picture of refugees. One was of people clinging to an airplane's wings in their desperate attempt to flee war-torn Afghanistan; they ultimately fell to their deaths. The other was of U.S. government authorities separating children from their parents or guardians to deter migrants and refugees from coming to the U.S. These events show the desperation refugees feel when they are trying to leave their homes in disaster zones. Data, however, paints a grave picture of the current refugee situation, and it indicates that a bleak future lies ahead for refugees across the globe. Data and information are the two threads that intertwine to weave the shimmery fabric of modern society. They are often used interchangeably, but they differ considerably: information analysis reveals rationale and logic, while data analysis reveals patterns. Patterns revealed by data can enable us to create the tools needed to combat the huge problems on our hands. Data analysis paints a clear picture so that the decision-making process becomes simpler. Geopolitical and economic data can be used to predict future refugee hotspots, and accurately predicting the next refugee hotspots will allow governments and relief agencies to prepare better for future refugee crises. The refugee crisis does not have binary answers. Given the emotionally wrenching nature of the ground realities, experts often shy away from realistically stating things as they are, and this hesitancy can cost lives. When decisions are based solely on data, emotions can be removed from the decision-making process. Data also presents irrefutable evidence and tells us whether there is a solution or not; it can respond to a non-binary crisis with a binary answer. Because of all that, it becomes easier to tackle the problem. Data science and A.I. can predict future refugee crises. With the recent explosion of data due to the rise of social media platforms, data, and the insights drawn from it, have helped solve many social and political problems. Data science can also help solve many issues refugees face while staying in refugee camps or in their adopted countries. This paper looks into the various ways data science can help solve refugee problems. A.I.-based chatbots can help refugees seek legal help to find asylum in the country in which they want to settle, and can help them find a marketplace connecting them with people willing to help. Data science and technology can also help address many of refugees' other problems, including food, shelter, employment, security, and assimilation. The refugee problem is one of the most challenging for social and political reasons. Data science and machine learning can help prevent refugee crises and solve or alleviate some of the problems that refugees face on their journey to a better life. With the explosion of data in the last decade, data science has made it possible to solve many geopolitical and social issues.

Keywords: refugee crisis, artificial intelligence, data science, refugee camps, Afghanistan, Ukraine

Procedia PDF Downloads 71
32 The Location of Park and Ride Facilities Using the Fuzzy Inference Model

Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas

Abstract:

Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride (P&R) system is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated in a general and descriptive manner. Research outsourced to specialists is expensive and time-consuming, and most of the focus is on the examination of a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results; the resulting facilities are then not used as expected. Location methods as a research topic are also widely addressed in the scientific literature. However, the mathematical models built often do not treat the problem comprehensively, e.g., assuming that the city is linear and developed along one major transport corridor. The paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even less experienced users, e.g., urban planners and officials, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can also be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of P&R facility locations in cities planning to introduce P&R. The analysis of existing facilities is also shown in the paper, and these are confronted with the opinions of the system's users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model which was built and described in more detail in an earlier paper by the authors. The results of the analyses are compared with P&R facility location studies commissioned by the city and with opinions of existing facilities' users expressed on social networking sites. The research on existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be effective and does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to show alternative locations for P&R facilities. The studies performed show that the method has been confirmed. The method can be applied in urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are not worse than the results of analyses by hired experts. The advantage of this method is its ease of use, which simplifies professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that arduous analysis by a team of people can be replaced by the model's calculations. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
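A minimal Mamdani-style sketch of this kind of fuzzy inference is given below; the inputs (distance from the city centre and a public-transport access score), the membership functions, and the rules are illustrative assumptions rather than the authors' actual model.

```python
# Minimal Mamdani-style fuzzy inference sketch for rating a candidate P&R site.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def evaluate_site(distance_km, transit_score):
    # Fuzzify the two inputs
    dist_near = tri(distance_km, -1.0, 0.0, 6.0)
    dist_far = tri(distance_km, 4.0, 10.0, 16.0)
    acc_poor = tri(transit_score, -0.5, 0.0, 0.6)
    acc_good = tri(transit_score, 0.4, 1.0, 1.5)

    # Output universe: suitability in [0, 1]
    y = np.linspace(0.0, 1.0, 101)
    suit_low = tri(y, -0.5, 0.0, 0.6)
    suit_high = tri(y, 0.4, 1.0, 1.5)

    # Rules (min = AND, max = OR / aggregation):
    #   IF far from centre AND good transit access THEN suitability is high
    #   IF near the centre OR poor transit access  THEN suitability is low
    r_high = min(dist_far, acc_good)
    r_low = max(dist_near, acc_poor)
    aggregated = np.maximum(np.minimum(r_high, suit_high), np.minimum(r_low, suit_low))

    # Centroid defuzzification
    return float((y * aggregated).sum() / (aggregated.sum() + 1e-9))

print(round(evaluate_site(distance_km=8.0, transit_score=0.9), 2))  # peripheral, well served
print(round(evaluate_site(distance_km=1.5, transit_score=0.3), 2))  # central, poorly served
```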

Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location

Procedia PDF Downloads 324
31 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces Luhmann's autopoietic social systems, starting with the original concept of autopoiesis developed by biologists and scientists and including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained in the three existing groups of ecological phenomena: interaction, social, and medical sciences. This hypothetical model, nevertheless, has a nonlinear interaction with its natural environment, an 'interactional cycle' for the exchange of photon energy with molecules without any changes in topology. The external forces in the system's environment might be concomitant with the influence of natural fluctuations (e.g., radioactive radiation, electromagnetic waves). The cantilever sensor provides insights for a future chip processor for the prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board for intra-chip and inter-chip transmission, producing electromagnetic energy at approximately 1.7 mA at 3.3 V to serve detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate multiple functions of the vessels' natural de-cellular structure for replenishment. The interior actuators deploy base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably. The exterior actuators are designed to sense different variations in the corresponding patterns of beat-to-beat heart rate variability (HRV) for spatial autocorrelation of molecular communication, which draws on human electromagnetic, piezoelectric, electrostatic, and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype dynamic energy sensor has been investigated in the laboratory for the inclusion of nanoscale devices in the architecture, with fuzzy logic control for the detection of thermal and electrostatic changes and with optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial aspects of molecular frictional properties are adjusted to each other and form unique spatial structure modules that provide the environment's mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.

Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system

Procedia PDF Downloads 123
30 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education

Authors: Gleyce Assis Da Silva Barbosa

Abstract:

In recent years, we have witnessed a vertiginous growth of large education companies. Daughters of national and global capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks built through mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium, and large schools and universities, teaching systems, and other products and services. They are also able to weave their webs, directly or indirectly, in philanthropic circles, limited partnerships, family businesses, and even in public education through various mechanisms of outsourcing, privatization, and commercialization of products for the sector. Although the growth of these groups in basic education seems a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to higher education conglomerates and other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in the formulation of guidelines to boost the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer clients. Financial power combined with the neoliberalization of state public policies allowed the profusion of social exclusion and an increase in the number of individuals without access to basic services, alongside deindustrialization, automation, capital volatility, and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generate various impacts, such as the precariousness of work. Understanding the connection between these processes, which shape the economy, allows us to see their consequences in labor relations and in the territory. In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the directions of education on the national and even international scene, since this process is linked to the multiple scales of financial globalization. Therefore, the present research has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. The methodology combined a survey of laws, data, and public policies on the subject; data from these companies available on investor-relations websites; a survey of information on global and national companies operating in Brazilian basic education; and a mapping of the expansion of educational oligopolies using public data on school locations. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can have on teaching work.

Keywords: financialization, oligopolies, education, Brazil

Procedia PDF Downloads 64
29 The Messy and Irregular Experience of Entrepreneurial Life

Authors: Hannah Dean

Abstract:

The growth ideology, and its association with progress, is an important construct in the narrative of modernity. This ideology is embedded in neoclassical economic growth theory, which conceptualises growth as linear and predictable and the entrepreneur as a rational economic manager. This conceptualisation has been critiqued for reinforcing the managerial discourse in entrepreneurship studies. Despite these critiques, both neoclassical growth theory and its adjacent managerial discourse dominate entrepreneurship studies, notably the literature on female entrepreneurs, which is the focus of this paper. Given this emphasis on growth, female entrepreneurs are portrayed as problematic because their growth lags behind that of their male counterparts. This image, which ignores the complexity and diversity of female entrepreneurs’ experience, persists in the literature due to the lack of studies that analyse the process and contextual factors surrounding female entrepreneurs’ experience. This study aims to address the subordination of female entrepreneurs by questioning the hegemonic logic of economic growth and the managerial discourse as a true representation of the entrepreneurial experience. This objective is achieved by drawing on Schumpeter’s theorising and narrative inquiry. This exploratory study undertakes in-depth interviews to gain insights into female entrepreneurs’ experience and the impact of the economic growth model and the managerial discourse on their performance. The narratives challenge a number of assumptions about female entrepreneurs. The participants occupied senior positions in the corporate world before setting up their businesses. This is at odds with much writing, which assumes that women underperform because they leave their careers to achieve work-life balance without gaining managerial experience. In line with Schumpeter, who distinguishes the entrepreneur from the manager, the participants’ main function was innovation. They did not believe that the managerial paradigm governing their corporate careers was applicable to their entrepreneurial experience: formal planning and managerial rationality could hinder their decision-making process. The narratives point to the gap between the two worlds, which makes stepping into entrepreneurship a scary move. Schumpeter argues that the entrepreneurial process is evolutionary and that failure is an integral part of it. The participants’ entrepreneurial process was indeed irregular; the performance of new combinations was not always predictable, so they relied on their initiative. Inhibiting these traits had an adverse effect on business growth. The narratives also indicate that over-reliance on growth threatens business survival as the business faces competing pressures. The study offers theoretical and empirical contributions to (female) entrepreneurship studies by presenting Schumpeter’s theorising as an alternative theoretical framework to neoclassical economic growth theory. The study also reduces entrepreneurs’ vulnerability by making them aware of the negative influence that the linear growth model and the managerial discourse hold over their performance. Finally, the study has implications for policy makers, as it generates new knowledge incorporating the current social and economic changes in entrepreneurs’ contexts, which can no longer be captured by linear growth models, especially in the current economic climate.

Keywords: economic growth, female entrepreneurs, managerial discourse, Schumpeter

Procedia PDF Downloads 296
28 The Connection between Qom Seminaries and Interpretation of Sacred Sources in Ja‘farī Jurisprudence

Authors: Sumeyra Yakar, Emine Enise Yakar

Abstract:

Iran presents itself as Islamic, first and foremost, and thus it can be said that sharī’a is the political and social centre of the state. However, actual practice reveals distinct interpretations and understandings of the sharī’a. The research can be categorised within the framework of logic in Islamic law and theology. The first task of this paper will be to identify how the sharī’a is understood in Iran by mapping out how judges apply the law in their respective jurisdictions. The attention will then move from a simple description of the diversity of sharī’a understandings to the question of how that diversity relates to social concepts and cultures. This, of course, necessitates a brief exploration of Iran’s historical background, which will also allow for an understanding of sectarian influences and the significance of certain events. The main purpose is to reach an understanding of the process of applying sources to formulate solutions that are in accordance with sharī’a, and of how religious education is pursued in order to become an official judge. Ultimately, this essay explores these questions by linking the practices to the secondary sources of Islamic law. It is important to emphasise that the cultural components of Islamic law must be compatible with the aims of Islamic law and its fundamental sources. The sharī’a consists of more than just legal doctrines (fiqh) and interpretive activities (ijtihād). Its contextual and theoretical framework reveals a close relationship with the cultural and historical elements of society. This has meant that its traditional reproduction over time has relied on being embedded in a highly particular form of life. Thus, as acknowledged by pre-modern jurists, the sharī’a encompasses a comprehensive approach to the requirements of justice in legal, historical and political contexts. In theological and legal areas that carry the specific authority of tradition, Iran adheres to Shī’a doctrine, and this explains why the Shī’a religious establishment maintains a dominant position in matters relating to law and the interpretation of sharī’a. The statements and interpretations of the tradition are distinctly different from Sunnī interpretations, and so the use of different sources can be understood as the main reason for the discrepancies in the application of sharī’a between Iran and other Muslim countries. The sharī’a has often accommodated prevailing customs; moreover, it has developed legal mechanisms to allow for its adaptation to particular needs and circumstances in society. While jurists may operate within the realm of governance and politics, the moral authority of the sharī’a ensures that these actors legitimate their actions with reference to God’s commands. The Iranian regime enshrines the principle of vilāyāt-i faqīh (guardianship of the jurist), which enables jurists to resolve the conflict between law as an ideal system, in theory, and law in practice. The paper aims to show how the religious educational system works in harmony with the governmental authorities under the concept of vilāyāt-i faqīh in Iran and contributes to the creation of religious custom in society.

Keywords: guardianship of the jurist (vilāyāt-i faqīh), imitation (taqlīd), seminaries (hawza), Shi’i jurisprudence

Procedia PDF Downloads 223
27 Scenarios of Digitalization and Energy Efficiency in the Building Sector in Brazil: 2050 Horizon

Authors: Maria Fatima Almeida, Rodrigo Calili, George Soares, João Krause, Myrthes Marcele Dos Santos, Anna Carolina Suzano E. Silva, Marcos Alexandre Da

Abstract:

In Brazil, the building sector accounts for 1/6 of energy consumption and 50% of electricity consumption. This complex sector, with several driving actors, plays an essential role in the country's economy. Currently, digitalization readiness in the sector is still low, mainly due to high investment costs and the difficulty of estimating the benefits of digital technologies in buildings. Nevertheless, the potential contribution of digitalization to increasing energy efficiency in the Brazilian building sector has been pointed out as relevant in both political and sectoral contexts, in the medium and long-term horizons. To contribute to the debate on the possible evolving trajectories of digitalization in the building sector in Brazil, and to support the formulation or revision of current public policies and managerial decisions, three future scenarios were created to anticipate the potential energy efficiency gains attributable to digitalization by 2050. This work presents these scenarios as a basis for foresighting the potential energy efficiency in this sector according to different digitalization paces - slow, moderate, or fast - in the 2050 horizon. A methodological approach was proposed to create alternative prospective scenarios, combining the Global Business Network (GBN) and the Laboratory for Investigation in Prospective Strategy and Organisation (LIPSOR) methods. This approach consists of seven steps: (i) definition of the question to be foresighted and the time horizon to be considered (2050); (ii) definition and classification of a set of key variables, using prospective structural analysis; (iii) identification of the main actors with an active role in the digital and energy spheres; (iv) characterization of the current situation (2021) and identification of the main uncertainties considered critical to the development of alternative future scenarios; (v) scanning of possible futures using morphological analysis; (vi) selection and description of the most likely scenarios; and (vii) foresighting the potential energy efficiency in each of the three scenarios, namely slow, moderate, and fast digitalization. Each scenario begins with a core logic and then encompasses potentially related elements, including potential energy efficiency. The first scenario refers to digitalization at a slow pace, with induction by the government limited to public buildings. In the second scenario, digitalization is implemented at a moderate pace, induced by the government in public, commercial, and service buildings, through regulation integrating digitalization and energy efficiency mechanisms. In the third scenario, digitalization in the building sector is implemented at a fast pace and is strongly induced by the government, but with broad participation of private investment and accelerated adoption of digital technologies. With the slow pace of digitalization in the sector, the potential for energy efficiency stands at levels below 10% of the total of 161 TWh by 2050. In the moderate digitalization scenario, the potential reaches 20 to 30% of the total 161 TWh by 2050. Furthermore, in the fast digitalization scenario, it reaches 30 to 40% of the total 161 TWh by 2050.
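
Taking the percentage bands and the 161 TWh total quoted above at face value, the short sketch below simply converts each scenario's band into absolute savings; nothing beyond that arithmetic is assumed.

```python
# Recompute the absolute energy-efficiency potential per scenario from the
# percentage bands and the 161 TWh total quoted in the abstract.
TOTAL_TWH = 161

scenarios = {
    "slow digitalization": (0.00, 0.10),      # "below 10%"
    "moderate digitalization": (0.20, 0.30),  # "20 to 30%"
    "fast digitalization": (0.30, 0.40),      # "30 to 40%"
}

for name, (lo, hi) in scenarios.items():
    print(f"{name}: {lo * TOTAL_TWH:.1f}-{hi * TOTAL_TWH:.1f} TWh by 2050")
# slow: 0.0-16.1 TWh, moderate: 32.2-48.3 TWh, fast: 48.3-64.4 TWh
```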

Keywords: building digitalization, energy efficiency, scenario building, prospective structural analysis, morphological analysis

Procedia PDF Downloads 113
26 Agri-Food Transparency and Traceability: A Marketing Tool to Satisfy Consumer Awareness Needs

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The link between man and food plays a central role in the social and economic system, where cultural and multidisciplinary aspects intertwine: food is not only nutrition, but also communication, culture, politics, environment, science, ethics, fashion. This multi-dimensionality has many implications for the food economy. In recent years, consumers have become more conscious about their food choices, leading to a consistent change in consumption models. This change concerns several aspects: awareness of food system issues, socially and environmentally conscious decision-making, and food choices based on characteristics other than nutritional ones, i.e. the origin of food, how it is produced, and who is producing it. In this frame, the ‘consumption choices’ of the individual and the ‘interests of the citizen’ merge into one another. The figure of the ‘Citizen Consumer’ is born: an individual who is responsible and ethically motivated to change his lifestyle in pursuit of sustainable consumption. At the same time, branding, which was once a guarantee of product quality, is now questioned. In order to meet these needs, Agri-Food companies are developing specific product lines that follow two main philosophies: ‘Back to basics’ and ‘Less is more’. However, the demand for ethical behavior does not seem to find an adequate response in the market offer, most likely because the communication strategies used are very often based on market logic and rarely on an ethical one. The label in its classic concept of ‘clean labeling’ can no longer be the only instrument through which to convey product information, and its evolution towards a concept of ‘clear label’ is necessary to embrace ethical and transparent concepts and to progress the democratization of the Food System. The implementation of a voluntary traceability path, relying on the technological models of the Internet of Things or Industry 4.0, would enable the Agri-Food Supply Chain to collect data that, if properly treated, could satisfy the information needs of consumers. A change of approach is therefore proposed towards Agri-Food traceability, no longer intended as a tool used to respond to the legislator, but rather as a promotional tool useful for telling the company’s story in a transparent manner and thereby reaching the market segment of food citizens. The use of mobile technology can also facilitate this information transfer. However, in order to guarantee maximum efficiency, an appropriate communication model based on ethical communication principles should be used, one which aims to overcome the pipeline communication model and offer the listener a new way of narrating the food product, based on real data collected through traceability processes. The Citizen Consumer is therefore placed at the centre of the new communication model, in which he has the opportunity to choose what to know and how. The new label creates a virtual access point capable of telling the product’s story from different points of view, following personal interests and offering several content modalities to support different situations and usability.
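
As a minimal sketch of the kind of record a voluntary, IoT-supported traceability path could expose through a ‘clear label’ access point, the example below models one traceability event per supply-chain step and serialises the history, for instance for a QR code on the package. The field names, stages and sample values are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class TraceEvent:
    """One step in a product's history, as it might be captured along the chain."""
    product_id: str
    stage: str            # e.g. "harvest", "processing", "transport" (illustrative)
    actor: str            # company or farm responsible for this step
    location: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    readings: dict = field(default_factory=dict)  # e.g. IoT temperature, humidity

def clear_label_payload(events):
    """Serialise the full history so a label's QR code can link to it."""
    return json.dumps([asdict(e) for e in events], indent=2)

# Hypothetical product history used only to demonstrate the structure.
history = [
    TraceEvent("EVOO-0421", "harvest", "Masseria Rossi", "Apulia, IT",
               readings={"lot_kg": 850}),
    TraceEvent("EVOO-0421", "transport", "GreenLog", "Bari -> Milan",
               readings={"max_temp_c": 21.4}),
]
print(clear_label_payload(history))
```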

Keywords: agri food traceability, agri-food transparency, clear label, food system, internet of things

Procedia PDF Downloads 156
25 India’s Foreign Policy toward its South Asian Neighbors: Retrospect and Prospect

Authors: Debasish Nandy

Abstract:

India’s foreign policy towards all of her neighboring countries is determined by multi-dimensional factors. India’s relations with its South Asian neighbors can be classified into three categories. In the first category are four countries - Sri Lanka, Bangladesh, Nepal, and Afghanistan - whose bilateral relationships have encompassed cooperation, irritants, problems and crises at different points in time. With Pakistan, the relationship has been perpetually adversarial. The third category includes Bhutan and the Maldives, whose relations are marked by friendship and cooperation, free of any bilateral problems. It is needless to say that Jawaharlal Nehru emphasized friendly relations with the neighboring countries. Subsequent Prime Ministers of India, especially I. K. Gujral, advocated peaceful and friendly relations with the subcontinental countries. He offered a distinctive idea for fostering bilateral relations with the neighbors, known as the ‘Gujral Doctrine’. A dramatic change has been witnessed in Indian foreign policy since 1991. In the post-Cold War period, India’s national security has been vehemently threatened by terrorism, originating from Pakistan-Afghanistan and partly from Bangladesh. India requires cooperative security, which can be built through mutual understanding among the South Asian countries. Additionally, the countries of South Asia need to evolve the concept of ‘Cooperative Security’ to explain the underlying logic of regional cooperation. According to C. Rajamohan, ‘cooperative security could be understood, as policies of governments, which see themselves as former adversaries or potential adversaries to shift from or avoid confrontationist policies.’ Cooperative security essentially reflects a policy of dealing peacefully with conflicts, not merely by abstention from violence or threats but by active engagement in negotiation, a search for practical solutions and a commitment to preventive measures. It assumes the existence of a condition in which the two sides possess the military capabilities to harm each other. Establishing cooperative security involves a complex process of building confidence. South Asian nations have often engaged with each other in hostility, extra-regional powers have long exerted their influence in the region, and South Asian nations are busy purchasing military equipment: in spite of weakened economic systems, these states spend huge amounts of money on their security. India is the big power in this region in every respect, and the big state-small state syndrome is a negative factor in this regard. However, India will have to take the initiative to extend ‘track II diplomacy’, or soft diplomacy, for its own security as well as the security of the region. Confidence-building measures could help rejuvenate not only SAARC but also build trust and mutual confidence between India and its neighbors in South Asia. In this paper, I will focus on different aspects of India’s policy towards its South Asian neighbors. The paper will also examine how India deals with these countries by using a mixed type of diplomacy, combining idealistic and realistic points of view. Security and cooperation are two major determinants of India’s foreign policy towards its South Asian neighbors.

Keywords: bilateral, diplomacy, infiltration, terrorism

Procedia PDF Downloads 538
24 The Display of Age-Period/Age-Cohort Mortality Trends Using 1-Year Intervals Reveals Period and Cohort Effects Coincident with Major Influenza A Events

Authors: Maria Ines Azambuja

Abstract:

Graphic displays of Age-Period-Cohort (APC) mortality trends generally use data aggregated within 5- or 10-year intervals. Technology now allows one to process much larger amounts of data, and displaying occurrences by 1-year intervals is a logical first step towards higher-quality landscapes of variation in temporal occurrences. Method: 1) comparison of UK mortality trends plotted by 10-, 5- and 1-year intervals; 2) comparison of UK and US mortality trends (period x age and cohort x age) displayed by 1-year intervals. Source: mortality data (period, 1x1, males, 1933-1912) were uploaded from the Human Mortality Database to Excel files, where Period x Age and Cohort x Age graphics were produced. The choice to transform age-specific trends from calendar years to birth-cohort years (cohort = period - age), instead of using the cohort 1x1 data available at the HMD, was made to facilitate the comparison of age-specific trends when looking across calendar years and birth cohorts. Yearly live births, males, 1933 to 1912 (UK), were uploaded from the HFD. Influenza references are from the literature. Results: 1) The use of 1-year intervals unveiled previously unsuspected period, cohort and interacting period x cohort effects upon all-causes mortality. 2) The UK and US figures showed variations associated with particular calendar years (1936, 1940, 1951, 1957-68, 1972) and, most surprisingly, with particular birth cohorts (1889-90 in the US, and 1900, 1918-19, 1940-41 and 1946-47 in both countries). The figures also showed ups and downs in age-specific trends initiated at particular birth cohorts (1900, 1918-19 and 1947-48) or particular calendar years (1968, 1972, 1977-78 in the US), variations at times restricted to just a range of ages (cohort x period interacting effects). Importantly, most of the identified "scars" (period and cohort) correlate with the record of occurrences of Influenza A epidemics since the late 19th Century. Conclusions: The use of 1-year intervals to describe APC mortality trends both increases the amount of information available, thus enhancing the opportunities for pattern recognition, and increases our capability to interpret those patterns by describing trends across smaller intervals of time (period or birth cohort). The US and UK mortality landscapes share many, but not all, of the 'scars' and distortions suggested here to be associated with influenza epidemics. Different size-effects of wars are evident, both in mortality and in fertility. It would also be realistic to suppose that the preponderant influenza A viruses circulating in the UK and the US at the beginning of the 20th Century might have been different, with long-term intergenerational consequences of that difference. Compared with the live births trend (UK data), birth-cohort scars clearly depend on birth-cohort sizes relative to neighboring ones, which, if causally associated with influenza, would result from influenza-related fetal outcomes/selection. Fetal selection could introduce continuing modifications in population patterns of immune-inflammatory phenotypes that might give rise to 'epidemic constitutions' favoring the occurrence of particular diseases. Comparative analysis of mortality landscapes may help us set straight the record of past circulation of Influenza viruses and document associations between influenza recycling and fertility changes.
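
A minimal sketch of the transformation described above, assuming a period 1x1 table laid out roughly like the HMD files with Year, Age and Male columns (the file and column names are assumptions): the birth cohort is derived as period minus age, and the data are pivoted so age-specific trends can be read across both calendar years and birth cohorts.

```python
import pandas as pd

# Assumed layout loosely following the HMD period 1x1 files: one row per
# calendar year and single year of age, with a male mortality rate column.
df = pd.read_csv("mx_1x1_males.csv")                    # columns: Year, Age, Male (assumed)
df["Age"] = pd.to_numeric(df["Age"], errors="coerce")   # open age group "110+" -> NaN
df = df.dropna(subset=["Age"])
df["Age"] = df["Age"].astype(int)

# Birth cohort derived exactly as described: cohort = period - age.
df["Cohort"] = df["Year"] - df["Age"]

# Period x Age and Cohort x Age landscapes at 1-year resolution.
period_by_age = df.pivot_table(index="Year", columns="Age", values="Male")
cohort_by_age = df.pivot_table(index="Cohort", columns="Age", values="Male")

# Example: follow male mortality at age 30 across successive birth cohorts.
print(cohort_by_age[30].dropna().head())
```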

Keywords: age-period-cohort trends, epidemic constitution, fertility, influenza, mortality

Procedia PDF Downloads 230
23 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development

Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg

Abstract:

Due to the volatility of markets and the reduction of product lifecycles, manufacturing companies from high-wage countries are nowadays faced with the challenge of bringing more innovative products to market within ever shorter development times. At the same time, volatile customer requirements have to be satisfied in order to differentiate successfully from market competitors. One potential approach to addressing these challenges is provided by agile values and principles. These agile values and principles have already proved their success within software development projects in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. Those models lead to significant improvements regarding quality, costs and development time and are therefore used within most software development projects. Motivated by this success within the software industry, manufacturing companies have ever since tried to transfer agile mechanisms of action to the development of hardware products. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has yet been developed for hardware development, due to the different constraints of the domains. For this reason, this paper focusses on the design of agile product development processes by transferring mechanisms of action used in agile software development towards product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and subsequently composing the elements of both systems symbiotically with respect to the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of systems theory. By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities and artefacts are identified within a target, action and object system. In the second partial model, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified into a superior strategy level, a system level comprising characteristic, domain-independent activities and their cause-effect relationships, and an activity-based element level. In the third partial model, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. For this purpose, the target, action and object systems of product development are compared with the strategy, system and element levels of the agile mechanisms of action using graph theory. Furthermore, the necessity of activities within an iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in the form of different types of iterations in a final step. By defining iteration-differentiating characteristics and their interdependencies, a logic for the configuration of activities, their form of execution and the relevant artefacts for the specific iteration is developed. Furthermore, characteristic types of iteration for agile product development are identified.
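
A minimal sketch of the comparison step described above, using a plain adjacency mapping rather than a formal graph library; the mechanism and activity names are invented placeholders, not elements taken from the partial models.

```python
# Illustrative only: which agile mechanisms of action (system level) influence
# which elements of the product-development action system. Names are placeholders.
influences = {
    "timeboxed iteration": ["concept definition", "prototype build", "design review"],
    "continuous customer feedback": ["requirements update", "design review"],
    "incremental delivery": ["prototype build", "validation test"],
}

# Invert the graph: for each development activity, which mechanisms act on it?
activities = {}
for mechanism, targets in influences.items():
    for activity in targets:
        activities.setdefault(activity, []).append(mechanism)

# Activities touched by no mechanism keep their full degrees of freedom and can be
# configured conventionally; the rest are candidates for agile iteration types.
for activity, mechanisms in sorted(activities.items()):
    print(f"{activity}: influenced by {', '.join(mechanisms)}")
```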

Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom

Procedia PDF Downloads 206
22 Understanding the Construction of Social Enterprises in India: Through Identity and Context of Social Entrepreneurs

Authors: K. Bose

Abstract:

India is one of the largest democracies in the global south and demonstrates the highest social enterprise activity in the subcontinent. Although there has been a meteoric rise in social enterprise activity, it is not a new phenomenon, as it dates back to Vinoba Bhave's Land Gift movement in 1950. India also has a rich history of a welfare mix in which non-governmental organisations played a significant role in public welfare provision. Lately, the government’s impetus on entrepreneurship has contributed to a burgeoning social enterprise sector in the country; however, there is a lack of understanding of how social enterprises are constructed in India. Social entrepreneurship as practice has been conceptualised as a multi-dimensional concept, which is predominantly explained through the characteristics of the social entrepreneur. Social enterprise organisations, a component of social entrepreneurship practice, are also classified through the role of the social entrepreneur, thus making the social entrepreneur a vital unit shaping organisation and practice. Hence, the individual identity of the social entrepreneur acts as a steering agent for defining organisation and practice. Individual identity does not operate in a vacuum, and different isomorphic pressures (resource-rich actors/institutions) lead to negotiation of these identities. Dey and Teasdale's work investigated this identity work of non-profit practitioners within the practice of social enterprises in England. Furthermore, the construction of social enterprises is predominantly understood through two approaches: an institutional logic perspective emerging from Europe, and a process and outcome perspective derived from the United States. These two approaches explain social enterprise as an inevitable institutional outcome in a linear and simplistic manner. Such a linear institutional transition is inferred from structural policy reforms and austerity measures adopted by governments, which led to heightened competition for funds in the non-profit sector. These political and economic challenges were specific to the global north and differ from the transitions experienced in the global south; thus, further investigation would help understand social enterprise activity as a contextual phenomenon. There is growing interest in understanding the role of context within the entrepreneurship literature; additionally, there is growing recognition in entrepreneurship research that economic behaviour is understood far better within its historical, temporal, institutional, spatial and social context, as these contexts provide boundaries to individuals in terms of opportunities and actions. The social enterprise phenomenon, too, is a contextual phenomenon; though it differs from traditional entrepreneurship in terms of its dual mission (social and economic), the understanding of the role of context in social entrepreneurship has been limited. Hence, this work-in-progress study integrates the identity work of social entrepreneurs and the role of context. It investigates the identities of social entrepreneurs and their negotiation within context, and how this negotiated identity translates into organisational practice, in turn shaping how social enterprises are constructed in a specific region. The study employs a qualitative inquiry of semi-structured interviews and ethnographic institutionalism. Interviews were analysed using critical discourse analysis, and the preliminary outcomes are currently a work in progress.

Keywords: context, Dey and Teasdale, identity, social entrepreneurs, social enterprise, social entrepreneurship

Procedia PDF Downloads 180
21 A Randomised Controlled Trial and Process Evaluation of the Lifestart Parenting Programme

Authors: Sharon Millen, Sarah Miller, Laura Dunne, Clare McGeady, Laura Neeson

Abstract:

This paper presents the findings from a randomised controlled trial (RCT) and process evaluation of the Lifestart parenting programme. Lifestart is a structured, child-centred programme of information and practical activity for parents of children aged from birth to five years. It is delivered to parents in their own homes by trained, paid family visitors, and it is offered to parents regardless of their social, economic or other circumstances. The RCT evaluated the effectiveness of the programme, while the process evaluation documented programme delivery and included a qualitative exploration of parent and child outcomes. 424 parents and children participated in the RCT: 216 in the intervention group and 208 in the control group, across the island of Ireland. Parent outcomes included parental knowledge of child development, parental efficacy, stress, social support, parenting skills and embeddedness in the community. Child outcomes included cognitive, language and motor development and social-emotional and behavioural development. Both groups were tested at baseline (when children were less than 1 year old), mid-point (aged 3) and post-test (aged 5). Data were collected during a home visit, which took two hours. The process evaluation consisted of interviews with parents (n=16 at baseline and end-point) and focus groups with Lifestart Coordinators (n=9) and Family Visitors (n=24). Quantitative findings from the RCT indicated that, compared to the control group, parents who received the Lifestart programme reported reduced parenting-related stress, increased knowledge of their child’s development, and improved confidence in their parenting role. These changes were statistically significant and consistent with the hypothesised pathway of change depicted in the logic model. There was no evidence of any change in parents’ embeddedness in the community. Although four of the five child outcomes showed small positive changes for children who took part in the programme, these were not statistically significant, and there is no evidence that the programme improves child cognitive and non-cognitive skills by immediate post-test. The qualitative process evaluation highlighted important challenges related to conducting trials of this magnitude and design in the general population. Parents reported that a key incentive to take part in the study was receiving feedback from the developmental assessment, which formed part of the data collection; this highlights the potential importance of appropriate incentives for the recruitment and retention of participants. The interviews with intervention parents indicated that one of the first changes they experienced as a result of the Lifestart programme was increased knowledge and confidence in their parenting ability. The outcomes and pathways perceived by parents and described in the interviews are also consistent with the findings of the RCT and the theory of change underpinning the programme, which hypothesises that improvement in parental outcomes, arising as a consequence of the programme, mediates the change in child outcomes. Parents receiving the Lifestart programme reported great satisfaction with and commitment to the programme, with the role of the Family Visitor identified as one of its key components.

Keywords: parent-child relationship, parental self-efficacy, parental stress, school readiness

Procedia PDF Downloads 444
20 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy

Authors: M. Regina Carreira-Lopez

Abstract:

Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a ‘child node’. This logic relation (L) can be formally defined as a triple ontological relation LO = ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes so as to form a rooted tree over ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implementing SQL programming code and allows data export to XML schemas, enabling semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that co-associations between (t) and its corresponding synonyms and antonyms (synsets) are likewise inverse. The results from grouping facets, or polysemic words with synsets, in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with the semantic indexing of hypernymy phenomena for subject-heading lists and authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and appears to achieve a proper and more selective search of e-documents (classification ontology). It can also foster the production of on-line catalogs for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models after a prior adaptation of the base ontology to structured meta-languages, such as OWL or RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a semantic indexing approach to facets. It enables information retrieval as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCFL, BKF⟩, where LN is a set of entities that connect with other nodes to form a rooted tree over ⟨LN, LE⟩, LT specifies a set of terms, and LCFL acts as a finite set of concepts encoded in a formal language (FL). Mondoc only resolves partial problems of linguistic ambiguity (in the case of synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target meta-languages with structured documents in XML.
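
A minimal sketch of the ⟨LN, LE, LC⟩ structure defined above, with invented node labels: LE encodes child-to-parent (hypernymy) edges forming a rooted tree, and a simple walk up the tree recovers the hypernym chain used for faceted classification. The sketch is written in Python purely for illustration; the Mondoc implementation itself relies on R and RMySQL.

```python
# A minimal rendering of the <LN, LE, LC> structure: LN = nodes, LE = child->parent
# (hypernymy) edges forming a rooted tree, LC = concepts attached to nodes.
# Node labels and glosses are illustrative only.
LN = {"migration", "atlantic_migration", "exile", "economic_migration"}
LE = {                       # child -> parent (hypernym)
    "atlantic_migration": "migration",
    "exile": "migration",
    "economic_migration": "atlantic_migration",
}
LC = {                       # concepts encoded for some nodes
    "migration": "movement of people between territories",
    "atlantic_migration": "migration across the Atlantic, 1776-1939",
}

def hypernym_chain(node):
    """Walk up the rooted tree from a node towards the root."""
    chain = [node]
    while node in LE:
        node = LE[node]
        chain.append(node)
    return chain

print(hypernym_chain("economic_migration"))
# ['economic_migration', 'atlantic_migration', 'migration']
```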

Keywords: hypernymy, information retrieval, lightweight ontology, resonance

Procedia PDF Downloads 124
19 Development and Validation of a Quantitative Measure of Engagement in the Analysing Aspect of Dialogical Inquiry

Authors: Marcus Goh Tian Xi, Alicia Chua Si Wen, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee

Abstract:

The Map of Dialogical Inquiry provides a conceptual look at the underlying nature of future-oriented skills. According to the Map, learning is learner-oriented, with conversational time shifted from teachers to learners, who play a strong role in deciding what and how they learn. For example, in courses operating on the principles of Dialogical Inquiry, learners were able to leave the classroom with a deeper understanding of the topic, broader exposure to differing perspectives, and stronger critical thinking capabilities, compared to traditional approaches to teaching. Despite its contributions to learning, the Map is grounded in a qualitative approach, both in its development and in its application for providing feedback to learners and educators. Studies hinge on open-ended responses by Map users, which can be time-consuming and resource-intensive. The present research is motivated by this gap in practicality and aims to develop and validate a quantitative measure of the Map. In addition, a quantifiable measure may also strengthen applicability by making learning experiences trackable and comparable. The Map outlines eight learning aspects that learners should engage holistically; this research focuses on the Analysing aspect of learning. According to the Map, Analysing has four key components: liking or engaging in logic, using interpretative lenses, seeking patterns, and critiquing and deconstructing. Existing scales of constructs related to these components (e.g., critical thinking, rationality) were identified so that items for the current scale could be adapted from them. Specifically, items were phrased beginning with an "I", followed by an action phrase, to assess learners' engagement with Analysing either in general or in classroom contexts. Paralleling standard scale development procedure, the 26-item Analysing scale was administered to 330 participants alongside existing scales with varying levels of association to Analysing, in order to establish construct validity. Subsequently, the scale was refined, and its dimensionality, reliability, and validity were determined. Confirmatory factor analysis (CFA) revealed whether scale items loaded onto the four factors corresponding to the components of Analysing. To refine the scale, items were systematically removed via an iterative procedure, according to their factor loadings and the results of likelihood ratio tests at each step. Eight items were removed this way. The Analysing scale is better conceptualised as unidimensional, rather than comprising the four components identified by the Map, for three reasons: 1) the covariance matrix of the model specified for the CFA was not positive definite, 2) correlations among the four factors were high, and 3) exploratory factor analyses did not yield an easily interpretable factor structure of Analysing. Regarding validity, since the Analysing scale had higher correlations with conceptually similar scales than with conceptually distinct scales, with minor exceptions, construct validity was largely established. Overall, the satisfactory reliability and validity of the scale suggest that the current procedure can result in a valid and easy-to-use measure for each aspect of the Map.
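
A schematic sketch of the iterative item-removal loop described above. The fit_cfa helper is hypothetical, standing in for whatever CFA routine was actually used, and the likelihood-ratio comparison and alpha = .05 cut-off are assumptions made for illustration rather than the authors' exact criteria.

```python
from scipy.stats import chi2

def refine_scale(items, fit_cfa, alpha=0.05):
    """Schematic item-removal loop: drop the weakest-loading item while a
    likelihood ratio comparison does not favour keeping it. `fit_cfa(items)` is
    a hypothetical helper returning (log_likelihood, n_params, loadings_by_item).
    The two fits are treated as comparable here purely for illustration."""
    loglik, n_params, loadings = fit_cfa(items)
    while len(items) > 2:
        # Candidate for removal: the item with the weakest absolute loading.
        weakest = min(loadings, key=lambda item: abs(loadings[item]))
        candidate = [i for i in items if i != weakest]
        new_loglik, new_params, new_loadings = fit_cfa(candidate)

        # Likelihood ratio statistic between the two fitted models.
        lr = 2 * abs(loglik - new_loglik)
        dof = abs(n_params - new_params) or 1
        if chi2.sf(lr, dof) < alpha:
            break  # the comparison favours retaining the item; stop removing
        items, loglik, n_params, loadings = candidate, new_loglik, new_params, new_loadings
    return items
```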

Keywords: analytical thinking, dialogical inquiry, education, lifelong learning, pedagogy, scale development

Procedia PDF Downloads 90
18 Barriers and Facilitators of Physical Activity among Adults and Older Adults from Black and Minority Ethnic Groups in the UK: A Meta-Ethnographic Study

Authors: Janet Ige, Paul Pilkington, Selena Gray, Jane Powell

Abstract:

Older adults from socially disadvantaged groups and Black and Minority Ethnic (BME) groups experience a higher burden of physical inactivity. Physical inactivity among BME groups is associated with a disproportionately higher level of health inequalities. People from minority ethnic groups encounter more barriers to physical activity; however, this is not often reported. There is very limited review-level evidence on the barriers and facilitators of physical activity among older adults from BME groups in the UK. This study aims to answer the following research question: what are the barriers and facilitators of physical activity participation among adults and older adults from BME backgrounds in the UK? To address this, we conducted a review of qualitative studies investigating the barriers and opportunities for physical activity among BME adults and older adults in the UK. Method: This study is nested in an interpretive paradigm of meta-ethnography. A structured search for published literature was conducted on six electronic databases (MEDLINE, PsycINFO, Cumulative Index to Nursing & Allied Health Literature, Applied Social Sciences Index and Abstracts, Cochrane Database of Systematic Reviews, Allied and Complementary Medicine) from January 2007 to July 2017. Hand searching of the reference lists of publications was performed, in addition to a search on Google Scholar to identify grey literature. Studies were eligible provided they employed any qualitative method and included participants identified as being BME, aged 50 and above, living in any community within the UK. In total, 1036 studies were identified from the structured search for literature; 718 studies were screened by title after duplicates were removed. On applying the inclusion and exclusion criteria, a final selection of 10 studies was considered eligible for synthesis. Quality assessment was performed using the Critical Appraisal Skills Programme tool. Logic maps were used to show the relationships between factors that impact physical activity participation among adults and older adults. Results: Six key themes emerged from the data: awareness of the links between physical activity and health, interaction and engagement with health professionals, cultural expectations and social responsibilities, appropriate environments, religious fatalism, and practical challenges. Findings also showed that the barriers and facilitators of physical activity exist at the individual, community, socio-economic, cultural and environmental levels. There was a substantial gap in research among Black African groups. Findings from the review also informed the design of an ongoing survey investigating the experiences and attitudes of adults from Somali backgrounds towards physical activity in the UK. Conclusion: Identifying the barriers and facilitators of physical activity among BME groups is a crucial step in addressing the widening inequality gap. Findings from this study highlight the importance of engaging local BME residents in the design of exercise facilities within the community. This will ensure that cultural and social concerns are recognized and properly addressed.

Keywords: BME, UK, meta-ethnographic, adults

Procedia PDF Downloads 119