Search results for: enforcement of decisions of ombudsmen
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2138

188 The Effectiveness of an Occupational Therapy Metacognitive-Functional Intervention for the Improvement of Human Risk Factors of Bus Drivers

Authors: Navah Z. Ratzon, Rachel Shichrur

Abstract:

Background: Many studies have assessed and identified the risk factors of safe driving, but there is relatively little research-based evidence concerning the ability to improve the driving skills of drivers in general and in particular of bus drivers, who are defined as a population at risk. Accidents involving bus drivers can endanger dozens of passengers and cause high direct and indirect damages. Objective: To examine the effectiveness of a metacognitive-functional intervention program for the reduction of risk factors among professional drivers relative to a control group. Methods: The study examined 77 bus drivers working for a large public company in the center of the country, aged 27-69. Twenty-one drivers continued to the intervention stage; four of them dropped out before the end of the intervention. The intervention program we developed was based on previous driving models and the guiding occupational therapy practice framework model in Israel, while adjusting the model to professional driving in public transportation and its particular risk factors. Treatment focused on raising awareness of safe-driving risk factors identified at prescreening (ergonomic, perceptual-cognitive and on-road driving data), with reference to the difficulties raised by the driver, and on providing coping strategies. The intervention was customized for each driver and included three two-hour sessions. The effectiveness of the intervention was tested using objective measures: In-Vehicle Data Recorders (IVDR) for monitoring natural driving data, traffic accident data before and after the intervention, and subjective measures (occupational performance questionnaire for bus drivers). Results: Statistical analysis found a significant difference in the degree of change in the rate of IVDR perilous events before and after the intervention (t(17)=2.14, p=0.046). There was a significant difference in the number of accidents per year before and after the intervention in the intervention group (t(17)=2.11, p=0.05), but no significant change in the control group. Subjective ratings of the level of performance and of satisfaction with performance improved in all areas tested following the intervention. The change in the ‘human factors/person’ field was significant (performance: t=-2.30, p=0.04; satisfaction with performance: t=-3.18, p=0.009). The change in the ‘driving occupation/tasks’ field was not significant but showed a tendency toward significance (t=-1.94, p=0.07). No significant differences were found in driving environment-related variables. Conclusions: The metacognitive-functional intervention significantly improved the objective and subjective measures of the safety of bus drivers’ driving. These novel results highlight the potential contribution of occupational therapists, using metacognitive functional treatment, to preventing car accidents among the healthy driver population and improving the well-being of these drivers. This study also enables familiarity with advanced technologies of IVDR systems and enriches the knowledge of occupational therapists with regard to using a wide variety of driving assessment tools and making best-practice decisions.
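
The before/after comparisons reported above are paired t-tests on per-driver measures. A minimal sketch of such a test, using hypothetical per-driver event rates rather than the study's data:

```python
# Minimal sketch of the paired before/after comparison reported above.
# The per-driver values below are hypothetical placeholders, not study data.
from scipy import stats

# Rate of IVDR "perilous events" per driving hour for each driver (illustrative)
events_before = [4.2, 3.8, 5.1, 2.9, 4.7, 3.3, 4.0, 5.5, 3.6, 4.4,
                 2.8, 3.9, 4.6, 5.0, 3.1, 4.3, 3.7, 4.9]
events_after  = [3.5, 3.6, 4.2, 2.7, 4.0, 3.0, 3.8, 4.6, 3.2, 4.1,
                 2.9, 3.4, 4.0, 4.3, 3.0, 3.9, 3.3, 4.2]

# Paired (dependent-samples) t-test: did the event rate change after the intervention?
t_stat, p_value = stats.ttest_rel(events_before, events_after)
print(f"t({len(events_before) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```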

Keywords: bus drivers, IVDR, human risk factors, metacognitive-functional intervention

Procedia PDF Downloads 346
187 Personal Data Protection: A Legal Framework for Health Law in Turkey

Authors: Veli Durmus, Mert Uydaci

Abstract:

Every patient who needs to get medical treatment should share health-related personal data with healthcare providers. Therefore, personal health data plays an important role in making health decisions and identifying health threats during every encounter between a patient and caregivers. In other words, health data can be defined as private and sensitive information which is protected by various health laws and regulations. In many cases, the data are an outcome of the confidential relationship between patients and their healthcare providers. Globally, almost all nations have their own laws, regulations or rules in order to protect personal data. There is a variety of instruments that allow authorities to use health data or to set barriers to data sharing across international borders. For instance, Directive 95/46/EC of the European Union (EU) (also known as the EU Data Protection Directive) establishes harmonized rules within European borders. In addition, the General Data Protection Regulation (GDPR) will set further common principles in 2018. Because of Turkey's close policy relationship with the EU, this study provides information not only on regulations and directives but also on how they play a role during the legislative process in Turkey. Even if the decision is controversial, the Board has recently stated that private and public healthcare institutions are responsible for the patient call system used by doctors to call people waiting outside a consultation room, and for preventing unlawful processing of and unlawful access to personal data during treatment. In Turkey, the vast majority of private and public health organizations provide a service that uses personal data (i.e., the patient’s name and ID number) to call the patient. According to the Board’s decision, hospitals and other healthcare institutions are obliged to take all necessary administrative precautions and provide technical support to protect patient privacy. However, this practice is not performed effectively and efficiently in most health services. For this reason, it is important to draw a legal framework for personal health data by stating what the main purpose of this regulation is and how to deal with complicated issues concerning personal health data in Turkey. This research is a descriptive study of data protection law in the healthcare setting in Turkey. Primary as well as secondary data have been used for the study. The primary data include the information collected under current national and international regulations or laws. Secondary data include publications, books, journals, and empirical legal studies. Consequently, privacy and data protection regimes in health law show there are some obligations, principles and procedures which shall be binding upon natural or legal persons who process health-related personal data. A comparative approach shows that there are significant differences among EU member states due to different legal competencies, policies, and cultural factors. This study provides theoretical and practitioner implications by highlighting the need to illustrate the relationship between privacy and confidentiality in personal data protection in health law. Furthermore, this paper would help to define the legal framework for health law case studies on data protection and privacy.

Keywords: data protection, personal data, privacy, healthcare, health law

Procedia PDF Downloads 224
186 Two-Dimensional Dynamic Motion Simulations of F1 Rear Wing-Flap

Authors: Chaitanya H. Acharya, Pavan Kumar P., Gopalakrishna Narayana

Abstract:

In the realm of aerodynamics, numerous vehicles incorporate moving components to enhance their performance. For instance, airliners deploy hydraulically operated flaps and ailerons during take-off and landing, while Formula 1 racing cars utilize hydraulic tubes and actuators for various components, including the Drag Reduction System (DRS). The DRS, consisting of a rear wing and adjustable flaps, plays a crucial role in overtaking manoeuvres. The DRS has two positions: the default position with the flaps down, providing high downforce, and the lifted position, which reduces drag, allowing for increased speed and aiding in overtaking. Swift deployment of the DRS during races is essential for overtaking competitors. The fluid flow over the rear wing flap becomes intricate during deployment, involving flow reversal and operational changes, leading to unsteady flow physics that significantly influence aerodynamic characteristics. Understanding the drag and downforce during DRS deployment is crucial for determining race outcomes. While experiments can yield accurate aerodynamic data, they can be expensive and challenging to conduct across varying speeds. Computational Fluid Dynamics (CFD) emerges as a cost-effective solution to predict drag and downforce across a range of speeds, especially with the rapid deployment of the DRS. This study employs the finite volume-based solver Ansys Fluent, incorporating dynamic mesh motion and a turbulence model to capture the complex flow phenomena associated with the moving rear wing flap. A dedicated section of the rear wing-flap is considered in the present simulations, and the aerodynamics of these sections closely resemble those of S1223 aerofoils. Before delving into the simulations of the rear wing-flap aerofoil, the numerical results were validated against experimental data from an NLR flap aerofoil case, encompassing different flap angles at two distinct angles of attack. An increase in lift and drag with increasing flap angle is observed for a given angle of attack. The simulation methodology for the rear wing-flap aerofoil case involves specific time durations before lifting the flap. During this period, drag and downforce values are determined as 330 N and 1800 N, respectively. Following the flap lift, a noteworthy reduction in drag to 55% and a decrease in downforce to 17% are observed. This understanding is critical for making instantaneous decisions regarding the deployment of the Drag Reduction System (DRS) at specific speeds, thereby influencing the overall performance of the Formula 1 racing car. Hence, this work emphasizes the utilization of the dynamic mesh motion methodology to predict the aerodynamic characteristics during the deployment of the DRS in a Formula 1 racing car.

Keywords: DRS, CFD, drag, downforce, dynamic mesh motion

Procedia PDF Downloads 94
185 Unlocking New Room of Production in Brown Field; Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt

Authors: Nader Mohamed

Abstract:

The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposits in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian and clastic sediments of the Lower Senonian. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequence in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study focuses on the Matulla Formation located in the eastern part of the Gulf of Suez. The three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behavior and biostratigraphy, together with the outcrops, are used to identify the reservoir characteristics, lithology and facies environments and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, where it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management decisions concerning the development and depletion of hydrocarbon reserves, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically possible. All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoirs and provide information for an effective evaluation of the need to develop the oil potential of the reservoir. This study has shown the effectiveness of: 1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; 2) lithology and facies environment interpretation, which helped in defining the nature of deposition of the Matulla Formation; 3) 3D reservoir modeling technology as a tool for an adequate understanding of the spatial distribution of properties and, in addition, for evaluating the unlocked new reservoir areas of the Matulla Formation which have to be drilled to investigate and exploit the undrained oil; 4) this study led to adding new room of production and additional reserves to the Ras Budran field.
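
The porosity and water saturation models described above feed directly into reserve estimates. A minimal sketch of the generic volumetric STOOIP calculation commonly used for this purpose (the formula is the standard industry expression; the input values are hypothetical and not taken from the Ras Budran models):

```python
# Illustrative sketch of the standard volumetric estimate that links the modelled
# properties (porosity, water saturation, net/gross) to an in-place oil volume.
# All input values are hypothetical placeholders.
def stooip_stb(grv_acre_ft, ntg, porosity, water_saturation, bo_rb_per_stb):
    """Stock-tank oil originally in place, in stock-tank barrels (7758 bbl per acre-ft)."""
    return 7758.0 * grv_acre_ft * ntg * porosity * (1.0 - water_saturation) / bo_rb_per_stb

volume = stooip_stb(grv_acre_ft=250_000,   # gross rock volume (acre-ft), hypothetical
                    ntg=0.55,              # net-to-gross ratio
                    porosity=0.18,         # average porosity from the porosity model
                    water_saturation=0.35, # average Sw from the saturation model
                    bo_rb_per_stb=1.25)    # oil formation volume factor
print(f"STOOIP ~ {volume / 1e6:.1f} MMstb")
```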

Keywords: geology, oil and gas, geoscience, sequence stratigraphy

Procedia PDF Downloads 106
184 Status of Vocational Education and Training in India: Policies and Practices

Authors: Vineeta Sirohi

Abstract:

The development of critical skills and competencies becomes imperative for young people to cope with the unpredictable challenges of the time and prepare for work and life. Recognizing that education has a critical role in reaching sustainability goals, as emphasized by the 2030 Agenda for Sustainable Development, educating youth in global competence, meta-cognitive competencies, and skills from the initial stages of formal education is vital. Further, educating for global competence would help in developing work readiness and boost employability. Vocational education and training in India, as envisaged in various policy documents, remains marginalized in practice as compared to general education. The country is still far away from the national policy goal of tracking 25% of secondary students in grades eleven and twelve into the vocational stream. In recent years, the importance of skill development has been recognized in the present context of globalization and change in the demographic structure of the Indian population. As a result, it has become a national policy priority and has been taken up with renewed focus by the government, which has set the target of skilling 500 million people by 2022. This paper provides an overview of the policies, practices, and current status of vocational education and training in India, supported by statistics from the National Sample Survey, the official statistics of India. The national policy documents and annual reports of the organizations actively involved in vocational education and training have also been examined to capture relevant data and information. It has also highlighted major initiatives taken by the government to promote skill development. The data indicate that, in the age group 15-59 years, only 2.2 percent reported having received formal vocational training and 8.6 percent non-formal vocational training, whereas 88.3 percent did not receive any vocational training. At present, the coverage of vocational education is abysmal, as less than 5 percent of students are covered by the vocational education programme. Besides launching various schemes to address the mismatch of skill supply and demand, the government, through its National Policy on Skill Development and Entrepreneurship 2015, proposes to bring about inclusivity by bridging the gender, social and sectoral divides, ensuring that the skilling needs of socially disadvantaged and marginalized groups are appropriately addressed. It is fundamental that the curriculum is aligned with the demands of the labor market, incorporating more entrepreneurial skills. Creating nonfarm employment opportunities for educated youth will be a challenge for the country in the near future. Hence, there is a need to formulate specific skill development programs for this sector and also programs for upgrading their skills to enhance their employability. There is a need to promote female participation in work and in non-traditional courses. Moreover, rigorous research and the development of a robust information base for skills are required to inform policy decisions on vocational education and training.

Keywords: policy, skill, training, vocational education

Procedia PDF Downloads 152
183 Performing Arts and Performance Art: Interspaces and Flexible Transitions

Authors: Helmi Vent

Abstract:

This four-year artistic research project has set the goal of exploring the adaptable transitions within the realms between the two genres. This paper will single out one research question from the entire project for its focus, namely how and under what circumstances such transitions between a reinterpretation and a new creation can take place during the performative process. The film documentation that accompanies the project was produced at the Mozarteum University in Salzburg, Austria, as well as on diverse everyday stages at various locations. The model institution that hosted the project is the LIA – Lab Inter Arts, under the direction of Helmi Vent. LIA combines artistic research with performative applications. The project participants are students from various artistic fields of study. The film documentation forms a central platform for the entire project. It functions as an audiovisual record of performative origins and development processes, while serving as the basis for analysis and evaluation, including the self-evaluation of the recorded material, and it also serves as illustrative and discussion material in relation to the topic of this paper. Regarding the ‘interspaces’ and variable ‘transitions’: the performing arts in Western cultures generally orient themselves toward existing original compositions – most often in the interconnected fields of music, dance and theater – with the goal of reinterpreting and rehearsing a pre-existing score, choreographed work, libretto or script and presenting that respective piece to an audience. The essential tool in this reinterpretation process is generally the artistic ‘language’ performers learn over the course of their main studies. Thus, speaking is combined with singing, playing an instrument is combined with dancing, or with pictorial or sculpturally formed works, in addition to many other variations. If the performing arts rid themselves of their designations from time to time and initially follow the emerging, diffusely gliding transitions into the unknown, the artistic language the performer has learned becomes a creative resource. The illustrative film excerpts depicting the realms between performing arts and performance art present insights into the ways the project participants embrace unknown and explorative processes, thus allowing new performative designs or concepts to be invented between the participants’ acquired cultural and artistic skills and their own creations – according to their own ideas and issues, sometimes with their direct involvement, fragmentary, provisional, left as a rough draft or fully composed. All in all, it is an evolutionary process, and its key parameters cannot be distilled down to their essence. Rather, they stem from a subtle inner perception, from deep-seated emotions, imaginations, and non-discursive decisions, which ultimately result in an artistic statement rising to the visible and audible surface. Within these realms between performing arts and performance art and their extremely flexible transitions, exceptional opportunities can be found to grasp and realise art itself as a research process.

Keywords: art as research method, Lab Inter Arts (LIA), performing arts, performance art

Procedia PDF Downloads 271
182 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes

Authors: Alicia Ettlin

Abstract:

Amid the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people have started to internalise a neoliberal way of thinking, whereby the human body has become an entity that can and needs to be precisely managed through free yet rational decision-making processes. The neoliberal citizen has consequently become an entrepreneur of the self who is free, independent, rational, productive and responsible for themselves, their health and wellbeing as well as their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, become increasingly criticised for viewing the social actor as ‘disembodied’, a detached actor whose powerful mind governs the passive body. On the other hand, the discourse around embodiment seeks to connect rational decision-making processes to the dominant neoliberal discourse, which creates an embodied understanding that the body, just like other areas of people’s lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body regarding its connections with the social environment that reaches beyond the debates around mind-body binary thinking. Hence, following this argument, body management should not be thought of as either solely guided by embodied discourses or as merely falling into a mind-body dualism, but rather, simultaneously and inseparably, as both at once. The descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the ages of 18 and 24 has shown that most participants are interested in measuring and managing their body to create self-knowledge and self-improvement. The participants thereby connected self-improvement to weight loss, muscle gain or simply staying fit and healthy. Self-knowledge refers to body measurements including weight, BMI or body fat percentage. Self-management and self-knowledge, which rely on one another for rational and well-thought-out decisions, are both characteristic values of the neoliberal doctrine. A neoliberal way of thinking about and looking after the body was also connected by many participants to rewarding themselves for their discipline, hard work or achievement of specific body management goals (e.g., eating chocolate for reaching the daily step count goal). A few participants, however, showed resistance to these neoliberal values and, in particular, to the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility, and by association, a sense of duty to discipline their body in normative ways. Even those who indicated resistance to body work and body management practices that follow neoliberal thinking and measurement systems are aware of, and have internalised, the concept of the rationally operating mind that needs to, or should, decide how to look after the body in terms of health as well as appearance ideals. The discussion around the collected data thereby shows that embodiment and the mind/body dualism constitute two connected, rather than separate or opposing, concepts.

Keywords: dualism, embodiment, mind, neoliberalism

Procedia PDF Downloads 163
181 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic

Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak

Abstract:

Successful achievement of strategic goals requires an effective performance management system, i.e. determining the appropriate indicators measuring the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they invested in, it is vital to identify the key performance indicators that contribute to value creation. These indicators are known as value drivers. Based on the undertaken literature search, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement. Non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies which are not publicly traded. Therefore, financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is the fact that they are calculated based on accounting data, while accounting rules may differ considerably across different environments. For successful enterprise performance management, it is vital to avoid factors that may reduce (or even destroy) its value. Among the known factors reducing enterprise value are the lack of capital, the lack of a strategic management system and poor quality of production. In order to gain further insight into the topic, the paper presents results of the research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology focuses on both the qualitative and the quantitative aspects of the topic. The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the Analysis Major Database for European Sources (AMADEUS). The questionnaire prompted managers to list factors which negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research – an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and by detailed sub-analyses of additional criteria. These include company-specific characteristics, such as size and ownership structure. The evaluation also included a comparison of the managers’ opinions and the performance of their enterprises – measured by return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and the disregard of customers' requirements.
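
The comparison of managers' opinions with enterprise performance described above rests on profitability ratios and non-parametric significance tests. A minimal sketch of that kind of comparison, using hypothetical figures and the Mann-Whitney U test as one possible non-parametric test (the paper does not specify which tests were applied):

```python
# Illustrative sketch only: hypothetical figures, not AMADEUS data.
from scipy import stats

# Profitability ratios referred to in the study
def return_on_equity(net_income, equity):
    return net_income / equity

def return_on_assets(net_income, total_assets):
    return net_income / total_assets

# ROE of enterprises whose managers did / did not list a given factor (hypothetical)
roe_factor_listed     = [0.04, 0.06, 0.02, 0.05, 0.03, 0.07, 0.01]
roe_factor_not_listed = [0.09, 0.11, 0.08, 0.12, 0.10, 0.07, 0.13]

# One possible non-parametric test of the difference between the two groups
u_stat, p_value = stats.mannwhitneyu(roe_factor_listed, roe_factor_not_listed,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```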

Keywords: business value, financial ratios, performance measurement, value drivers

Procedia PDF Downloads 222
180 Waste Burial to the Pressure Deficit Areas in the Eastern Siberia

Authors: L. Abukova, O. Abramova, A. Goreva, Y. Yakovlev

Abstract:

Important executive decisions on oil and gas production stimulation in Eastern Siberia have recently been taken. There are unique and large fields of oil, gas, and gas-condensate in Eastern Siberia. The Talakan, Koyumbinskoye, Yurubcheno-Tahomskoye, Kovykta, and Chayadinskoye fields are supposed to be developed first. This will result in an abrupt increase in the environmental load on the nature of Eastern Siberia. In Eastern Siberia, the introduction of ecological imperatives in hydrocarbon production is still realistic. Underground water movement is one of the most important factors in managing the condition of ecosystems. Oil and gas production is associated with the forced displacement of huge water masses and the mixing of waters of different composition and origin, which determines the extent of the anthropogenic impact on water drive systems and their protective reaction. An extensive hydrogeological system of the depression type is identified in the pre-salt deposits here. Pressure relief here is steady up to the basement. The decrease of the hydrodynamic potential towards the basement with such a gradient resulted in the reformation of the fields in the process of the historical (geological) development of the Nepsko-Botuobinskaya anteclise. The depression hydrodynamic systems are characterized by extremely high isolation and can only exist under such closed conditions. The steady nature of water movement, due to a strictly negative gradient of reservoir pressure, makes it quite possible to use environmentally harmful liquid substances instead of water. Disposal of the most hazardous wastes is most expedient in the deposits of the crystalline basement, in certain structures distant from oil and gas fields. The time period for the storage of environmentally harmful liquid substances may be calculated on geological time scales, ensuring that they are completely prevented from being released into the environment or the air, even during strong earthquakes. The disposal of wastes from the chemical and nuclear industries is a matter of special consideration. The existing methods of storage and disposal of wastes are very expensive. The methods currently applied for the storage of nuclear wastes at depths of several meters, even in the most durable containers, constitute a potential danger. The enormous size of the depression system of the Nepsko-Botuobinskaya anteclise makes it possible to easily identify such objects at depths below 1500 m, where nuclear wastes can be stored indefinitely without any environmental impact. Thus, the water drive system of the Nepsko-Botuobinskaya anteclise is the ideal object for large-volume injection of environmentally harmful liquid substances even if there are large oil and gas accumulations in the subsurface. The specific geological and hydrodynamic conditions of the system allow the production of hydrocarbons from the subsurface simultaneously with the disposal of industrial wastes from the oil and gas, mining, chemical, and nuclear industries without any environmental impact.

Keywords: Eastern Siberia, formation pressure, underground water, waste burial

Procedia PDF Downloads 259
179 Air Pollution on Stroke in Shenzhen, China: A Time-Stratified Case Crossover Study Modified by Meteorological Variables

Authors: Lei Li, Ping Yin, Haneen Khreis

Abstract:

Stroke was the second leading cause of death and the third leading cause of death and disability worldwide in 2019. Given the significant role of environmental factors in stroke development and progression, it is essential to investigate the effect of air pollution on stroke occurrence while considering the modifying effects of meteorological variables. This study aimed to evaluate the association between short-term exposure to air pollution and the incidence of stroke subtypes in Shenzhen, China, and to explore the potential interactions of meteorological factors with air pollutants. The study analyzed data from January 1, 2006, to December 31, 2014, including 88,214 cases of ischemic stroke and 30,433 cases of hemorrhagic stroke among residents of Shenzhen. Using a time-stratified case–crossover design with conditional quasi-Poisson regression, the study estimated the percentage changes in stroke morbidity associated with short-term exposure to nitrogen dioxide (NO₂), sulfur dioxide (SO₂), particulate matter less than 10 μm in aerodynamic diameter (PM10), carbon monoxide (CO), and ozone (O₃). A five-day moving average of air pollution was applied to capture the cumulative effects of air pollution. The estimates were further stratified by sex, age, education level, and season. The additive and multiplicative interactions between air pollutants and meteorological variables were assessed by the relative excess risk due to interaction (RERI) and by adding an interaction term to the main model, respectively. The study found that NO₂ was positively associated with ischemic stroke occurrence throughout the year and in the cold season (November through April), with a stronger effect observed among men. Each 10 μg/m³ increment in the five-day moving average of NO₂ was associated with a 2.38% (95% confidence interval: 1.36% to 3.41%) increase in the risk of ischemic stroke over the whole year and a 3.36% (2.04% to 4.69%) increase in the cold season. The harmful effect of CO on ischemic stroke was observed only in the cold season, with each 1 mg/m³ increment in the five-day moving average of CO increasing the risk by 12.34% (3.85% to 21.51%). There was no statistically significant additive interaction between individual air pollutants and temperature or relative humidity, as demonstrated by the RERI. The interaction term in the model showed a multiplicative antagonistic effect between NO₂ and temperature (p-value=0.0268). For hemorrhagic stroke, no evidence of the effects of any individual air pollutants was found in the whole population. However, the RERI indicated a statistically significant additive and multiplicative interaction of temperature with the effects of PM10 and O₃ on hemorrhagic stroke onset. Therefore, the insignificant result should be interpreted with caution. The study suggests that environmental NO₂ and CO might increase the morbidity of ischemic stroke, particularly during the cold season. These findings could help inform policy decisions aimed at reducing air pollution levels to prevent stroke and other health conditions. Additionally, the study provides valuable insights into the interaction between air pollution and meteorological variables, which underscores the need for further research into the complex relationship between environmental factors and health.
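
The percentage changes and the RERI quoted above follow from standard formulas. A short sketch of both calculations; the regression coefficient below is back-calculated from the reported 2.38% estimate purely for illustration, and the relative risks passed to the RERI function are hypothetical:

```python
# Sketch of the two quantities reported in the abstract; numbers are illustrative.
import math

# Percentage change in ischemic-stroke risk per 10 ug/m3 increment of NO2,
# derived from a (quasi-)Poisson regression coefficient beta (per 1 ug/m3).
beta_no2 = 0.00235          # assumed value, back-calculated from the reported 2.38%
increment = 10.0            # ug/m3
pct_change = (math.exp(beta_no2 * increment) - 1.0) * 100.0
print(f"Percent change per {increment:.0f} ug/m3 NO2: {pct_change:.2f}%")

# Relative excess risk due to interaction (RERI) on the additive scale:
# RERI = RR11 - RR10 - RR01 + 1, where RR11 is the joint-exposure relative risk.
def reri(rr_both, rr_pollutant_only, rr_temperature_only):
    return rr_both - rr_pollutant_only - rr_temperature_only + 1.0

print(f"RERI = {reri(1.40, 1.15, 1.20):.2f}")   # hypothetical relative risks
```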

Keywords: air pollution, meteorological variables, interactive effect, seasonal pattern, stroke

Procedia PDF Downloads 88
178 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is characterized as a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure and also select discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure with a variety of different topology, material and dimensional alternatives is generated. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subject to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), in this way gradually refining the solution space up to the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, where a new topology, materials and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution. Otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality due to the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as mass optimization of steel buildings, cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
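
A highly simplified sketch of the discrete/continuous alternation described above: the discrete choice of a standard section (the role of the MILP master) is solved here by plain enumeration over a small hypothetical catalogue, and the continuous subproblem by a local NLP solver; the linearizations, equality relaxation and convexity tests of the actual OA/ER algorithm are omitted.

```python
# Simplified illustration only: enumeration stands in for the MILP master problem,
# and a local NLP solver handles the continuous subproblem for each fixed choice.
# All numbers are illustrative, not values from the cited optimization models.
from scipy.optimize import minimize

# Hypothetical catalogue of standard sections: (name, area in cm^2, cost in EUR/m)
catalogue = [("S-1", 10.0, 18.0), ("S-2", 16.0, 27.0), ("S-3", 24.0, 40.0)]

N_ED = 450.0      # design tensile force in kN (illustrative)
F_YD = 23.5       # design strength in kN/cm^2 (~235 MPa)
SPAN = 6.0        # member length in m
PLATE_COST = 3.0  # cost of an added cover plate per cm^2 of area and per m of length

def nlp_subproblem(section_area):
    """Continuous subproblem: smallest extra plate area satisfying the stress check."""
    objective = lambda a_plate: PLATE_COST * SPAN * a_plate[0]
    stress_ok = {"type": "ineq",
                 "fun": lambda a_plate: section_area + a_plate[0] - N_ED / F_YD}
    res = minimize(objective, x0=[5.0], bounds=[(0.0, 50.0)], constraints=[stress_ok])
    return res.x[0], res.fun

best = None
for name, area, cost_per_m in catalogue:          # "master" problem by enumeration
    a_plate, plate_cost = nlp_subproblem(area)    # NLP for the fixed discrete choice
    total_cost = cost_per_m * SPAN + plate_cost
    if best is None or total_cost < best[1]:
        best = (name, total_cost, a_plate)

print(f"Best alternative: {best[0]}, cost {best[1]:.1f} EUR, plate area {best[2]:.2f} cm^2")
```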

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 46
177 Integrative-Cyclical Approach to the Study of Quality Control of Resource Saving by the Use of Innovation Factors

Authors: Anatoliy A. Alabugin, Nikolay K. Topuzov, Sergei V. Aliukov

Abstract:

It is well known that when a quantitative evaluation of the quality control of some economic processes (in particular, resource saving) is carried out with the help of innovation factors, there are three groups of problems: high uncertainty of the quality-management indicators, their considerable ambiguity, and the high costs of providing large-scale research. These problems are defined by the contradictory objectives of enhancing quality control in accordance with innovation factors and preserving the economic stability of the enterprise. Such factors are felt most acutely in countries lagging behind the developed economies of the world according to the criteria of innovativeness and effectiveness of resource-saving management. In our opinion, the following two methods for reconciling the above-mentioned objectives and reducing the conflict between the problems solve this task most effectively: 1) the use of paradigms and concepts of evolutionary improvement of the quality of resource-saving management in the cycle "from the project of an innovative product (technology) to its commercialization and the updating of the parameters of customer value"; 2) the application of the so-called integrative-cyclical approach, which is consistent with the complexity and type of the concept, allowing a quantitative assessment of the stages of achieving the consistency of these objectives (from a baseline of imbalance, through their compromise, to the achievement of positive synergies). For implementation, the following mathematical tools are included in the integrative-cyclical approach: index-factor analysis (to identify the most relevant factors); regression analysis of the relationship between quality control and the factors; the use of the results of the analysis in a fuzzy-set model (to adjust the feature space); and methods of non-parametric statistics (for a decision on the completion or repetition of the cycle in the approach, depending on the focus and the closeness of the connection of the indicator ranks of the disbalance of purposes). The repetition is performed after a partial substitution of technical and technological ("hard") factors by management ("soft") factors in accordance with our proposed methodology. Testing of the proposed approach has shown that, in comparison with world practice, there are opportunities to improve the quality of resource-saving management using innovation factors. We believe that the implementation of this promising research, to provide consistent management decisions for reducing the severity of the above-mentioned contradictions and increasing the validity of the choice of resource-development strategies in terms of the parameters of quality management and enterprise sustainability, has good prospects. Our existing experience in the field of resource-saving quality management and the achieved level of scientific competence of the authors allow us to hope that the use of the integrative-cyclical approach to the study and evaluation of the resulting and factor indicators will help raise the level of resource-saving characteristics up to the values existing in developed economies of the post-industrial type.

Keywords: integrative-cyclical approach, quality control, evaluation, innovation factors, economic sustainability, innovation cycle of management, disbalance of goals of development

Procedia PDF Downloads 245
176 Data Science/Artificial Intelligence: A Possible Panacea for Refugee Crisis

Authors: Avi Shrivastava

Abstract:

In 2021, two heart-wrenching scenes, shown live on television screens across countries, painted a grim picture of refugees. One of them was of people clinging onto an airplane's wings in their desperate attempt to flee war-torn Afghanistan. They ultimately fell to their deaths. The other scene was of U.S. government authorities separating children from their parents or guardians to deter migrants/refugees from coming to the U.S. These events show the desperation refugees feel when they are trying to leave their homes in disaster zones. However, data paints a grave picture of the current refugee situation. It also indicates that a bleak future lies ahead for refugees across the globe. Data and information are the two threads that intertwine to weave the shimmery fabric of modern society. Data and information are often used interchangeably, but they differ considerably. For example, information analysis reveals rationale and logic, while data analysis reveals patterns. Moreover, patterns revealed by data can enable us to create the necessary tools to combat the huge problems on our hands. Data analysis paints a clear picture so that the decision-making process becomes simple. Geopolitical and economic data can be used to predict future refugee hotspots. Accurately predicting the next refugee hotspots will allow governments and relief agencies to prepare better for future refugee crises. The refugee crisis does not have binary answers. Given the emotionally wrenching nature of the ground realities, experts often shy away from realistically stating things as they are. This hesitancy can cost lives. When decisions are based solely on data, emotions can be removed from the decision-making process. Data also presents irrefutable evidence and tells whether there is a solution or not. Moreover, it can respond to a nonbinary crisis with a binary answer. Because of all this, it becomes easier to tackle a problem. Data science and A.I. can predict future refugee crises. With the recent explosion of data due to the rise of social media platforms, data, and the insights derived from it, have solved many social and political problems. Data science can also help solve many issues refugees face while staying in refugee camps or in their adopted countries. This paper looks into various ways data science can help solve refugee problems. A.I.-based chatbots can help refugees seek legal assistance in finding asylum in the country they want to settle in. These chatbots can also connect them to a marketplace of people willing to help. Data science and technology can also help solve refugees' many problems, including food, shelter, employment, security, and assimilation. The refugee problem seems to be one of the most challenging for social and political reasons. Data science and machine learning can help prevent the refugee crisis and solve or alleviate some of the problems that refugees face in their journey to a better life. With the explosion of data in the last decade, data science has made it possible to solve many geopolitical and social issues.

Keywords: refugee crisis, artificial intelligence, data science, refugee camps, Afghanistan, Ukraine

Procedia PDF Downloads 72
175 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition

Authors: Tamir Michal

Abstract:

The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment. From a legal point of view, this subject has offered a launchpad for creative judicial decisions. Thus, for example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jew appealing the refusal of the State to recognize the Petitioner's entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouin to establish permanent urban settlements, an interest which justifies giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the purpose of constructing a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouin to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to spread out protective structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, arguendo, no selective discrimination was involved in the State's decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the theory of recognition. The article analyses the issue using the tools of recognition theory, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights. By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitude to others. Consequently, even if the Court's decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation, whereas Bedouin villages were not, is a setback in the struggle to make the Bedouin citizens with equal rights in Israeli society. As the Court held, ‘Beyond their protective function, the Migunit [Protective Structures] may make a moral and psychological contribution that should not be undervalued’. This contribution is one that the Bedouin did not receive in the Abu Afash verdict. The basic thesis is that the Court's verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence it can in fact prejudice them. Therefore, elements of recognition theory should be added, in order to find the channel for cognitive dignity, thereby advancing the Bedouins' ability to perceive themselves as equal human beings in Israeli society.

Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development

Procedia PDF Downloads 350
174 Analysis of Lesotho Wool Production and Quality Trends 2008-2018

Authors: Papali Maqalika

Abstract:

Lesotho farmers produce significant quantities of Merino wool of a quality competitive on the global market and make a substantial impact on the economy of Lesotho. However, even with this economic contribution, the production and quality information and trends of this fibre have been neither recognised nor documented. This is a sombre shortcoming, as Lesotho wool is unknown on international markets. The situation is worsened by the fact that Lesotho wool is auctioned together with South African wool; trading and benchmarking Lesotho wool are therefore difficult, not to mention attempts to advance its production and quality. Based on the information above, available data on Lesotho wool for 10 years were collected and analysed for trends to be used in benchmarking where applicable. The fibre properties analysed include fibre diameter (fineness), vegetable matter and yield, application and price. These were selected because they are fundamental in determining fibre quality and price. Production of wool in Lesotho has increased slightly over the ten years covered by this study. It also became apparent that production and quality trends of Lesotho wool are greatly influenced by the farming practices, breed of sheep and climatic conditions. Greater adoption of the Merino sheep breed, sheds/barns and sheep coats is suggested as a way to reduce the mortality rate (due to extremely cold temperatures), to reduce the vegetable matter on the fibre, thus improving the quality, and to increase the yield per sheep and production as a whole. Some farming practices, such as the lack of barns, supplementary feeding and veterinary care, present constraints on wool production. The districts in the Highlands region were found to have the highest production of mostly wool, this being ascribed to better pastures, climatic, social and other conditions conducive to wool production. The production of Lesotho wool and its quality can be improved further, possibly because of the interventions the Ministry of Agriculture introduced through the Small Agricultural and Development Project (SADP) and other appropriate initiatives by the National Wool and Mohair Growers Association (NWMGA). The challenge, however, remains the lack of direct involvement of the wool growers (farmers) in decision making and policy development; this potentially influences adoption and may lead to reluctance to adopt the strategies. In some cases, the wool growers do not receive the benefits associated with the interventions immediately. Based on these findings, it is recommended that the relevant educators and researchers in wool and textile science, as well as the local wool farmers in Lesotho, be represented in policy and other decision-making forums relating to these interventions. In this way, educational campaigns and training workshops will be demand-driven, with a better chance of adoption and success. This is because the direct beneficiaries will have been involved at inception, and they will have a sense of ownership as well as the intent to see them through successfully.

Keywords: Lesotho wool, wool quality, wool production, Lesotho economy, global market, apparel wool, database, textile science, exports, animal farming practices, intimate apparel, interventions

Procedia PDF Downloads 90
173 Monitoring the Responses to Nociceptive Stimuli During General Anesthesia Based on Electroencephalographic Signals in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway (LMA)

Authors: Ofelia Loani Elvir Lazo, Roya Yumul, Sevan Komshian, Ruby Wang, Jun Tang

Abstract:

Background: Monitoring the anti-nociceptive drug effect is useful because a sudden and strong nociceptive stimulus may result in untoward autonomic responses and muscular reflex movements. Monitoring the anti-nociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists with information regarding a patient’s level of antinociception and to preclude any untoward autonomic responses and reflexive muscular movements from painful stimuli intraoperatively. To this end, electroencephalogram (EEG) based tools including BIS and qCON were designed to provide information about the depth of sedation, while qNOX was produced to inform on the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. Methods: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria included American Society of Anesthesiologists (ASA) class I-III, 18 to 80 years of age, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student’s t-distribution, prediction probability (PK), and ANOVA were used to statistically compare the relative ability to detect nociceptive stimuli for each index. Twenty patients were included in the preliminary analysis. Results: A comparison of overall intraoperative BIS, qCON and qNOX indices demonstrated no significant difference between the three measures (N=62, p>0.05). Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in a preliminary analysis (N=20, p=0.0408). Notably, certain hemodynamic measurements demonstrated a significant increase in response to painful stimuli (MAP increased from 74±13 mm Hg at baseline to 84±18 mm Hg during noxious stimuli [p=0.032], and HR from 76±12 BPM at baseline to 80±13 BPM during noxious stimuli [p=0.078], respectively). Conclusion: In this observational study, BIS and qCON/qNOX provided comparable information on patients’ level of sedation throughout the course of an anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation to an imminent response to stimulation relative to all other indices.
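
Prediction probability (PK), as used above, is in one common formulation a pairwise concordance measure: the probability that the index correctly ranks two observations with different observed states, with ties in the index counted as one half. A minimal sketch on hypothetical values (not study recordings):

```python
# Sketch of one common formulation of the prediction probability (PK) statistic.
# The index values and response labels below are hypothetical, not study data.
from itertools import combinations

def prediction_probability(index_values, responded):
    """PK for a binary observed state (responded to stimulus: True/False)."""
    concordant = ties = informative_pairs = 0
    for (x_i, r_i), (x_j, r_j) in combinations(zip(index_values, responded), 2):
        if r_i == r_j:
            continue                      # pairs with the same state are uninformative
        informative_pairs += 1
        # Expect the "responded" member of the pair to have the higher index value
        hi, lo = (x_i, x_j) if r_i else (x_j, x_i)
        if hi > lo:
            concordant += 1
        elif hi == lo:
            ties += 1
    return (concordant + 0.5 * ties) / informative_pairs

qnox  = [62, 70, 55, 48, 66, 58, 73, 50]                         # hypothetical index values
moved = [True, True, False, False, True, True, False, False]     # hypothetical responses
print(f"PK = {prediction_probability(qnox, moved):.2f}")
```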

Keywords: antinociception, bispectral index (BIS), general anesthesia, laryngeal mask airway, qCON/qNOX

Procedia PDF Downloads 92
172 Data Quality on Regular Childhood Immunization Programme at Degehabur District: Somali Region, Ethiopia

Authors: Eyob Seife

Abstract:

Immunization is a life-saving intervention which prevents needless suffering through sickness, disability, and death. Emphasis on data quality and use will become even stronger with the development of the Immunization Agenda 2030 (IA2030). Quality of data is a key factor in generating reliable health information that enables monitoring progress, financial planning, vaccine forecasting capacities, and making decisions for continuous improvement of the national immunization program. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remains critical and challenging, especially in hard-to-reach and pastoralist areas. The Degehabur district was selected based on the hypothesis that 'there is no difference in reported and recounted immunization data consistency.' Data quality is dependent on different factors, among which organizational, behavioral, technical, and contextual factors are the ones mentioned. A cross-sectional quantitative study was conducted in September 2022 in the Degehabur district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers, and reporting documents were reviewed at 5 health facilities (2 health centers and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) was assessed, and accuracy ratios were formulated for: the first and third doses of pentavalent vaccines, fully immunized (FI) children, and the first dose of measles-containing vaccines (MCV). In this study, facility-level results showed that both over-reporting and under-reporting were observed at health posts when computing the accuracy ratio of the tally sheets to the health post reports found at health centers for almost all antigens verified, where pentavalent 1 was 88.3%, 60.4%, and 125.6% for health posts A, B, and C, respectively. For the first dose of measles-containing vaccines (MCV), similarly, the accuracy ratio was found to be 126.6%, 42.6%, and 140.9% for health posts A, B, and C, respectively. The accuracy ratio for fully immunized children also showed 0% for health posts A and B and 100% for health post C. A relatively better accuracy ratio was seen at health centers, where the first pentavalent dose was 97.4% and 103.3% for health centers A and B, while the first dose of measles-containing vaccines (MCV) was 89.2% and 100.9% for health centers A and B, respectively. The quality index (QI) of all facilities also showed results between a maximum of 33.33% and a minimum of 0%. Most of the verified immunization data accuracy ratios were found to be relatively better at the health center level. However, the quality of the monitoring system is poor at all levels, in addition to poor data accuracy at all health posts. Attention should therefore be given to improving the capacity of staff and the quality of the monitoring system components, namely recording, reporting, archiving, data analysis, and using information for decisions at all levels, especially in pastoralist areas, where such study findings need to be acted on, in addition to improving the data quality at the root and health post levels.
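
The accuracy ratio reported above is the DQS-style verification factor: doses recounted from the source documents divided by doses reported upward, expressed as a percentage. A minimal sketch with hypothetical counts (not the study's records):

```python
# Sketch of the DQS-style verification factor ("accuracy ratio") used above.
# Values above 100% suggest under-reporting (more doses in the tally sheets than
# reported), values below 100% over-reporting. Counts below are hypothetical.
def accuracy_ratio(recounted_doses, reported_doses):
    return 100.0 * recounted_doses / reported_doses

# Pentavalent 1 at one health post over a 12-month period (illustrative counts)
recounted = 412   # doses recounted from the tally sheets
reported = 466    # doses in the reports submitted to the health center
print(f"Penta 1 accuracy ratio: {accuracy_ratio(recounted, reported):.1f}%")
```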

Keywords: accuracy ratio, Degehabur District, regular childhood immunization program, quality of monitoring system, Somali Region-Ethiopia

Procedia PDF Downloads 107
171 Tip60’s Novel RNA-Binding Function Modulates Alternative Splicing of Pre-mRNA Targets Implicated in Alzheimer’s Disease

Authors: Felice Elefant, Akanksha Bhatnaghar, Keegan Krick, Elizabeth Heller

Abstract:

Context: The severity of Alzheimer’s Disease (AD) progression involves an interplay of genetics, age, and environmental factors orchestrated by histone acetyltransferase (HAT) mediated neuroepigenetic mechanisms. While disruption of Tip60 HAT action in neural gene control is implicated in AD, alternative mechanisms underlying Tip60 function remain unexplored. Altered RNA splicing has recently been highlighted as a widespread hallmark of the AD transcriptome that is implicated in the disease. Research Aim: The aim of this study was to identify a novel RNA binding/splicing function for Tip60 in the human hippocampus and to determine whether this function is impaired in brains from AD fly models and AD patients. Methodology/Analysis: The authors performed RNA immunoprecipitation using RNA isolated from 200 pooled wild-type Drosophila brains for each of the 3 biological replicates. To identify Tip60’s RNA targets, they performed genome sequencing (DNB-Sequencing™ technology, BGI Genomics) on 3 replicates for input RNA and RNA IPs by Tip60. Findings: The authors' transcriptomic analysis of RNA bound to Tip60 by Tip60-RNA immunoprecipitation (RIP) revealed Tip60 RNA targets enriched for critical neuronal processes implicated in AD. Remarkably, 79% of Tip60’s RNA targets overlap with its chromatin gene targets, supporting a model by which Tip60 orchestrates bi-level transcriptional regulation at both the chromatin and the RNA level, a function unprecedented for any HAT to date. Since RNA splicing occurs co-transcriptionally and splicing defects are implicated in AD, the authors investigated whether Tip60-RNA targeting modulates splicing decisions and whether this function is altered in AD. Replicate multivariate analysis of transcript splicing (rMATS) of RNA-Seq data sets from wild-type and AD fly brains revealed a multitude of mammalian-like alternative splicing (AS) defects. Strikingly, over half of these altered RNAs were bona fide Tip60 RNA targets enriched in the curated AD-gene database, with some AS alterations prevented by increasing Tip60 in the fly brain. Importantly, human orthologs of several Tip60-modulated spliced genes in Drosophila are well-characterized aberrantly spliced genes in human AD brains, implicating disruption of Tip60’s splicing function in AD pathogenesis. Theoretical Importance: The authors' findings support a novel RNA interaction and splicing regulatory function for Tip60 that may underlie the AS impairments that hallmark AD etiology. Data Collection: The authors collected data from RNA immunoprecipitation experiments using RNA isolated from 200 pooled wild-type Drosophila brains for each of the 3 biological replicates. They also performed genome sequencing (DNB-Sequencing™ technology, BGI Genomics) on 3 replicates for input RNA and RNA IPs by Tip60. Questions: The question addressed by this study was whether Tip60 has a novel RNA binding/splicing function in the human hippocampus and whether this function is impaired in brains from AD fly models and AD patients. Conclusions: The authors' findings support a novel RNA interaction and splicing regulatory function for Tip60 that may underlie the AS impairments that hallmark AD etiology.

Keywords: Alzheimer's disease, cognition, aging, neuroepigenetics

Procedia PDF Downloads 76
170 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water use in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim could be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment, assessing plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), controlling for changes, and mapping irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review structured around a survey of about 100 recent research studies, analyzing varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) used to control for changes. The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the reasons for and magnitudes of the errors arising from different approaches across the three proposed parts, as reported by recent studies. Additionally, the overview conclusion attempts to decompose the different approaches into optimized indices, calibration methods for the sensors, thresholding and prediction models prone to error, and improvements in classification accuracy for mapping changes.
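As one illustration of the threshold-based scheduling logic this review surveys, the sketch below flags an irrigation event when soil water content falls below a refill point derived from an allowed-depletion fraction. The field capacity, wilting point, allowed depletion, and sensor readings are hypothetical placeholders, not values from the reviewed studies.

```python
# Hedged sketch of threshold-based irrigation scheduling.
# Field capacity, wilting point, and allowed depletion are hypothetical values.

FIELD_CAPACITY = 0.32      # volumetric soil water content (m3/m3), assumed
WILTING_POINT = 0.15       # assumed
ALLOWED_DEPLETION = 0.45   # irrigate when 45% of plant-available water is depleted, assumed

def needs_irrigation(theta: float) -> bool:
    """Return True when soil water content has dropped below the refill point."""
    available = FIELD_CAPACITY - WILTING_POINT
    refill_point = FIELD_CAPACITY - ALLOWED_DEPLETION * available
    return theta < refill_point

# Hypothetical daily soil moisture readings from a sensor or retrieval algorithm
for day, theta in enumerate([0.31, 0.29, 0.27, 0.25, 0.23], start=1):
    print(f"day {day}: theta={theta:.2f} -> irrigate={needs_irrigation(theta)}")
```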

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 71
169 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the ‘Internet of Things (IoT)’ has brought sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor network consists of several layers: a physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; a data transmission layer, where data and instruction exchanges happen; and a data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In a future large-scale sensor network, the collected data will be far too large for traditional applications to send, store or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this process may instead occur on a field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network. For example, in a water level monitoring system, a weather forecast can be obtained from external sources and, if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate, or switch them to sleeping mode otherwise. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor network for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors in a network as a case study, a vision of the next generation of smart sensor network is proposed. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
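The simple-thresholding and forecast-driven data-fusion behaviour described above can be sketched as follows. The alert level, sampling intervals, and rainfall trigger are hypothetical, and the statistical and MoPBAS methods discussed in the paper are not reproduced here; this is a minimal on-board logic sketch only.

```python
# Hedged sketch of on-board smart sensing: simple threshold alerting plus
# forecast-driven adaptive sampling. All thresholds and intervals are hypothetical.

ALERT_LEVEL_M = 2.5          # river level (metres) that triggers an alert, assumed
NORMAL_INTERVAL_S = 900      # 15-minute sampling in normal conditions, assumed
STORM_INTERVAL_S = 60        # 1-minute sampling when heavy rain is forecast, assumed

def on_board_preprocess(level_m: float) -> dict:
    """Pre-process a raw reading locally and only flag an event when needed."""
    return {"level_m": level_m, "alert": level_m >= ALERT_LEVEL_M}

def choose_sampling_interval(heavy_rain_forecast: bool) -> int:
    """Data fusion with an external weather forecast adjusts the sampling rate."""
    return STORM_INTERVAL_S if heavy_rain_forecast else NORMAL_INTERVAL_S

# Hypothetical usage
reading = on_board_preprocess(2.7)
interval = choose_sampling_interval(heavy_rain_forecast=True)
print(reading, f"next sample in {interval} s")
```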

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 381
168 Multivariate Ecoregion Analysis of Nutrient Runoff From Agricultural Land Uses in North America

Authors: Austin P. Hopkins, R. Daren Harmel, Jim A Ippolito, P. J. A. Kleinman, D. Sahoo

Abstract:

Field-scale runoff and water quality data are critical to understanding the fate and transport of nutrients applied to agricultural lands and to minimizing their off-site transport, because it is at that scale that agricultural management decisions are typically made based on hydrologic, soil, and land use factors. However, regional influences such as precipitation, temperature, and prevailing cropping systems and land use patterns also impact nutrient runoff. In the present study, the recently updated MANAGE (Measured Annual Nutrient loads from Agricultural Environments) database was used to conduct an ecoregion-level analysis of nitrogen and phosphorus runoff from agricultural lands in North America. Annual N and P runoff loads for cropland and grasslands in North American Level II EPA ecoregions are presented, and the impact of factors such as land use, tillage, and fertilizer timing and placement on N and P runoff is analyzed. Specifically, we compiled annual N and P runoff load data (i.e., dissolved, particulate, and total N and P, kg/ha/yr) for each Level II EPA ecoregion and for various agricultural management practices (i.e., land use, tillage, fertilizer timing, fertilizer placement) within each ecoregion to showcase the analyses possible with the data in MANAGE. Potential differences in N and P runoff loads were evaluated between and within ecoregions with statistical and graphical approaches. Non-parametric analyses, mainly Mann-Whitney tests, were conducted in R on median values weighted by the site years of data, because the data were not normally distributed, and Dunn tests and box-and-whisker plots were used to visually and statistically evaluate significant differences. Of the 50 North American ecoregions, 11 had sufficient data and site years to be included in the analysis. When examining ecoregions alone, ER 9.2 Temperate Prairies had a significantly higher total N, at 11.7 kg/ha/yr, than ER 9.4 South Central Semi-Arid Prairies, with a total N of 2.4 kg/ha/yr. For total P, ER 8.5 Mississippi Alluvial and Southeast USA Coastal Plains had a higher load, at 3.0 kg/ha/yr, than ER 8.2 Southeastern USA Plains, with a load of 0.25 kg/ha/yr. Tillage and land use had pronounced impacts on nutrient loads. In ER 9.2 Temperate Prairies, conventional tillage had a total N load of 36.0 kg/ha/yr while conservation tillage had a total N load of 4.8 kg/ha/yr. In all relevant ecoregions, when corn was the predominant land use, total N levels significantly increased compared to grassland or other grains. In ER 8.4 Ozark-Ouachita, corn had a total N of 22.1 kg/ha/yr while grazed grassland had a total N of 2.9 kg/ha/yr. Further intricacies of the interactions that agricultural management practices have with one another, combined with ecological conditions, and their impacts on continental aquatic nutrient loads still need to be explored. This research provides a stepping stone to further understanding of land and resource stewardship and best management practices.
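For readers unfamiliar with the non-parametric comparison used here, the sketch below runs a Mann-Whitney test on total N loads from two ecoregions using SciPy. The load values are hypothetical and the site-year weighting described in the abstract is omitted for brevity; this is not the authors' analysis script.

```python
# Hedged sketch of the non-parametric ecoregion comparison (hypothetical data).
from scipy.stats import mannwhitneyu

# Hypothetical total N runoff loads (kg/ha/yr) by site year, not MANAGE data
er_9_2_temperate_prairies = [14.1, 9.8, 12.5, 10.9, 13.3, 11.0]
er_9_4_semiarid_prairies = [2.1, 3.0, 1.8, 2.7, 2.4, 2.9]

stat, p_value = mannwhitneyu(
    er_9_2_temperate_prairies,
    er_9_4_semiarid_prairies,
    alternative="two-sided",
)
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # a small p suggests differing median loads
```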

Keywords: water quality, ecoregions, nitrogen, phosphorus, agriculture, best management practices, land use

Procedia PDF Downloads 79
167 Ecological Relationships Between Material, Colonizing Organisms, and Resulting Performances

Authors: Chris Thurlbourne

Abstract:

Due to the continual demand for material to build with, and the limited environmental credentials of 'normal' building materials, there is a need to look at new and reconditioned material types - both biogenic and non-biogenic - and a field of research that accompanies this. This research development focuses on biogenic and non-biogenic material engineering and the impact of our environment on new and reconditioned material types. In our building industry and all the industries involved in constructing our built environment, building materials can be broadly categorized into two types: those with biogenic and those with non-biogenic material properties. Both play significant roles in shaping our built environment. Regardless of their properties, all material types originate from our earth, and many are modified through processing to provide resistance to 'forces of nature', be it rain, wind, sun, gravity, or whatever the local environmental conditions throw at us. Modifications are made to offer benefits in endurance, resistance, malleability in handling (building with), and ergonomic value in all types of building material. We assume control of all building materials through rigorous quality control specifications and regulations to ensure materials perform under specific constraints. Yet materials confront an external environment that is not controlled, with live forces undetermined, to which materials naturally act and react through weathering, patination and discoloring, and through natural chemical reactions such as rusting. The purpose of the paper is to present recent research that explores the after-life of specific new and reconditioned biogenic and non-biogenic material types, and how understanding materials' natural processes of transformation when exposed to the external climate can inform initial design decisions. Received in a transient and contingent manner, the ecological relationships between material, colonizing organisms and resulting performances invite opportunities for new design explorations for the benefit of both the needs of human society and the needs of our natural environment. The research designs for the benefit of both, engaging in biogenic and non-biogenic material engineering whilst embracing the continual demand for colonization - human and environmental - and the aptitude of a material to be colonized by one or several groups of living organisms without necessarily undergoing any severe deterioration, instead embracing weathering, patination and discoloring while establishing new habitat. The research follows iterative prototyping processes in which knowledge has been accumulated via explorations of specific material performances, from laboratory tests to construction mock-ups, focusing on the architectural qualities embedded in the control of production techniques and on facilitating longer-term patinas of material surfaces to extend the aesthetic beyond common judgments. Experiments are therefore focused on how inherent material qualities drive a design brief toward specific investigations exploring the aesthetics induced through production, patinas and colonization obtained over time while exposed to, and interacting with, external climate conditions.

Keywords: biogenic and non-biogenic, natural processes of transformation, colonization, patina

Procedia PDF Downloads 87
166 Developing a Product Circularity Index with an Emphasis on Longevity, Repairability, and Material Efficiency

Authors: Lina Psarra, Manogj Sundaresan, Purjeet Sutar

Abstract:

In response to the global imperative for sustainable solutions, this article proposes the development of a comprehensive circularity index applicable to a wide range of products across various industries. The absence of a consensus on a universal metric to assess circularity performance presents a significant challenge in prioritizing and effectively managing sustainable initiatives. This circularity index serves as a quantitative measure to evaluate the adherence of products, processes, and systems to the principles of a circular economy. Unlike traditional distinct metrics such as recycling rates or material efficiency, this index considers the entire lifecycle of a product in one single metric, also incorporating additional factors such as reusability, scarcity of materials, repairability, and recyclability. Through a systematic approach, and by reviewing existing metrics and past methodologies, this work aims to address this gap by formulating a circularity index that can be applied to a diverse product portfolio and assist in comparing the circularity of products on a scale of 0%-100%. Project objectives include developing a formula, designing and implementing a pilot tool based on the developed Product Circularity Index (PCI), evaluating the effectiveness of the formula and tool using real product data, and assessing the feasibility of integration into various sustainability initiatives. The research methodology involves an iterative process of comprehensive research, analysis, and refinement, where key steps include defining circularity parameters, collecting relevant product data, applying the developed formula, and testing the tool in a pilot phase to gather insights and make necessary adjustments. Major findings of the study indicate that the PCI provides a robust framework for evaluating product circularity across various dimensions. The Excel-based pilot tool demonstrated high accuracy and reliability in measuring circularity, and the database proved instrumental in supporting comprehensive assessments. The PCI facilitated the identification of key areas for improvement, enabling more informed decision-making towards circularity and benchmarking across different products, essentially assisting towards better resource management. In conclusion, the development of the Product Circularity Index represents a significant advancement in global sustainability efforts. By providing a standardized metric, the PCI empowers companies and stakeholders to systematically assess product circularity, track progress, identify improvement areas, and make informed decisions about resource management. This project contributes to the broader discourse on sustainable development by offering a practical approach to enhancing circularity within industrial systems, thus paving the way towards a more resilient and sustainable future.
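The paper does not disclose its formula, so the following is a purely illustrative sketch of how a 0%-100% index of this kind could be assembled: a weighted average of per-factor sub-scores for longevity, repairability, material efficiency, reusability, recyclability, and material scarcity. The factor names, weights, and example scores are hypothetical and are not the authors' PCI.

```python
# Purely illustrative circularity index: weighted average of sub-scores in [0, 1].
# Factor names and weights are hypothetical; they are NOT the authors' PCI formula.

WEIGHTS = {
    "longevity": 0.25,
    "repairability": 0.20,
    "material_efficiency": 0.20,
    "reusability": 0.15,
    "recyclability": 0.15,
    "material_scarcity": 0.05,   # higher score = less reliance on scarce materials
}

def circularity_index(scores: dict) -> float:
    """Return a 0-100% index from per-factor scores in [0, 1]."""
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return 100.0 * total / sum(WEIGHTS.values())

example_product = {
    "longevity": 0.7, "repairability": 0.5, "material_efficiency": 0.8,
    "reusability": 0.4, "recyclability": 0.9, "material_scarcity": 0.6,
}
print(f"Illustrative PCI: {circularity_index(example_product):.1f}%")
```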

Keywords: circular economy, circular metrics, circularity assessment, circularity tool, sustainable product design, product circularity index

Procedia PDF Downloads 28
165 Averting a Financial Crisis through Regulation, Including Legislation

Authors: Maria Krambia-Kapardis, Andreas Kapardis

Abstract:

The paper discusses regulatory and legislative measures implemented by various nations in an effort to avert another financial crisis. More specifically, to address the financial crisis, the European Commission followed the practice of other developed countries and implemented a European Economic Recovery Plan in an attempt to overhaul the regulatory and supervisory framework of the financial sector. In 2010 the Commission introduced the European Systemic Risk Board and in 2011 the European System of Financial Supervision. Some experts have argued that the type and extent of financial regulation introduced in Europe in the wake of the 2008 crisis has been excessive and counterproductive. In considering how different countries responded to the financial crisis, global regulators have shown a more focused commitment to combating industry misconduct and pre-empting abusive behavior. Regulators have also increased the funding and resources at their disposal; have increased regulatory fines, with an increasing trend towards action against individuals; and, finally, have focused on market abuse and market conduct issues. Financial regulation can be effected, first of all, through legislation. However, neither ex ante nor ex post regulation is by itself effective in reducing systemic risk. Consequently, to avert a financial crisis, in their endeavor to achieve both economic efficiency and financial stability, governments need to balance the two approaches to financial regulation. Fiduciary duty is another means by which the behavior of actors in the financial world is constrained and, thus, regulated. Furthermore, fiduciary duties extend over and above other existing requirements set out by statute and/or common law and cover allegations of breach of fiduciary duty, negligence or fraud. Careful analysis of the etiology of the 2008 financial crisis demonstrates the great importance of corporate governance as a way of regulating boardroom behavior. In addition, the regulation of professions, including accountants and auditors, plays a crucial role as far as the financial management of companies is concerned. In the US, the Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board in order to protect investors from financial accounting fraud. In most countries around the world, however, accounting regulation consists of a legal framework, international standards, education, and licensure. Accounting regulation is necessary because of the information asymmetry and the conflict of interest that exist between managers and users of financial information. If a holistic approach is to be taken, then one cannot ignore the regulation of legislators themselves, which can take the form of hard or soft legislation. The science of averting a financial crisis is yet to be perfected and this, as shown by the preceding discussion, is unlikely to be achieved in the foreseeable future, as ‘disaster myopia’ may be reduced but will not be eliminated. It is easier, of course, to be wise in hindsight, and regulating unreasonably risky decisions and unethical or outright criminal behavior in the financial world remain major challenges for governments, corporations, and professions alike.

Keywords: financial crisis, legislation, regulation, financial regulation

Procedia PDF Downloads 398
164 Methodological Approach to the Elaboration and Implementation of the Spatial-Urban Plan for the Special Purpose Area: Case-Study of Infrastructure Corridor of Highway E-80, Section Nis-Merdare, Serbia

Authors: Nebojsa Stefanovic, Sasa Milijic, Natasa Danilovic Hristic

Abstract:

The spatial plan for a special purpose area constitutes a basic tool in the planning of a highway infrastructure corridor. The aim of the plan is to define the planning basis and provide spatial conditions for the construction and operation of the highway, as well as for developing other infrastructure systems in the corridor. This paper presents a methodology and approach to the preparation of the Spatial Plan for the special purpose area of the infrastructure corridor of highway E-80, Section Niš-Merdare in Serbia. The applied methodological approach is based on the combined application of integrative and participatory methods in the decision-making process on the sustainable development of the highway corridor. It was found that, for the planning and management of the infrastructure corridor, a key problem is the coordination of spatial and urban planning, strategic environmental assessment, and sectoral traffic planning and design. Through the development of the plan, special attention is focused on increasing the accessibility of the local and regional surroundings, reducing adverse impacts on the development of settlements and the economy, protecting natural resources and natural and cultural heritage, and developing other infrastructure systems in the highway corridor. As a result of the applied methodology, this paper analyzes basic features such as coverage, the concept, protected zones, service facilities and objects, and the rules of development and construction. Special emphasis is placed on the methodology and results of the Strategic Environmental Assessment of the Spatial Plan and on the importance of protection measures, with particular significance given to air and noise protection measures. For evaluation in the Strategic Environmental Assessment, a multicriteria expert evaluation (a semi-quantitative method) of planned solutions was used in relation to the set of goals and relevant indicators, based on a basic set of indicators of sustainable development. The evaluation of planned solutions encompassed the significance and size of impacts, the spatial conditions and probability of the impact of planned solutions on the environment, and the defined goals of the strategic assessment. The framework for the implementation of the Spatial Plan is presented, which provides for the simultaneous elaboration of planning solutions at two levels: the strategic level of the spatial plan and the detailed urban plan level. The relationship of the Spatial Plan to other planning documents applicable to the planning area is also analyzed. The effects of this methodological approach relate to enabling integrated planning of the sustainable development of the highway infrastructure corridor and its surrounding area, through the coordination of spatial, urban and sectoral traffic planning and design, as well as the participation of all key actors in the adoption and implementation of planned decisions. The conclusions of the paper point to directions for further research, particularly in terms of harmonizing the methodology of planning documentation and the preparation of technical design documentation.
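A semi-quantitative multicriteria evaluation of this kind can be sketched as a goal-by-solution scoring matrix. The goals, weights, scoring scale, and scores below are hypothetical placeholders standing in for the expert judgments made in the plan's strategic environmental assessment; they do not reproduce the plan's actual indicators.

```python
# Hedged sketch of a semi-quantitative multicriteria evaluation.
# Goals, weights, and expert scores are hypothetical placeholders.

goals = {
    "air quality": 0.3,
    "noise protection": 0.25,
    "settlement accessibility": 0.25,
    "cultural heritage": 0.2,
}

# Expert scores per planned solution on a -2..+2 scale (negative = adverse impact)
planned_solutions = {
    "alignment variant 1": {"air quality": -1, "noise protection": -1,
                            "settlement accessibility": 2, "cultural heritage": 0},
    "alignment variant 2": {"air quality": 0, "noise protection": -2,
                            "settlement accessibility": 1, "cultural heritage": 1},
}

for name, scores in planned_solutions.items():
    weighted = sum(goals[g] * scores[g] for g in goals)
    print(f"{name}: weighted impact score {weighted:+.2f}")
```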

Keywords: corridor, environment, highway, impact, methodology, spatial plan, urban

Procedia PDF Downloads 212
163 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
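As one concrete illustration of the fraud-detection pattern described above, the sketch below flags anomalous transactions with an isolation forest. The feature set, contamination rate, and transaction values are hypothetical and stand in for whatever engineered features a real accounting system would use; this is a minimal sketch, not a production fraud model.

```python
# Hedged sketch: unsupervised anomaly flagging of transactions (hypothetical data).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per transaction: [amount, hour_of_day, days_since_vendor_added]
transactions = np.array([
    [120.0, 10, 400], [95.5, 11, 380], [110.0, 14, 420], [105.0, 9, 415],
    [98.0, 13, 390], [9800.0, 3, 2],   # an unusually large, off-hours payment
])

model = IsolationForest(contamination=0.15, random_state=0)
labels = model.fit_predict(transactions)   # -1 = flagged as anomalous, 1 = normal

for row, label in zip(transactions, labels):
    status = "REVIEW" if label == -1 else "ok"
    print(f"amount={row[0]:>8.2f} hour={int(row[1]):>2} -> {status}")
```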

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 71
162 Negotiating Communication Options for Deaf-Disabled Children

Authors: Steven J. Singer, Julianna F. Kamenakis, Allison R. Shapiro, Kimberly M. Cacciato

Abstract:

Communication and language are topics frequently studied among deaf children. However, there is limited research that focuses specifically on the communication and language experiences of Deaf-Disabled children. In this ethnography, researchers investigated the language experiences of six sets of parents with Deaf-Disabled children who chose American Sign Language (ASL) as the preferred mode of communication for their child. Specifically, the researchers were interested in the factors that influenced the parents’ decisions regarding their child’s communication options, educational placements, and social experiences. Data collection in this research included 18 hours of semi-structured interviews, 20 hours of participant observations, over 150 pages of reflexive journals and field notes, and a 2-hour focus group. The team conducted constant comparison qualitative analysis using NVivo software and an inductive coding procedure. The four researchers each read the data several times until they were able to chunk it into broad categories about communication and social influences. The team compared the various categories they developed, selecting ones that were consistent among researchers and redefining categories that differed. Continuing to use open inductive coding, the research team refined the categories until they were able to develop distinct themes. Two team members developed each theme through a process of independent coding, comparison, discussion, and resolution. The research team developed three themes: 1) early medical needs provided time for the parents to explore various communication options for their Deaf-Disabled child, 2) without intervention from medical professionals or educators, ASL emerged as a prioritized mode of communication for the family, 3) atypical gender roles affected familial communication dynamics. While managing the significant health issues of their Deaf-Disabled child at birth, families and medical professionals were so fixated on tending to the medical needs of the child that the typical pressures of determining a mode of communication were deprioritized. This allowed the families to meticulously research various methods of communication, resulting in an informed, rational, and well-considered decision to use ASL as the primary mode of communication with their Deaf-Disabled child. It was evident that having a Deaf-Disabled child meant an increased amount of labor and responsibilities for parents. This led to a shift in the roles of the family members. During the child’s development, the mother transformed from fulfilling the stereotypical roles of nurturer and administrator to that of administrator and champion. The mother facilitated medical proceedings and educational arrangements while the father became the caretaker and nurturer of their Deaf-Disabled child in addition to the traditional role of earning the family’s primary income. Ultimately, this research led to a deeper understanding of the critical role that time plays in parents’ decision-making process regarding communication methods with their Deaf-Disabled child.

Keywords: American Sign Language, deaf-disabled, ethnography, sociolinguistics

Procedia PDF Downloads 120
161 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets

Authors: Basiru Amuneni

Abstract:

Astronomy is one domain experiencing a rapid rise in data. Traditional tools for data management have been employed in the quest for knowledge discovery. However, these traditional tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is the use of scientific visualisation. The aim of the work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualize multiwavelength datasets. Game engines are primarily used for developing video games; however, their advanced graphics could be exploited for scientific visualization, which provides a means to graphically illustrate scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by telescopes several times at different electromagnetic wavelengths to build a more comprehensive picture of the physical characteristics of the galaxy. Visualising this in an immersive environment would be more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity game engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails the act of ‘using design as a research method or technique’. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation and VR display. The FITS data format cannot be read by the Unity game engine directly, so a DLL (CSHARPFITS), which provides native support for reading and writing FITS files, was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity 3D game engine. The cleaned FITS images are input to the galaxy modeller pipeline phase, which has a pre-processing script that extracts pixels, computes galaxy world positions, and colour-maps the FITS image pixels. The user can visualise galaxy images in different light bands, control the blend of the image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools to realise complex workflows for public outreach, and possibly scientific work, with increased scalability, near real-time interactivity and ease of access. The application is presented in an immersive environment and can use any commercially available headset built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in/out, and change the colour gradients of the galaxy. The findings and design lessons learnt in the implementation of different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.
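The FITS pre-processing step (read pixels, clean them, scale them for colour mapping) can be illustrated with the short Python sketch below using astropy. The actual pipeline does this inside Unity via the CSHARPFITS DLL; the file name, HDU index, and percentile clipping here are assumptions made for illustration.

```python
# Illustrative Python analogue of the FITS pre-processing step described above.
# The actual pipeline uses the CSHARPFITS DLL inside Unity; the file name,
# HDU index, and percentile clipping below are assumptions.
import numpy as np
from astropy.io import fits

def load_galaxy_pixels(path, hdu_index=0, clip=(1.0, 99.0)):
    """Read a FITS image, clip outliers, and normalise pixels to [0, 1]."""
    with fits.open(path) as hdul:
        data = hdul[hdu_index].data.astype(float)
    data = np.nan_to_num(data)
    lo, hi = np.percentile(data, clip)                      # robust display range
    scaled = np.clip((data - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return scaled                                           # ready for a colour map / 3D engine

if __name__ == "__main__":
    pixels = load_galaxy_pixels("galaxy_band_r.fits")       # hypothetical file name
    print(pixels.shape, pixels.min(), pixels.max())
```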

Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality

Procedia PDF Downloads 91
160 Environmental Impact of a New-Build Educational Building in England: Life-Cycle Assessment as a Method to Calculate Whole Life Carbon Emissions

Authors: Monkiz Khasreen

Abstract:

In the context of the global trend towards reducing new buildings' carbon footprints, the design team is required to make early decisions that have a major influence on embodied and operational carbon. Sustainability strategies should be clear during the early stages of the building design process, as changes made later can be extremely costly. Life-Cycle Assessment (LCA) could be used as the vehicle to carry other tools and processes towards achieving the requested improvement. Although LCA is the 'gold standard' for evaluating buildings from 'cradle to grave', the lack of detail available at concept design makes LCA very difficult, if not impossible, to use as an estimation tool at early stages. Issues related to the transparency and accessibility of information in the building industry are also affecting the credibility of LCA studies. A verified database derived from LCA case studies needs to be accessible to researchers, design professionals, and decision makers in order to offer guidance on specific areas of significant impact. This database could be built up from data from multiple sources within a pool of research held in this context. One of the most important factors affecting the reliability of such data is the temporal factor, as building materials, components, and systems are rapidly changing with the advancement of technology, making production more efficient and less environmentally harmful. Recent LCA studies on different building functions, types, and structures are therefore always needed to update databases derived from research and to form case bases for comparison studies. There is also a need to make these studies transparent and accessible to designers. The work in this paper sets out to address this need. This paper presents a life-cycle case study of a new-build educational building in England. The building utilised very current construction methods and technologies and is rated BREEAM Excellent. Carbon emissions of different life-cycle stages and different building materials and components were modelled. Scenario and sensitivity analyses were used to estimate the future of new educational buildings in England. The study attempts to form an indicator for use during the early design stages of similar buildings. Carbon dioxide emissions of this case study building, when normalised according to floor area, lie towards the lower end of the range of worldwide data reported in the literature. Sensitivity analysis shows that life-cycle assessment results are highly sensitive to future assumptions made at the design stage, such as changes in the electricity generation structure over time, refurbishment processes and recycling. The analyses also prove that large savings in carbon dioxide emissions can result from very small changes at the design stage.
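The kind of sensitivity to future electricity decarbonisation mentioned above can be sketched as follows. The embodied carbon, annual electricity demand, floor area, study period, and grid intensity trajectory are hypothetical placeholders, not the case-study building's data; the sketch only shows how whole-life carbon per square metre responds to an assumed decarbonisation scenario.

```python
# Hedged sketch: whole-life carbon per m2 under a declining grid-intensity scenario.
# All numbers below are hypothetical placeholders, not the case-study values.

EMBODIED_KGCO2E = 1_500_000      # embodied carbon to practical completion, assumed
ANNUAL_ELECTRICITY_KWH = 400_000 # operational electricity demand, assumed
FLOOR_AREA_M2 = 5_000            # gross internal floor area, assumed
STUDY_PERIOD_YEARS = 60
GRID_INTENSITY_Y0 = 0.20         # kgCO2e/kWh today, assumed
ANNUAL_DECARBONISATION = 0.03    # 3% reduction per year, assumed scenario

def whole_life_carbon_per_m2() -> float:
    """Sum embodied carbon plus year-by-year operational carbon, normalised by floor area."""
    operational = 0.0
    intensity = GRID_INTENSITY_Y0
    for _ in range(STUDY_PERIOD_YEARS):
        operational += ANNUAL_ELECTRICITY_KWH * intensity
        intensity *= (1.0 - ANNUAL_DECARBONISATION)
    return (EMBODIED_KGCO2E + operational) / FLOOR_AREA_M2

print(f"Whole-life carbon: {whole_life_carbon_per_m2():.0f} kgCO2e/m2")
```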

Keywords: architecture, building, carbon dioxide, construction, educational buildings, England, environmental impact, life-cycle assessment

Procedia PDF Downloads 112
159 Analyzing the Websites of Institutions Publishing Global Rankings of Universities: A Usability Study

Authors: Nuray Baltaci, Kursat Cagiltay

Abstract:

University rankings, which can be seen as a relatively new topic, are at the center of attention and are followed closely by different parties. Students are interested in university rankings in order to make informed decisions when selecting their candidate future universities. University administrators and academicians can use them to see and evaluate their universities’ relative performance compared to other institutions in terms of, but not limited to, academic, economic, and international outlook issues. Local institutions may use these ranking systems, as TUBITAK (The Scientific and Technological Research Council of Turkey) and YOK (Council of Higher Education) do in Turkey, to support students and award scholarships when they apply for undergraduate and graduate studies abroad. Given that ranking systems concern this many different parties, the importance of ranking institutions having clear, easy-to-use, and well-designed websites becomes apparent. In this paper, a usability study of the websites of four global university ranking institutions, namely the Academic Ranking of World Universities (ARWU), Times Higher Education, QS, and University Ranking by Academic Performance (URAP), was conducted. A user-based approach was adopted, and usability tests were conducted with 10 graduate students at Middle East Technical University in Ankara, Turkey. Before performing the formal usability tests, a pilot study was completed and the necessary changes were made to the study settings. Participants’ demographics, task completion times, paths traced to complete tasks, and their satisfaction levels on each task and website were collected. Based on the analyses of the collected data, the ranking websites were compared in terms of the efficiency, effectiveness and satisfaction dimensions of usability, as defined in ISO 9241-11. Results showed that none of the selected ranking websites is superior to the others in terms of the overall effectiveness and efficiency of the website. However, one notable result was that the highest average task completion times for two of the designed tasks belong to the Times Higher Education Rankings website. Evaluation of user satisfaction on each task and each website produced slightly different but rather similar results. When the satisfaction levels of the participants on each task are examined, the highest scores belong to the ARWU and URAP websites. The overall satisfaction levels of the participants for each website showed that the URAP website has the highest score, followed by the ARWU website. In addition, design problems and powerful design features of those websites reported by the participants are presented in the paper. Since the study mainly addresses the design problems of the URAP website, the focus is on that website. Participants reported three main design problems with the website: the unaesthetic and unprofessional design style of the website, an improper map location on ranking pages, and improper listing of the field names on the field ranking page.

Keywords: university ranking, user-based approach, website usability, design

Procedia PDF Downloads 397