Search results for: external resource
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4423

313 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language

Authors: Ghazal Faraj, András Micsik

Abstract:

Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, defines constraints as RDF graphs, named "shape graphs". These shape graphs validate other Resource Description Framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string-matching patterns, value types, and other constraints. Moreover, SHACL supports higher-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL comprises two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes the shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of data reconciliation mechanisms, as linking different datasets is an ongoing process. The conventional validation methods are semantic reasoners and SPARQL queries. The former check formalization errors and data type inconsistencies, while the latter detect data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects of linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) Conceptual Reference Model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both the source and target ontologies was required.
Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls into multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is assigned to xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, inspecting whether a person's birthdate occurred before the creation dates of any linked events. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
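The data type check described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' actual SHACL shape graph (a real setup would run a SHACL engine such as pySHACL over RDF graphs); the subjects and the simplified triple representation are assumptions, while the property and datatype names follow the example in the abstract.

```python
# Minimal sketch of a SHACL-style datatype constraint check.
# A tiny "data graph": (subject, predicate, object_value, object_datatype).
DATA_GRAPH = [
    ("ex:person1", "crm:P82a_begin_of_the_begin", "1821-04-09T00:00:00", "xsd:dateTime"),
    ("ex:person2", "crm:P82a_begin_of_the_begin", "around 1850", "xsd:string"),  # violates the shape
]

# A "shape" constraining the datatype of a property's values (cf. sh:path / sh:datatype).
BIRTHDATE_SHAPE = {"path": "crm:P82a_begin_of_the_begin", "datatype": "xsd:dateTime"}

def validate(data_graph, shape):
    """Return a validation report: one entry per triple violating the shape."""
    report = []
    for subject, predicate, _value, dtype in data_graph:
        if predicate == shape["path"] and dtype != shape["datatype"]:
            report.append((subject, f"expected {shape['datatype']}, got {dtype}"))
    return report

violations = validate(DATA_GRAPH, BIRTHDATE_SHAPE)
print(violations)  # only ex:person2 is flagged
```

A SHACL engine generalizes this idea: each shape graph bundles many such constraint components, and the validation report lists every focus node that fails one of them.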

Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping

Procedia PDF Downloads 232
312 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has recently become apparent. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their delivery (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating, there is a growing need to participate in events whose content is not a sports competition but a computer game challenge – e-sport. The literature in this area has so far focused on determining the number of e-sport fans, with elements of simple statistical description (mainly concerning demographic characteristics such as age, gender, and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality, and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper understanding of the determinants of the behavioral patterns with which customers select digitalized services, which, in the absence of available large data sets, can be achieved by using econometric simulations – multi-agent modeling. The cognitive aim of the study is to reveal the internal and external determinants of customers' behavioral patterns, taking into account various scenarios of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed identifying a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The transition probabilities were estimated using the Method of Simulated Moments.
The estimation of the agent-based model parameters and a sensitivity analysis reveal the crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of the games and sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and recognition level of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive the required participation history as inaccessible. In this case, customers are prone to switch to new methods of service delivery – in the case of e-sport vs. sports, to new content and more modern methods of its delivery. In a wider context, the findings of the paper support the idea of a life cycle of services regarding the methods of their delivery, from "traditional" to digitalized.
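The role the entry barrier plays across the three phases can be sketched with a toy agent-based step. All parameter values below (barrier heights, the familiarity trait, agent counts) are invented for illustration; the paper estimates its transition probabilities with the Method of Simulated Moments rather than assuming them.

```python
import random

PHASES = ["initial_interest", "standardization", "full_professionalization"]
# Hypothetical entry barriers per phase: high at first, lower during
# standardization, high again once participation history becomes inaccessible.
ENTRY_BARRIER = {"initial_interest": 0.8, "standardization": 0.4, "full_professionalization": 0.7}

def simulate_adoption(n_agents, phase, rng):
    """Count agents adopting e-sport spectating in a given market phase.

    Each heterogeneous agent draws a familiarity trait (rules of games /
    sports disciplines) and adopts when it exceeds the phase's entry barrier.
    """
    adopted = 0
    for _ in range(n_agents):
        familiarity = rng.uniform(0.0, 1.0)
        if familiarity > ENTRY_BARRIER[phase]:
            adopted += 1
    return adopted

rng = random.Random(42)
for phase in PHASES:
    # Adoption peaks in the standardization phase, where the barrier is lowest.
    print(phase, simulate_adoption(10_000, phase, rng))
```

A full model would additionally let agents carry participation history and environmental attributes, and would fit the barrier parameters to observed moments instead of fixing them.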

Keywords: agent-based modeling, digitalized services, e-sport, spectators' motives

Procedia PDF Downloads 148
311 A Study of the Atlantoaxial Fracture or Dislocation in Motorcyclists with Helmet Accidents

Authors: Shao-Huang Wu, Ai-Yun Wu, Meng-Chen Wu, Chun-Liang Wu, Kai-Ping Shaw, Hsiao-Ting Chen

Abstract:

Objective: To analyze the forensic autopsy data of known passengers, compare it with the national autopsy report database for 2017, and identify the special patterned injuries, which can be used as a reference for the reconstruction of hit-and-run motor vehicle accidents. Methods: We analyzed the items of the Motor Vehicle Accident Report, including date of accident, time occurred, day, accident severity, accident location, accident class, collision with vehicle, motorcyclist codes, safety equipment use, etc., and the items of the Autopsy Report, including general description, clothing and valuables, external examination, head and neck trauma, trunk trauma, other injuries, internal examination, associated items, autopsy determinations, etc. Materials: Case 1. The process of injury formation: the car collided with the scooter from behind. The passenger, wearing a helmet, fell to the ground. The helmet was crushed under the bottom of the sedan, and the bottom of the sedan was raised. Additionally, the sedan was hit on the left by another sedan from behind, causing the front sedan to turn 180 degrees on the spot. The passenger's head was rotated, and the cervical spine was fractured. Injuries: 1. Fracture of the atlantoaxial joint; 2. Fracture of the left clavicle, scapula, and proximal humerus; 3. Fracture of left ribs 1-10 and right ribs 2-7 with lung contusion and hemothorax; 4. Fracture of the transverse processes of lumbar vertebrae 2-5; 5. Comminuted fracture of the right femur; 6. Suspected subarachnoid and subdural hemorrhage; 7. Laceration of the spleen. Case 2. The process of injury formation: the motorcyclist, wearing a helmet, fell to the left by himself, and his chest was crushed by a car going straight. Only his upper body was under the car, and the helmet finally fell off. Injuries: 1. Dislocation of the atlantoaxial joint; 2. Laceration on the left posterior occiput; 3. Laceration on the left frontal region; 4. Laceration on the left side of the chin; 5. Strip bruising on the anterior neck; 6. Open rib fracture of the right chest wall; 7. Comminuted fractures of ribs 1-12 bilaterally; 8. Fracture of the sternum; 9. Rupture of the left lung; 10. Rupture of the left and right atria, the heart apex, and several large vessels; 11. Near-transection of the aortic root; 12. Severe rupture of the liver.
Results: The common features of the two cases were fracture or dislocation of the atlantoaxial joint, and in both cases the helmet was crushed. In the 2017 national autopsy report database, there were no atlantoaxial fractures or dislocations among 27 pedestrians (who wore no helmets) struck in motor vehicle accidents, but there were two atlantoaxial fracture or dislocation cases in the database, both of which involved falls from height. Conclusion: The cervical spine fracture of a motorcyclist who was wearing a helmet is very likely to be a patterned injury caused by his/her fall and rollover under a sedan. It could provide a reference for forensic peers.

Keywords: patterned injuries, atlantoaxial fracture or dislocation, accident reconstruction, motorcycle accident with helmet, forensic autopsy data

Procedia PDF Downloads 60
310 Urban Enclaves Caused by Migration: Little Aleppo in Ankara, Turkey

Authors: Sezen Aslan, N. Aydan Sat

Abstract:

Twenty-first-century society constantly faces complex otherness that emerges in various forms and with various justifications. Otherness caused by class, race, or ethnicity is inevitably reflected in urban areas, and in this way, cities are divided into totally self-centered and closed-off urban enclaves. One of the most important dynamics creating otherness in contemporary society is migration. Immigration on an international scale is one of the most important phenomena reshaping the world, and the number of immigrants is increasing day by day. Forced migration and refugee statuses constitute the major part of countries' immigration policies and practices. Domestic problems such as racism, violence, war, censorship and silencing, attitudes contrary to human rights, and different cultural or religious identities cause populations to migrate. Immigration is one of the most important reasons for the formation of urban enclaves within cities. Turkey, which used to experience mainly outward migration, has begun to host immigrant groups from foreign countries. The 1980s were the turning point on this issue, as a result of internal disturbances in the Middle East. After Iranian, Iraqi, and Afghan immigrants, Turkey is facing the largest external migration in its history with the Syrian population. Turkey has been hosting approximately three million Syrians since the Syrian Civil War, which started in 2011. 92% of Syrian refugees currently live in different urban areas in Turkey rather than in camps. Syrian refugees are experiencing a spontaneous spatiality due to the country's lack of specific settlement and housing policies. This spontaneity is one of the most important factors in the creation of urban enclaves. From this point of view, the aim of this study is to clarify the processes that lead to the creation of urban enclaves and to explain the socio-spatial effects of these urban enclaves on the other parts of the cities.
Ankara, one of the provinces hosting the most registered Syrians in Turkey, is selected as the case study area. About 55% of Ankara's Syrian population lives in the Altındağ district. They settled specifically in two neighborhoods of the Altındağ district, named Önder and Ulubey. These neighborhoods are old slum areas, and they were being evacuated for urban renewal at the same time as the migration of the Syrians. Before the demolition of these old slums, Syrians settled in them as tenants. The first part of the study gives a brief explanation of the concept of the urban enclave, the parameters of its occurrence, its possible socio-spatial threats, and examples of earlier immigrant urban enclaves caused by internal migration. The emergence of slums, the planning history, and the social processes in the case study area are described in the second part of the study. The third part focuses on the Syrian refugees and their socio-spatial relationships in the case study area; in-depth interviews with refugees and spatial analyses are carried out. Suggestions for the future of the case study area and recommendations to protect immigrant groups from social and spatial exclusion are discussed in the concluding part of the study.

Keywords: migration, immigration, Syrian refugees, urban enclaves, Ankara

Procedia PDF Downloads 181
309 A Valid Professional Development Framework for Supporting Science Teachers in Relation to Inquiry-Based Curriculum Units

Authors: Fru Vitalis Akuma, Jenna Koenen

Abstract:

The science education community is increasingly calling for learning experiences that mirror the work of scientists. Although inquiry-based science education is aligned with these calls, implementing this strategy is a complex and daunting task for many teachers. Thus, policymakers and researchers have noted the need for continued teacher Professional Development (PD) in the enactment of inquiry-based science education, coupled with effective ways of reaching the goals of teacher PD. This is a complex problem for which educational design research is suitable. Our purpose at this stage of the design research is to develop a generic PD framework that is valid as the blueprint of a PD program for supporting science teachers in relation to inquiry-based curriculum units. The seven components of the framework are the goal, learning theory, strategy, phases, support, motivation, and an instructional model. Based on a systematic review of the literature on effective (science) teacher PD, coupled with developer screening, we have generated one design principle per component of the PD framework. For example, as per the associated design principle, the goal of the framework is to provide science teachers with experiences in authentic inquiry, coupled with enhancing their competencies in the adoption, customization, and design, and then the classroom implementation and revision, of inquiry-based curriculum units. The seven design principles have allowed us to synthesize the PD framework, which, together with the design principles, constitutes the preliminary outcome of the current research. We are in the process of evaluating the content and construct validity of the framework, based on nine one-on-one interviews with experts in inquiry-based classrooms and teacher learning. To this end, we have developed an interview protocol with the input of eight such experts in South Africa and Germany.
Using the protocol, the expert appraisal of the PD framework will involve three experts each from Germany, South Africa, and Cameroon. These countries, where we originate and/or work, provide a variety of inquiry-based science education contexts, making them suitable for the evaluation of the generic PD framework. Based on the evaluation, we will revise the framework and its seven design principles to arrive at the final outcomes of the current research. While the final content- and construct-valid version of the framework will serve as an example of how effective inquiry-based science teacher PD may be achieved, the final design principles will be useful to researchers transforming the framework for use in any specific educational context. For example, in our further research, we will transform the framework into one that is practical and effective in supporting inquiry-based practical work in resource-constrained physical sciences classrooms in South Africa. Researchers in other educational contexts may similarly build on the final framework and design principles in their work. Thus, our final outcomes will inform practice and research worldwide on supporting teachers to incorporate more learning experiences that mirror the work of scientists.

Keywords: design principles, educational design research, evaluation, inquiry-based science education, professional development framework

Procedia PDF Downloads 130
308 Monitoring of Rice Phenology and Agricultural Practices from Sentinel 2 Images

Authors: D. Courault, L. Hossard, V. Demarez, E. Ndikumana, D. Ho Tong Minh, N. Baghdadi, F. Ruget

Abstract:

In the context of global change, efficient management of available resources has become one of the most important topics, particularly for sustainable crop development. Timely assessment with high precision is crucial for water resource and pest management. Rice cultivated in the Camargue region of Southern France faces a dual challenge: reducing soil salinity by flooding while at the same time reducing the amount of herbicides that negatively impact the environment. This context has led farmers to diversify their crop rotations and agricultural practices. The objective of this study was to evaluate this diversity, both in cropping systems and in the agricultural practices applied to rice paddies, in order to quantify the impact on the environment and on crop production. The proposed method is based on the combined use of crop models and multispectral data acquired by the recent Sentinel 2 satellite sensors launched by the European Space Agency (ESA) within the framework of the Copernicus program. More than 40 images at fine spatial resolution (10 m in the optical range) were processed for 2016 and 2017 (with a revisit time of 5 days) to map crop types using the random forest method and to estimate biophysical variables (leaf area index, LAI) retrieved by inversion of the PROSAIL canopy radiative transfer model. Thanks to the high revisit frequency of Sentinel 2 data, it was possible to monitor the soil tillage before flooding and the second sowing performed by some farmers to better control weeds. The temporal trajectories of the remote sensing data were analyzed for various rice cultivars to define the main parameters describing the phenological stages, which are useful for calibrating two crop models (STICS and SAFY). Results were compared to surveys conducted on 10 farms. A large variability of LAI was observed at the farm scale (up to 2-3 m²/m²), which induced a significant variability in the simulated yields (up to 2 t/ha). Land use observations were also collected on more than 300 fields.
Various maps were produced: land use, LAI, flooding and sowing dates, and harvest dates. Together, these maps allow a new typology for classifying these paddy cropping systems to be proposed. Key phenological dates can be estimated by inverse procedures and were validated against ground surveys. The proposed approach allowed the years to be compared and anomalies to be detected. The methods proposed here can be applied to different crops in various contexts and confirm the potential of remote sensing acquired at fine resolution, such as by the Sentinel 2 system, for agricultural applications and environmental monitoring. This study was supported by the French national center for space studies (CNES) through the TOSCA program.
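The extraction of key phenological dates from an LAI temporal trajectory can be sketched in a few lines. The dates, LAI values, and the emergence threshold below are invented for illustration (the study retrieves LAI by PROSAIL inversion and calibrates STICS/SAFY against it); the sketch only shows the kind of trajectory analysis involved.

```python
from datetime import date

# Hypothetical per-field LAI series from successive Sentinel 2 acquisitions:
# (acquisition date, LAI in m²/m²).
lai_series = [
    (date(2017, 4, 20), 0.1), (date(2017, 5, 10), 0.2),
    (date(2017, 5, 30), 0.9), (date(2017, 6, 19), 2.1),
    (date(2017, 7, 9), 3.0), (date(2017, 7, 29), 2.6),
    (date(2017, 8, 18), 1.4),
]

def key_dates(series, emergence_threshold=0.5):
    """Return (emergence date, date of maximum LAI) from a (date, LAI) series."""
    emergence = next(d for d, lai in series if lai >= emergence_threshold)
    peak = max(series, key=lambda item: item[1])[0]
    return emergence, peak

emergence, peak = key_dates(lai_series)
print(emergence, peak)  # 2017-05-30 2017-07-09
```

With a 5-day revisit, the real series is much denser, and smoothing or model fitting would typically precede this kind of date extraction.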

Keywords: agricultural practices, remote sensing, rice, yield

Procedia PDF Downloads 251
307 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers

Authors: Pawel Martynowicz

Abstract:

Vibrations are a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges, high buildings, etc., which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (i.e., a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard 'ground-hook' law and passive tuned vibration absorber implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the open-loop control so determined suffers from a lack of robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces, or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development, as well as numerical and experimental verification, of Pontryagin-maximum-principle-based vibration control concepts that directly produce the actuator control input (not the demanded force); thus, a force-tracking algorithm, which introduces control inaccuracy, is entirely avoided. These concepts, including one-step optimal control, quasi-optimal control, and an optimal-based modified 'ground-hook' law, can be directly implemented in online, real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient, or random disturbances, which is a limitation of some other known solutions.
No offline calculation, assumption about the excitations/disturbances, or determination of the vibration frequency is necessary; moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique, so the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared with other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system and addressing multiple operational aspects, such as minimization of the amplitude of the deflection and acceleration of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., the electric current in the MR damper coil), and/or the stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency: the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems, these values exceed 3.5.
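The benchmark 'ground-hook' law mentioned above is, in its standard on-off form, simple enough to sketch. This is the generic textbook law, not the author's optimal-based modification; the coil current limits and the velocity values are assumptions for illustration.

```python
# Generic on-off 'ground-hook' law for a semiactive MR damper (illustrative).
I_MIN, I_MAX = 0.0, 1.0  # assumed MR damper coil current limits [A]

def ground_hook_current(v_structure, v_relative):
    """Command maximum damping when the primary structure's velocity and the
    relative velocity across the damper have the same sign (i.e., the damper
    force then opposes the structure's motion); otherwise command minimum."""
    return I_MAX if v_structure * v_relative > 0 else I_MIN

print(ground_hook_current(0.3, 0.1))   # same sign -> maximum current
print(ground_hook_current(0.3, -0.1))  # opposite signs -> minimum current
```

The optimal-based concepts in the paper replace this switching rule with a control input derived from the maximum principle, but the semiactive constraint (the damper can only dissipate energy) is the same in both cases.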

Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines

Procedia PDF Downloads 101
306 Crisis Management and Corporate Political Activism: A Qualitative Analysis of Online Reactions toward Tesla

Authors: Roxana D. Maiorescu-Murphy

Abstract:

In the US, corporations have recently embraced political stances in an attempt to respond to the external pressure exerted by activist groups. To date, research in this area remains in its infancy, and few studies have examined how stakeholder groups respond to corporate political advocacy in general, and in the immediate aftermath of such a corporate announcement in particular. The current study aims to fill this research void. In addition, the study contributes to an emerging trajectory in the field of crisis management by focusing on the delineation between crises (unexpected events related to products and services) and scandals (crises that spur moral outrage). The present study looked at online reactions in the aftermath of Elon Musk's endorsement of the Republican party on Twitter. Two data sets were collected from Twitter following two political endorsements made by Elon Musk, on May 18, 2022, and June 15, 2022, respectively. The total sample stemming from the two data sets consisted of N=1,374 user comments written in response to Musk's initial tweets. Given the paucity of studies in the preceding research areas, the analysis employed a case study methodology, which is used when the phenomena to be studied have not been researched before. Following the case study methodology, which answers the questions of how and why a phenomenon occurs, this study addressed the research questions of how online users perceived Tesla and why they did so. The data were analyzed in NVivo using the grounded theory methodology, which implied multiple exposures to the text and an inductive-deductive approach. Through multiple exposures to the data, the researcher ascertained the common themes and subthemes in the online discussion. Each theme and subtheme was then defined and labeled. Additional exposures to the text ensured that these were exhaustive.
The results revealed that the CEO's political endorsements triggered moral outrage, leading Tesla to face a scandal as opposed to a crisis. The moral outrage revolved around the stakeholders' predominant rejection of a perceived intrusion by an influential figure into a domain reserved for voters. As expected, Musk's political endorsements led to polarizing opinions, and those who opposed his views engaged in online activism aimed at boycotting the Tesla brand. These findings reveal that the moral outrage that characterizes a scandal requires communication practices that differ from those that practitioners currently borrow from the field of crisis management. Specifically, because scandals flourish in online settings, practitioners should regularly monitor stakeholder perceptions and address them in real time. While promptness is essential when managing crises, it becomes crucial to respond immediately while a scandal is unfolding online. Finally, attempts should be made to distance a brand, its products, and its CEO from the latter's political views.
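Once comments have been coded into themes and subthemes, tallying their frequency is straightforward. The comments and theme labels below are invented toy data, purely to illustrate the tallying step; they are not drawn from the study's N=1,374 Twitter sample, and the substantive coding work happens in the qualitative analysis itself.

```python
from collections import Counter

# Hypothetical coded comments: each entry lists the themes assigned to one comment.
coded_comments = [
    ["moral_outrage", "boycott"],
    ["support"],
    ["moral_outrage"],
    ["moral_outrage", "boycott"],
    ["support"],
]

# Tally how often each theme was assigned across the sample.
theme_counts = Counter(theme for comment in coded_comments for theme in comment)
for theme, count in theme_counts.most_common():
    print(theme, count)
```

In practice, tools such as NVivo report these frequencies directly; the point is only that theme prevalence, not just presence, underpins claims like "moral outrage predominated."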

Keywords: crisis management, communication management, Tesla, corporate political activism, Elon Musk

Procedia PDF Downloads 63
305 Effects of Abiotic Stress on the Phytochemical Content and Bioactivity of Pistacia lentiscus L.

Authors: S. Mamoucha, N. Tsafantakis, Α. Ioannidis, S. Chatzipanagiotou, C. Nikolaou, L. Skaltsounis, N. Fokialakis, N. Christodoulakis

Abstract:

Introduction: Plant secondary metabolites (SM) can be grouped into three chemically distinct classes: terpenes, phenolics, and nitrogen-containing compounds. For many years, the adaptive significance of SM was unknown; they were thought to be functionless end-products. Currently, it is accepted that many secondary metabolites (also known as natural products) have important ecological roles in plants. For instance, they serve as attractants (odor, color, taste) for pollinators and seed-dispersing animals. Moreover, they protect plants from herbivores, microbial pathogens, and environmental stress (high and low temperatures, drought, alkalinity, salinity, radiation, etc.). It is well known that both biotic and abiotic stress often increase the accumulation of SM. Local climatic conditions, seasonal changes, and external factors such as light, temperature, and humidity affect the biosynthesis and composition of secondary metabolites. A well-known dioecious evergreen plant, Pistacia lentiscus L. (mastic tree), was selected in order to study the metabolic variations that occur in response to different climate conditions, due to seasonal variation, and their effect on the biosynthesis of bioactive compounds. Materials and methods: Young and mature leaves were collected in January and July 2014, dried, and extracted by accelerated solvent extraction (Dionex ASE™ 350) using solvents of increasing polarity (DCM, MeOH, and H2O). GC-MS and UHPLC-HRMS analyses were carried out to define the nature and relative abundance of the SM. The antibacterial activity was evaluated using the agar disc diffusion assay against ATCC and clinical isolate strains: Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, Candida albicans, Streptococcus mutans, and Klebsiella pneumoniae. All tests were carried out in duplicate, and the average radii of the inhibition zones were calculated for each extract.
Results: According to the phytochemical profile obtained for each extract, the biosynthesis of SM varied both qualitatively and quantitatively under the two different types of seasonal stress. With the exception of the biologically inactive nonpolar DCM extract of July, all extracts inhibited the growth of most of the investigated microorganisms. A clear positive correlation was observed between the relative abundance of SM and the bioactivity of the DCM extracts of January and July. The changes observed during the phytochemical analysis mainly concerned the triterpenoid content. On the other hand, the bioactivity of the polar extracts (MeOH and H2O) of January and July remained practically unchanged against most of the microorganisms, despite the significant seasonal variation in SM content. Conclusion: Our results clearly confirmed the hypothesis that abiotic stress is an important regulating factor that significantly affects the biosynthesis of secondary metabolites and thus the presence of bioactive compounds. Acknowledgment: This work was supported by IKY - State Scholarship Foundation, Athens, Greece.

Keywords: antibacterial screening, phytochemical profile, Pistacia lentiscus, abiotic stress

Procedia PDF Downloads 222
304 Technology in Commercial Law Enforcement: Tanzania, Canada, and Singapore Comparatively

Authors: Katarina Revocati Mteule

Abstract:

The background of this research arises from global demands for fair business opportunities. As one of the responses to these demands, nations embarked on reforms of their commercial laws. In the 1990s, Tanzania resorted to economic transformation through liberalization to attract more investment, which included reform of commercial law enforcement. This research scrutinizes the effectiveness of the reforms in Tanzania in comparison with Canada and Singapore, and the role of technology. The methodology used is doctrinal legal research combined with international comparative legal research. It involves comparative analysis of library, online, and internet resources as well as case law and statutory law. Tanzania, Canada, and Singapore are sampled as comparators based on their distinct levels of economic development. The criteria of analysis include the nature of the reforms, the type of technology, the technological infrastructure, and the technical competence of human resources in each country. As the world progresses towards reforms in commercial law, improvements in legal, policy, and regulatory frameworks are paramount. Specifically, commercial laws are essential in contract enforcement and dispute resolution, and how they cope with modern technologies is a concern. Harnessing the best technology is necessary to cope with modern world business. In line with this, Tanzania is improving its business environment, including law enforcement mechanisms that are supportive of investment. Reforms such as specialized commercial law enforcement, coupled with alternative dispute resolution mechanisms such as arbitration, mediation, and reconciliation, are emphasized. Court technology is one of the reform tools given high priority. This research evaluates the progress and effectiveness of the reforms in commercial law towards a business-friendly environment in Tanzania, in comparison with Canada and Singapore.
The experience of Tanzania is compared with that of Canada and Singapore to see what each country can improve to enhance quick and fair enforcement of commercial law. The research proposes necessary global standards for procedures and national laws to offer a business-friendly environment and the use of appropriate technology. Solutions are proposed for tackling the challenges that delay the enforcement of commercial law, such as case management, funding, legal and procedural hindrances, laxity among staff, and abuse of court process among litigants, all in line with modern technology. The research finds that the proper use of technology has reduced case backlogs and the time taken to resolve commercial disputes, increased court integrity by minimizing the human contact in commercial law enforcement that may lead to solicitation of favors, and saved parties' time through online services. Each of the three countries faces a distinct challenge owing to its level of poverty and remoteness from online services, and how solutions are found in one country is a lesson to another. To conclude, this paper suggests solutions for improving commercial law enforcement mechanisms in line with modern technology. Technological transformation is essential for the enforcement of commercial laws.

Keywords: commercial law, enforcement, technology

Procedia PDF Downloads 33
303 A Distributed Smart Battery Management System – sBMS, for Stationary Energy Storage Applications

Authors: António J. Gano, Carmen Rangel

Abstract:

Currently, electric energy storage systems for stationary applications have seen increasing interest, namely with the integration of local renewable energy power sources into energy communities. Li-ion batteries are considered the leading electric storage devices to achieve this integration, and Battery Management Systems (BMS) are decisive for their control and optimum performance. In this work, the development of a smart BMS (sBMS) prototype with a modular distributed topology is described. The system, still under development, has a distributed architecture with modular characteristics to operate with different battery pack topologies and charge capacities, integrating adaptive algorithms for real-time monitoring of the functional state and management of multicellular Li-ion batteries, and is intended for application in the context of a local energy community fed by renewable energy sources. This sBMS includes three developed hardware units: (1) cell monitoring units (CMUs) for interfacing with each individual cell or module monitored within the battery pack; (2) a battery monitoring and switching unit (BMU) for global battery pack monitoring, thermal control, and functional operating state switching; (3) a main management and local control unit (MCU) for local sBMS management and control, also serving as a communications gateway to external systems and devices. This architecture is fully expandable to battery packs with a large number of cells or modules interconnected in series, as the several units have local data acquisition and processing capabilities, communicate over a standard CAN bus, and will be able to operate almost autonomously. The CMU units are intended to be used with Li-ion cells but can be used with other cell chemistries with output voltages within the 2.5 to 5 V range. The characteristics and specifications of the different units are described, including the implemented hardware solutions. 
The developed hardware supports both passive and active methods for charge equalization, considered fundamental functionalities for optimizing the performance and useful lifetime of a Li-ion battery pack. The functional characteristics of the different units of this sBMS, including the acquisition of different process variables using a flexible set of sensors, can support the development of custom algorithms for estimating the parameters defining the functional states of the battery pack (State-of-Charge, State-of-Health, etc.) as well as different charge equalization strategies and algorithms. This sBMS is intended to interface with other systems and devices using standard communication protocols, like those used by the Internet of Things. In the future, this sBMS architecture can evolve to a fully decentralized topology, with all the units using Wi-Fi protocols and forming a mesh network, making the MCU unit unnecessary. The status of the work in progress is reported, leading to conclusions on the system already executed, considering the implemented hardware solution not only as a fully functional, advanced, and configurable battery management system but also as a platform for developing custom algorithms and optimization strategies to achieve better performance of stationary electric energy storage devices.
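As an illustrative sketch (not the authors' firmware), the division of labor between CMU-level cell readings and BMU-level aggregation, together with a simple passive-equalization trigger, might look like the following; all names, thresholds, and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CellReading:
    cmu_id: int     # which cell monitoring unit reported
    voltage: float  # cell terminal voltage, V
    temp_c: float   # cell temperature, °C

def pack_state(readings):
    # BMU-level aggregation: pack voltage is the sum of the series cell
    # voltages; the hottest cell drives thermal-control decisions
    pack_v = round(sum(r.voltage for r in readings), 2)
    max_t = max(r.temp_c for r in readings)
    return pack_v, max_t

def cells_to_balance(readings, threshold_v=0.05):
    # passive equalization trigger: flag cells whose voltage exceeds the
    # weakest cell's by more than `threshold_v` volts
    v_min = min(r.voltage for r in readings)
    return [r.cmu_id for r in readings if r.voltage - v_min > threshold_v]

readings = [CellReading(1, 3.70, 25.0), CellReading(2, 3.62, 26.5), CellReading(3, 3.71, 25.2)]
print(pack_state(readings))        # (11.03, 26.5)
print(cells_to_balance(readings))  # [1, 3]
```

In the real system, each `CellReading` would arrive as a CAN frame rather than an in-memory object, but the aggregation logic is the same.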

Keywords: Li-ion battery, smart BMS, stationary electric storage, distributed BMS

Procedia PDF Downloads 65
302 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov telescopes, located at Cerro del Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply standard directives on electromagnetic compatibility to astronomical observatories. Cherenkov telescopes are able to provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than the local speed of light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, called Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface, which focuses the radiation on a camera composed of an array of high-speed photosensors that are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost, and power consumption. Each pixel incorporates a photosensor able to discriminate single photons, together with the corresponding readout electronics. The first LST is already commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry a Conformité Européenne (CE) marking. This demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations. 
In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests, originally designed to be performed on equipment of limited dimensions, to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the largest disturbances, was also performed. Obtaining the CE mark requires knowing what the harmonized standards are and how the specific requirements are to be elaborated. For this type of large installation, the tests to be carried out need to be adapted and developed. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 86
301 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components: the first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected averages as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred over one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem in which exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found, or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible rosters for each crew member. 
A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in this graph, solved with a labeling algorithm. Since the penalization is quadratic, a method for handling the resulting non-additive shortest path problem with a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method for handling a non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
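To illustrate why the quadratic penalization prefers several small deviations over one large deviation, consider this minimal sketch; the fly-hour figures are hypothetical, not the airline's data:

```python
def fairness_penalty(fly_hours, expected_avg, weight=1.0):
    # quadratic penalty on deviations from the expected average fly-hours:
    # squaring makes one large deviation cost more than several small ones
    return weight * sum((h - expected_avg) ** 2 for h in fly_hours)

# two rosters with the same total absolute deviation (4 h) from an 80 h average
balanced = [81, 79, 81, 79]  # four deviations of 1 h each
skewed = [84, 80, 80, 76]    # two deviations of 4 h each
print(fairness_penalty(balanced, 80))  # 4.0
print(fairness_penalty(skewed, 80))    # 32.0
```

Under a linear (absolute-value) penalty the two rosters would cost the same; the quadratic form is what steers the optimizer toward the balanced assignment.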

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 121
300 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. 
By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings; these datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. It concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures: by utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality issues, modeling uncertainties, and computational complexity. To address these obstacles and improve the accuracy of estimates, further research and methodological advancements are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.

Keywords: earthquake, out-of-sequence thrust, disaster, human life

Procedia PDF Downloads 46
299 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system; it is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was based mostly on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified with real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices provides access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind measurements, as well as external forecasts of temperature and wind for the next 24 hours, served as input data. 
Due to the specificity of CNN-type networks, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers; in the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study; those giving the best results were selected, and a comparison was made with models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real 'big' data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
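The convolution-and-pooling pipeline described above can be sketched schematically in NumPy; this is not the authors' architecture, and the shapes, channel counts, and (random) weights are purely illustrative:

```python
import numpy as np

def conv1d_relu(x, kernels, bias):
    # x: (time, channels); kernels: (width, channels, out_channels)
    k = kernels.shape[0]
    out = np.empty((x.shape[0] - k + 1, kernels.shape[2]))
    for t in range(out.shape[0]):
        # slide the kernel window over time, contracting time and channel axes
        out[t] = np.tensordot(x[t:t + k], kernels, axes=([0, 1], [0, 1])) + bias
    return np.maximum(out, 0.0)  # ReLU activation

def max_pool(x, size=2):
    # downsample the time axis by taking the max over non-overlapping windows
    t = x.shape[0] // size
    return x[:t * size].reshape(t, size, -1).max(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(48, 6))           # 48 hourly steps, 6 input channels
h = max_pool(conv1d_relu(x, rng.normal(size=(3, 6, 8)), np.zeros(8)))
w_out = rng.normal(size=(h.size, 24))  # dense head mapping features to output
y = h.ravel() @ w_out                  # 24 hourly PM10 predictions
print(y.shape)                         # (24,)
```

A production model would stack several such conv/pool layers and learn the weights by minimizing the mean square error, as the abstract describes.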

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 113
298 Multivariate Ecoregion Analysis of Nutrient Runoff From Agricultural Land Uses in North America

Authors: Austin P. Hopkins, R. Daren Harmel, Jim A Ippolito, P. J. A. Kleinman, D. Sahoo

Abstract:

Field-scale runoff and water quality data are critical to understanding the fate and transport of nutrients applied to agricultural lands and to minimizing their off-site transport, because it is at that scale that agricultural management decisions are typically made based on hydrologic, soil, and land use factors. However, regional influences such as precipitation, temperature, and prevailing cropping systems and land use patterns also impact nutrient runoff. In the present study, the recently updated MANAGE (Measured Annual Nutrient loads from Agricultural Environments) database was used to conduct an ecoregion-level analysis of nitrogen and phosphorus runoff from agricultural lands in North America. Annual N and P runoff loads for cropland and grasslands in North American Level II EPA ecoregions are presented, and the impact of factors such as land use, tillage, and fertilizer timing and placement on N and P runoff is analyzed. Specifically, we compiled annual N and P runoff load data (i.e., dissolved, particulate, and total N and P, kg/ha/yr) for each Level II EPA ecoregion and for various agricultural management practices (land use, tillage, fertilizer timing, fertilizer placement) within each ecoregion to showcase the analyses possible with the data in MANAGE. Potential differences in N and P runoff loads were evaluated between and within ecoregions with statistical and graphical approaches. Because the data were not normally distributed, non-parametric analyses, mainly Mann-Whitney tests, were conducted in R on median values weighted by the site-years of data, and Dunn tests and box-and-whisker plots were used to visually and statistically evaluate significant differences. Of the 50 North American ecoregions, 11 were found to have sufficient data and site-years to be included in the analysis. 
When examining ecoregions alone, it was observed that ER 9.2 (Temperate Prairies) had a significantly higher total N load, at 11.7 kg/ha/yr, than ER 9.4 (South Central Semi-Arid Prairies), with a total N load of 2.4 kg/ha/yr. For total P, ER 8.5 (Mississippi Alluvial and Southeast USA Coastal Plains) had a higher load, at 3.0 kg/ha/yr, than ER 8.2 (Southeastern USA Plains), with a load of 0.25 kg/ha/yr. Tillage and land use had strong impacts on nutrient loads. In ER 9.2 (Temperate Prairies), conventional tillage had a total N load of 36.0 kg/ha/yr, while conservation tillage had a total N load of 4.8 kg/ha/yr. In all relevant ecoregions, when corn was the predominant land use, total N levels were significantly higher than for grassland or other grains. In ER 8.4 (Ozark-Ouachita), corn had a total N load of 22.1 kg/ha/yr, while grazed grassland had a total N load of 2.9 kg/ha/yr. The intricacies of how agricultural management practices interact with one another and with ecological conditions to affect continental aquatic nutrient loads still need to be explored. This research provides a stepping stone toward further understanding of land and resource stewardship and best management practices.
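A minimal sketch of the non-parametric comparison described above, using SciPy rather than R; the load values are invented for illustration, not taken from MANAGE:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# hypothetical annual total-N runoff loads (kg/ha/yr) for site-years
# in two made-up ecoregions
er_a = np.array([9.8, 12.5, 11.0, 13.2, 10.4, 12.1])
er_b = np.array([1.9, 2.6, 2.2, 3.1, 2.4, 2.0])

# the loads are not normally distributed, so a rank-based Mann-Whitney
# test compares the two groups without assuming normality
stat, p = mannwhitneyu(er_a, er_b, alternative="two-sided")
print(round(float(np.median(er_a)), 2), round(float(np.median(er_b)), 2))
print(p < 0.05)  # significant difference for these clearly separated samples
```

Pairwise Dunn tests play the analogous role when more than two ecoregions are compared at once, after an overall Kruskal-Wallis test.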

Keywords: water quality, ecoregions, nitrogen, phosphorus, agriculture, best management practices, land use

Procedia PDF Downloads 59
297 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences

Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur

Abstract:

In four studies, we examined whether sellers and buyers differ not only in subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy for objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should emerge only under certain boundary conditions. In Study 1, a published dataset was reanalyzed in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under time constraints. A published dataset was reanalyzed in which 84 participants were asked to provide selling and buying prices for monetary lotteries under three deliberation time conditions (5, 10, and 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time. 
Post-hoc tests revealed main effects of perspective in the 5-second and 10-second deliberation time conditions, but not in the 15-second condition; thus, sellers' EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times to test whether the effect decays with repeated presentation. The difference between buyers' and sellers' EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was done by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than a buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one's perspective in a trading negotiation may improve price accuracy.
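The relative EV sensitivity measure can be sketched as follows, with invented prices for five hypothetical lotteries (illustrative only, not the reanalyzed datasets):

```python
from scipy.stats import spearmanr

# hypothetical expected values of five lotteries and one participant's prices
evs = [2.0, 4.0, 6.0, 8.0, 10.0]
seller_prices = [3.0, 4.5, 6.5, 8.0, 11.0]  # price ordering tracks the EVs exactly
buyer_prices = [2.5, 2.0, 4.0, 3.5, 5.0]    # price ordering is noisier

# relative EV sensitivity = Spearman rank correlation between a
# participant's prices and the lotteries' expected values
rho_seller, _ = spearmanr(seller_prices, evs)
rho_buyer, _ = spearmanr(buyer_prices, evs)
print(round(rho_seller, 2), round(rho_buyer, 2))  # 1.0 0.8
```

Because Spearman correlation depends only on rank order, the measure ignores the overall price gap between perspectives (the endowment effect itself) and isolates how well each perspective orders the lotteries by value.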

Keywords: decision making, endowment effect, pricing, loss aversion, loss attention

Procedia PDF Downloads 314
296 Occipital Squama Convexity and Neurocranial Covariation in Extant Homo sapiens

Authors: Miranda E. Karban

Abstract:

A distinctive pattern of occipital squama convexity, known as the occipital bun or chignon, has traditionally been considered a derived Neandertal trait. However, some early modern and extant Homo sapiens share similar occipital bone morphology, showing pronounced internal and external occipital squama curvature and paralambdoidal flattening. It has been posited that these morphological patterns are homologous in the two groups, but this claim remains disputed. Many developmental hypotheses have been proposed, including assertions that the chignon represents a developmental response to a long and narrow cranial vault, a narrow or flexed basicranium, or a prognathic face. These claims, however, remain to be metrically quantified in a large subadult sample, and little is known about the feature’s developmental, functional, or evolutionary significance. This study assesses patterns of chignon development and covariation in a comparative sample of extant human growth study cephalograms. Cephalograms from a total of 549 European-derived North American subjects (286 male, 263 female) were scored on a 5-stage ranking system of chignon prominence. Occipital squama shape was found to exist along a continuum, with 34 subjects (6.19%) possessing defined chignons, and 54 subjects (9.84%) possessing very little occipital squama convexity. From this larger sample, those subjects represented by a complete radiographic series were selected for metric analysis. Measurements were collected from lateral and posteroanterior (PA) cephalograms of 26 subjects (16 male, 10 female), each represented at 3 longitudinal age groups. Age group 1 (range: 3.0-6.0 years) includes subjects during a period of rapid brain growth. Age group 2 (range: 8.0-9.5 years) includes subjects during a stage in which brain growth has largely ceased, but cranial and facial development continues. Age group 3 (range: 15.9-20.4 years) includes subjects at their adult stage. 
A total of 16 landmarks and 153 sliding semi-landmarks were digitized at each age point, and geometric morphometric analyses, including relative warps analysis and two-block partial least squares analysis, were conducted to study covariation patterns between midsagittal occipital bone shape and other aspects of craniofacial morphology. A convex occipital squama was found to covary significantly with a low, elongated neurocranial vault, and this pattern was found to exist from the youngest age group. Other tested patterns of covariation, including cranial and basicranial breadth, basicranial angle, midcoronal cranial vault shape, and facial prognathism, were not found to be significant at any age group. These results suggest that the chignon, at least in this sample, should not be considered an independent feature, but rather the result of developmental interactions relating to neurocranial elongation. While more work must be done to quantify chignon morphology in fossil subadults, this study finds no evidence to disprove the developmental homology of the feature in modern humans and Neandertals.
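The core of a two-block partial least squares analysis can be sketched as a singular value decomposition of the cross-covariance matrix between the two landmark blocks; the data below are synthetic stand-ins for Procrustes-aligned coordinates, not the cephalogram measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 26  # subjects, matching the size of the metric sample

# synthetic blocks sharing one latent factor: block 1 stands in for the
# midsagittal occipital outline, block 2 for the neurocranial vault
shared = rng.normal(size=(n, 1))
block1 = shared @ rng.normal(size=(1, 10)) + 0.2 * rng.normal(size=(n, 10))
block2 = shared @ rng.normal(size=(1, 8)) + 0.2 * rng.normal(size=(n, 8))

# two-block PLS: singular vectors of the cross-covariance matrix give
# paired axes of maximal covariation between the blocks
x = block1 - block1.mean(axis=0)
y = block2 - block2.mean(axis=0)
u, s, vt = np.linalg.svd(x.T @ y / (n - 1))
scores1, scores2 = x @ u[:, 0], y @ vt[0]

# correlation of the paired first-axis scores summarizes the covariation
r = abs(np.corrcoef(scores1, scores2)[0, 1])
print(r > 0.9)
```

In practice the significance of the observed covariation would be assessed with a permutation test, shuffling subjects in one block relative to the other.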

Keywords: chignon, craniofacial covariation, human cranial development, longitudinal growth study, occipital bun

Procedia PDF Downloads 164
295 Untangling the Greek Seafood Market: Authentication of Crustacean Products Using DNA-Barcoding Methodologies

Authors: Z. Giagkazoglou, D. Loukovitis, C. Gubili, A. Imsiridou

Abstract:

Along with the increase in human population, demand for seafood has increased. Despite the strict labeling regulations that exist for most marketed species in the European Union, seafood substitution remains a persistent global issue. Food fraud occurs when food products are traded in a false or misleading way; mislabeling occurs when one species is substituted and traded under the name of another, whether intentionally or unintentionally. Crustaceans are among the most regularly consumed seafood in Greece. Shrimps, prawns, lobsters, crayfish, and crabs are considered a delicacy and can be encountered in a variety of market presentations (fresh, frozen, pre-cooked, peeled, etc.). With most of the external traits removed, such products are susceptible to species substitution. DNA barcoding has proven to be the most accurate method for the detection of fraudulent seafood products. To the best of our knowledge, this is the first time the DNA barcoding methodology has been used in Greece to investigate labeling practices for crustacean products available on the market. A total of 100 tissue samples were collected from various retailers and markets across four Greek cities. In an effort to cover the widest range of products possible, different market presentations were targeted (fresh, frozen, and cooked). Genomic DNA was extracted using the DNeasy Blood & Tissue Kit, according to the manufacturer's instructions. The mitochondrial gene selected as the target region of the analysis was cytochrome c oxidase subunit I (COI). PCR products were purified and sequenced using an ABI 3500 Genetic Analyzer. Sequences were manually checked and edited using BioEdit software and compared against those available in the GenBank and BOLD databases. Statistical analyses were conducted in R and PAST software. For most samples, COI amplification was successful, and species-level identification was possible. 
The preliminary results indicate moderate mislabeling rates (25%) among the identified samples. Mislabeling was most commonly detected in fresh products, with 50% of the samples in this category labeled incorrectly. Overall, the mislabeling rates detected by our study probably reflect some degree of unintentional misidentification and a lack of knowledge surrounding the legal designations among both retailers and consumers. For some species of crustaceans (e.g., Squilla mantis), mislabeling also appears to be affected by local labeling practices: across Greece, S. mantis is sold in the market under two common names, but only one is recognized by the country's legislation, so any mislabeling is probably not profit-motivated. However, the substitution of the speckled shrimp (Metapenaeus monoceros) for the distinct giant river prawn (Macrobrachium rosenbergii) is a clear example of deliberate fraudulent substitution aimed at profit. To the best of our knowledge, no scientific study investigating substitution and mislabeling rates in crustaceans has previously been conducted in Greece. For a better understanding of Greece's seafood market, similar DNA barcoding studies should be conducted in other regions of touristic importance (e.g., the Greek islands). Regardless, expanding the list of species-specific designations for crustaceans in the country is advised.
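Once each COI sequence has been matched to a species, the final label-versus-identification comparison reduces to a simple tally; a sketch with hypothetical records mirroring the cases discussed above:

```python
# hypothetical records: (species name on the label, species identified
# from the sample's COI sequence) -- not the study's actual samples
records = [
    ("Macrobrachium rosenbergii", "Metapenaeus monoceros"),  # substitution
    ("Squilla mantis", "Squilla mantis"),
    ("Penaeus kerathurus", "Penaeus kerathurus"),
    ("Macrobrachium rosenbergii", "Macrobrachium rosenbergii"),
]

mislabeled = [label for label, identified in records if label != identified]
rate = 100 * len(mislabeled) / len(records)
print(f"{rate:.0f}% mislabeled")  # 25% mislabeled
```

In the study itself the identification step is the hard part (database coverage, ambiguous matches between closely related species); the tally above assumes that step has already produced a single confident species per sample.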

Keywords: COI gene, food fraud, labelling control, molecular identification

Procedia PDF Downloads 39
294 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. 
Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems and can make a global solution search possible, as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. The finite element model is used to study free vibrations of laminated composite plates with fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of a structure as the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
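As a minimal, self-contained sketch of the GA side of such a combined approach, the code below replaces the ABAQUS finite element analysis with a synthetic "measured" damage field and searches for the parameters (amplitude, center, spread) of a bivariate Gaussian stiffness-degradation function. The population size, genetic operators, and parameter ranges are illustrative assumptions, not the authors' settings.

```python
import math
import random

GRID = [(x, y) for x in range(10) for y in range(10)]  # toy 10x10 plate mesh

def damage_field(params):
    """Bivariate Gaussian stiffness-degradation field sampled over the grid."""
    a, x0, y0, s = params
    denom = 2.0 * s * s + 1e-12  # guard against a degenerate spread
    return [a * math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / denom)
            for x, y in GRID]

def fitness(params, measured):
    """Sum of squared differences between candidate and 'measured' fields."""
    return sum((c - m) ** 2 for c, m in zip(damage_field(params), measured))

def evolve(measured, pop_size=40, generations=60, seed=1):
    """Tournament selection + uniform crossover + Gaussian mutation + elitism."""
    rng = random.Random(seed)

    def rand_params():
        return [rng.uniform(0.0, 1.0), rng.uniform(0.0, 9.0),
                rng.uniform(0.0, 9.0), rng.uniform(0.5, 4.0)]

    def tournament(pop):
        return min(rng.sample(pop, 3), key=lambda p: fitness(p, measured))

    pop = [rand_params() for _ in range(pop_size)]
    best = min(pop, key=lambda p: fitness(p, measured))
    initial_error = fitness(best, measured)
    for _ in range(generations):
        new_pop = [best]  # elitism: the best error never increases
        while len(new_pop) < pop_size:
            mum, dad = tournament(pop), tournament(pop)
            child = [(m if rng.random() < 0.5 else d) + rng.gauss(0.0, 0.1)
                     for m, d in zip(mum, dad)]
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=lambda p: fitness(p, measured))
    return best, initial_error, fitness(best, measured)
```

In the actual method, `fitness` would instead compare first-mode shapes computed by the FE model against measured modal data, which is where the coupling with ABAQUS enters.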

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 240
293 A Study on Green Building Certification Systems within the Context of Anticipatory Systems

Authors: Taner Izzet Acarer, Ece Ceylan Baba

Abstract:

This paper examines green building certification systems and their current processes in comparison with anticipatory systems. Rapid growth of the human population and depletion of natural resources are causing irreparable damage to urban and natural environments. In this context, the concept of ‘sustainable architecture’ emerged in the 20th century so as to establish and maintain standards for livable urban spaces, to improve the quality of urban life, and to preserve natural resources for future generations. The construction industry is responsible for a large part of resource consumption, and it is believed that the ‘green building’ designs that emerge in the construction industry can reduce environmental problems and contribute to sustainable development around the world. A building must meet a specific set of criteria, set forth through various certification systems, in order to be eligible for designation as a green building. It is disputable whether the methods used by green building certification systems today truly serve the purposes of creating a sustainable world. Accordingly, this study investigates the rating systems used by the most popular green building certification programs, including LEED (Leadership in Energy and Environmental Design), BREEAM (Building Research Establishment Environmental Assessment Method), and DGNB (Deutsche Gesellschaft für Nachhaltiges Bauen System), in terms of ‘Anticipatory Systems’ in accordance with the certification processes and their goals, while discussing their contribution to architecture. The basic methodology of the study is as follows. First, a brief historical overview and literature review of green buildings and certification systems are presented. Second, the processes of green building certification systems are examined with the help of anticipatory systems. 
Anticipatory Systems are systems designed to generate action-oriented projections and to forecast potential side effects using the most current data. Anticipatory Systems pull the future into the present and take action based on future predictions. Although they do not claim to see into the future, they can provide foresight data. When shaping the foresight data, Anticipatory Systems use feedforward instead of feedback, enabling them to forecast a system’s behavior and potential side effects by establishing a correlation between the system’s present/past behavior and projected results. In this study, the goals and current status of the LEED, BREEAM, and DGNB rating systems, which were created using the feedback technique, are examined and presented in a chart. In addition, by examining these rating systems with an anticipatory system that uses the feedforward method, the negative influence of potential side effects on the purpose and current status of the rating systems is shown in another chart. By comparing the two sets of data, the findings show that the rating systems are used for goals different from the purposes they claim to pursue. In conclusion, the side effects of green building certification systems are identified by using anticipatory system models.

Keywords: anticipatory systems, BREEAM, certificate systems, DGNB, green buildings, LEED

Procedia PDF Downloads 200
292 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. The pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in capturing non-linear relationships and dealing with large amounts of uncertain data. 
Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which was found to be an appropriate tool for predicting the different pavement performance indices from various influencing factors. Subsequently, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model’s input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) for each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by the National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
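The architecture described maps 33 inputs (11 distress types at 3 severity levels each) to a PCI bounded to [0, 100]. The sketch below is a minimal, untrained forward pass illustrating that input/output shape only; the hidden-layer size, weight ranges, and activation choices are assumptions, not the authors' trained model.

```python
import math
import random

def init_network(n_in=33, n_hidden=16, seed=0):
    """Random, untrained weights for a 33-16-1 network (sizes are assumptions)."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    return w1, w2

def predict_pci(features, net):
    """Forward pass: 33 damage-area percentages -> PCI bounded to [0, 100]."""
    w1, w2 = net
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features))) for row in w1]
    z = sum(w * h for w, h in zip(w2, hidden))
    return 100.0 / (1.0 + math.exp(-z))  # sigmoid output scaled to the PCI range
```

Scaling a sigmoid output by 100 guarantees predictions stay inside the valid PCI range by construction, which is one common way to encode the 0-100 bound directly in the network rather than clipping afterwards.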

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 108
291 Chemical and Biomolecular Detection at a Polarizable Electrical Interface

Authors: Nicholas Mavrogiannis, Francesca Crivellari, Zachary Gagnon

Abstract:

Development of low-cost, rapid, sensitive, and portable biosensing systems is important for the detection and prevention of disease in developing countries, biowarfare/antiterrorism applications, environmental monitoring, point-of-care diagnostic testing, and basic biological research. Currently, the most established, commercially available, and widespread assays for portable point-of-care detection and disease testing are paper-based dipstick and lateral flow test strips. These paper-based devices are often small, cheap, and simple to operate. The last three decades in particular have seen an emergence of these assays in diagnostic settings for detection of pregnancy, HIV/AIDS, blood glucose, influenza, urinary protein, cardiovascular disease, respiratory infections, and blood chemistries. Such assays are widely available largely because they are inexpensive, lightweight, portable, and simple to operate, and a few platforms are capable of multiplexed detection for a small number of sample targets. However, there is a critical need for sensitive, quantitative, and multiplexed detection capabilities for point-of-care diagnostics and for the detection and prevention of disease in the developing world that cannot be satisfied by current state-of-the-art paper-based assays. For example, applications including the detection of cardiac and cancer biomarkers and biothreat applications require sensitive multiplexed detection of analytes in the nM and pM range, and cannot currently be served by inexpensive portable platforms due to their lack of sensitivity, limited quantitative capabilities, and often unreliable performance. In this talk, inexpensive label-free biomolecular detection at liquid interfaces using a newly discovered electrokinetic phenomenon known as fluidic dielectrophoresis (fDEP) is demonstrated. 
The electrokinetic approach involves exploiting the electrical mismatches between two aqueous liquid streams forced to flow side-by-side in a microfluidic T-channel. In this system, one fluid stream is engineered to have a higher conductivity relative to its neighbor, which has a higher permittivity. When a “low” frequency (< 1 MHz) alternating current (AC) electric field is applied normal to this fluidic electrical interface, the fluid stream with high conductivity displaces into the less conductive stream. Conversely, when a “high” frequency (20 MHz) AC electric field is applied, the high-permittivity stream deflects across the microfluidic channel. Between these two events there is, however, a critical frequency sensitive to the electrical differences between the fluid phases – the fDEP crossover frequency – at which no fluid deflection is observed and the interface remains fixed when exposed to an external field. To perform biomolecular detection, two streams flow side-by-side in a microfluidic T-channel: one fluid stream with an analyte of choice and an adjacent stream with a specific receptor for the chosen target. The two fluid streams merge, and the fDEP crossover frequency is measured at different axial positions down the resulting liquid interface.
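The frequency-dependent switchover described here is governed by interfacial (Maxwell-Wagner) polarization. As a rough, illustrative estimate only (this is the textbook Maxwell-Wagner relaxation expression, not necessarily the authors' crossover formula), the characteristic frequency of a conductivity/permittivity-mismatched interface can be sketched as follows; the example fluid properties are invented.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_wagner_frequency(sigma1, sigma2, eps_r1, eps_r2):
    """Interfacial (Maxwell-Wagner) relaxation frequency in Hz.

    sigma1, sigma2: conductivities of the two streams, S/m.
    eps_r1, eps_r2: relative permittivities of the two streams.
    """
    return (sigma1 + sigma2) / (2.0 * math.pi * EPS0 * (eps_r1 + eps_r2))
```

With assumed aqueous values (0.01 S/m vs. 0.001 S/m, relative permittivity ~78 each), this estimate lands in the low-MHz range, consistent with the "low" (< 1 MHz) and "high" (20 MHz) regimes quoted in the abstract.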

Keywords: biodetection, fluidic dielectrophoresis, interfacial polarization, liquid interface

Procedia PDF Downloads 424
290 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study has been obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. Also, publicly available information on doctors' characteristics such as gender and experience has been extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. 
The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinics resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). In addition, this research identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
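The comparison metric used above, mean absolute percentage error (MAPE), is straightforward to compute; the sketch below also ranks the models using the error figures reported in the abstract (the two-point example in the test is invented, and the formula assumes actual durations are nonzero).

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent (assumes nonzero actuals)."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)

# Test-set errors as reported in the abstract, used to rank the approaches.
reported_mape = {
    "deep learning": 12.24,
    "gradient boosting machine": 13.26,
    "random forest": 14.71,
    "experience-based baseline": 25.8,
}
best_model = min(reported_mape, key=reported_mape.get)
```

Taking the minimum over the reported table recovers the ranking stated in the abstract, with deep learning ahead of both tree ensembles and all three well ahead of the experience-based baseline.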

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 94
289 Portable Environmental Parameter Monitor Based on STM32

Authors: Liang Zhao, Chongquan Zhong

Abstract:

Introduction: According to statistics, people spend 80% to 90% of their time indoors, so indoor air quality, whether at home or in the office, greatly impacts quality of life, health, and work efficiency. Therefore, indoor air quality is very important to human activities. With the acceleration of urbanization, people are spending more time on indoor activities. Time spent in indoor environments, the size of living spaces, and the frequency of interior decoration have all increased. However, housing decoration materials contain formaldehyde and other harmful substances, causing environmental and air quality problems, which have brought serious harm to countless families and attracted growing attention. According to World Health Organization statistics, the indoor environments in more than 30% of buildings in China are polluted by poisonous and harmful gases. Indoor pollution has caused various health problems, and these widespread public health problems can lead to respiratory diseases. Long-term inhalation of low-concentration formaldehyde can cause persistent headache, insomnia, weakness, palpitation, weight loss, and vomiting, which seriously impact human health and safety. On the other hand, as for offices, some surveys show that good indoor air quality helps to enthuse the staff and improve work efficiency by 2%-16%. Therefore, people need to further understand their living and working environments. There is a need for easy-to-use indoor environment monitoring instruments, with which users only have to power up the device to monitor the environmental parameters. The corresponding real-time data can be displayed on the screen for analysis. An environment monitor should also have a sensitive alarm function and send an alarm when harmful gases such as formaldehyde, CO, or SO2 exceed levels that are harmful to the human body. 
System design: According to the monitoring requirements for various gases, temperature, and humidity, we designed a portable, light, real-time, and accurate monitor for various environmental parameters, including temperature, humidity, formaldehyde, methane, and CO. This monitor generates an alarm signal when a monitored parameter exceeds its standard limit. It can conveniently measure a variety of harmful gases and provide the alarm function. It also has the advantages of small size and convenience to carry and use. It has a real-time display function, outputting the parameters on the LCD screen, and a real-time alarm function. Conclusions: This study is focused on the research and development of a portable parameter-monitoring instrument for indoor environments. On the platform of an STM32 development board, the monitored data are collected through external sensors. The STM32 platform handles the data acquisition and processing procedures and successfully monitors the real-time temperature, humidity, formaldehyde, CO, methane, and other environmental parameters. Real-time data are displayed on the LCD screen. The system is stable and can be used in different indoor places such as homes, hospitals, and offices. Meanwhile, the system adopts a modular design and is easy to port; with slight modifications, the scheme can be reused for similar monitoring functions. This monitor therefore has high research and application value.
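The alarm logic described (compare each sensor reading to a limit and raise an alarm on exceedance) can be modelled on the host side as below. The parameter names and limit values are illustrative assumptions, not the device's actual thresholds; the real firmware would run an equivalent loop on the STM32 against limits taken from national indoor-air standards.

```python
# Illustrative limits only; real thresholds come from indoor-air standards.
LIMITS = {
    "formaldehyde_mg_m3": 0.10,
    "co_ppm": 9.0,
    "methane_pct_lel": 10.0,
}

def check_readings(readings, limits=LIMITS):
    """Return the names of parameters whose readings exceed their limits.

    readings: {parameter_name: measured_value}. Parameters without a
    configured limit (e.g., temperature for display only) are skipped.
    """
    return [name for name, value in readings.items()
            if name in limits and value > limits[name]]
```

An empty return list means all monitored gases are within limits; a non-empty list would drive the buzzer/LCD alarm path in the firmware.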

Keywords: indoor air quality, gas concentration detection, embedded system, sensor

Procedia PDF Downloads 214
288 Diversity in the Community - The Disability Perspective

Authors: Sarah Reker, Christiane H. Kellner

Abstract:

From the perspective of people with disabilities, inequalities can also emerge from spatial segregation, the lack of social contacts, or limited economic resources. In order to reduce or even eliminate these disadvantages and increase general well-being, community-based participation as well as decentralisation efforts within exclusively residential homes are essential. Therefore, the new research project “Index for participation development and quality of life for persons with disabilities” (TeLe-Index, 2014-2016), which is anchored at the Technische Universität München in Munich and at a large residential complex and service provider for persons with disabilities on the outskirts of Munich, aims to assist the development of community-based living environments. People with disabilities should be able to participate in social life beyond the confines of the institution. Since a diverse society is a society in which different individual needs and wishes can emerge and be catered to, the ultimate goal of the project is to create an environment for all citizens – regardless of disability, age or ethnic background – that accommodates their daily activities and requirements. The UN Convention on the Rights of Persons with Disabilities, which Germany also ratified, postulates the necessity of user-centered design, especially when it comes to evaluating the individual needs and wishes of all citizens. Therefore, a multidimensional approach is required. Based on this insight, the structure of the town-like center will be remodeled to open up the community to all people. This strategy should lead to more equal opportunities and open the way for a much more diverse community. Therefore, macro-level research questions were inspired by quality of life theory and were formulated as follows for different dimensions: •The user dimension: what needs and necessities can we identify? Are needs person-related? Are there any options to choose from? 
What type of quality of life can we identify? •The economic dimension: what resources (both material and staff-related) are available in the region? (How) are they used? What costs (can) arise and what effects do they entail? •The environment dimension: what “environmental factors” such as access (mobility and absence of barriers) prove beneficial or impedimental? In this context, we have provided academic supervision and support for three projects (the construction of a new school, inclusive housing for children and teenagers with disabilities, and the professionalization of employees through person-centered thinking). Since we cannot present all the issues of the umbrella project within the conference framework, we will be focusing on one project more in-depth, namely “Outpatient Housing Options for Children and Teenagers with Disabilities”. The insights we have obtained so far will enable us to present the intermediate results of our evaluation. The most central questions pertaining to this part of the research were the following: •How have the existing network relations been designed? •What meaning (or significance) do the existing service offers and structures have for the everyday life of an external residential group? These issues underpinned the environmental analyses as well as the qualitative guided interviews and qualitative network analyses we carried out.

Keywords: decentralisation, environmental analyses, outpatient housing options for children and teenagers with disabilities, qualitative network analyses

Procedia PDF Downloads 340
287 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. 
This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 36
286 Consumers and Voters’ Choice: Two Different Contexts with a Powerful Behavioural Parallel

Authors: Valentina Dolmova

Abstract:

What consumers choose to buy and whom voters select on election days are two questions that have captivated the interest of both academics and practitioners for many decades. The importance of understanding what influences the behavior of those groups and whether or not we can predict or control it fuels a steady stream of research in a range of fields. Looking only at the past 40 years, more than 70 thousand scientific papers have been published in each field – consumer behavior and political psychology, respectively. From marketing, economics, and the science of persuasion to political and cognitive psychology, we have all remained heavily engaged. The ever-evolving technology, inevitable socio-cultural shifts, global economic conditions, and much more play an important role in choice equations regardless of context. On one hand, this makes the research efforts always relevant and needed. On the other, the relatively low number of cross-field collaborations, which seem to have picked up only in more recent years, leaves the existing findings isolated in framed bubbles. By performing systematic research across both areas of psychology and building a parallel between theories and factors of influence, however, we find that there is not only a definitive common ground between the behaviors of consumers and voters but that we are moving towards a global model of choice. This means that the lines between contexts are fading, which has a direct implication on what we should focus on when predicting or navigating buyers' and voters’ behavior. Internal and external factors in four main categories determine the choices we make as consumers and as voters. Together, personal, psychological, social, and cultural factors create a holistic framework through which all stimuli in relation to a particular product or political party are filtered. The analogy “consumer-voter” solidifies further. 
Leading academics suggest that this fundamental parallel is the key to successfully managing political and consumer brands alike. However, we distinguish four additional key stimuli that relate to those factor categories (1/ opportunity costs; 2/ the memory of the past; 3/ recognisable figures/faces; and 4/ conflict), arguing that the level of expertise a person has determines the prevalence of factors or specific stimuli. Our efforts take into account global trends such as the establishment of “celebrity politics” and the image of “ethically concerned consumer brands”, which bridge the gap between contexts to an even greater extent. Scientists and practitioners are pushed to accept the transformative nature of both fields in social psychology. Existing blind spots, as well as the limited number of studies conducted outside American and European societies, open up space for more collaborative efforts in this highly demanding and lucrative field. A mixed method of research tests three main hypotheses: the first two are focused on the level of irrelevance of context when comparing voting and consumer behavior – both through the factor and stimulus lenses – and the third on determining whether or not the level of expertise in any field skews the weight of the prism we are more likely to choose when evaluating options.

Keywords: buyers’ behaviour, decision-making, voters’ behaviour, social psychology

Procedia PDF Downloads 133
285 A Rural Journey of Integrating Interprofessional Education to Foster Trust

Authors: Julia Wimmers Klick

Abstract:

Interprofessional Education (IPE) is widely recognized as a valuable approach in healthcare education, despite the challenges it presents. This study explores interprofessional (IP) surface anatomy lab sessions, with a focus on fostering trust and collaboration among healthcare students. The research is conducted within the context of rural healthcare settings in British Columbia (BC), where a medical school and a physical therapy (PT) program operate under the Faculty of Medicine at the University of British Columbia (UBC). While IPE sessions addressing soft skills have been implemented, the integration of hard skills, such as anatomy, remains limited. To address this gap, a pilot feasibility study was conducted with a positive outcome; a follow-up study then used these IPE sessions to explore the influence of bonding and trust between medical and PT students. Data were collected through focus groups comprising participating students and faculty members, and a structured SWOC (Strengths, Weaknesses, Opportunities, and Challenges) analysis was conducted. The IPE sessions, three in total, each consisted of a 2.5-hour lab on surface anatomy, in which PT students took on the teaching role and medical students were newly exposed to surface anatomy. The focus of the study was on the relationship-building process and trust development between the two student groups, rather than assessing the acquisition of surface anatomy skills. Results indicated that the surface anatomy lab served as a suitable tool for the application and learning of soft skills. Faculty members observed positive outcomes, including productive interaction between students, a reversed hierarchy with PT students teaching medical students, practice of active listening skills, and use of the mutual language of anatomy. Notably, there was no grade assessment or external pressure to perform. 
The students also reported an overall positive experience; however, the specific impact on the development of soft skill competencies could not be definitively determined. Participants expressed a sense of feeling respected, welcomed, and included, all of which contributed to feeling safe. Within the small group environment, students experienced becoming part of a community of healthcare providers that bonded over a shared interest in health professions education. They enjoyed sharing diverse experiences of learning across their varied contexts, without fear of the judgment and reprisal that are often intimidating in single-profession contexts. During a joint Christmas party for both cohorts, faculty members observed students mingling, laughing, and forming bonds. This emphasized the importance of early bonding and trust development among healthcare colleagues, particularly in rural settings. In conclusion, the findings emphasize the potential of IPE sessions to enhance trust and collaboration among healthcare students, with implications for their future professional lives in rural settings. Early bonding and trust development are crucial in rural settings, where healthcare professionals often rely on each other. Future research should continue to explore the impact of content-concentrated IPE on the development of soft skill competencies.

Keywords: interprofessional education, rural healthcare settings, trust, surface anatomy

Procedia PDF Downloads 45
284 Sustainable Urban Growth of Neighborhoods: A Case Study of Alryad-Khartoum

Authors: Zuhal Eltayeb Awad

Abstract:

Alryad neighborhood is located in Khartoum town, the administrative center of the capital of Sudan. The neighborhood is one of the high-income residential areas, with low-density villa-type development. It was planned and developed in 1972 with large plots (600-875 m²), wide crossing roads, and a balanced environment. Recently, the area has transformed into a more compact urban form of high density and mixed-use, integrated development with more intensive use of land, including multi-storied apartments. The most important socio-economic process in the neighborhood has been the commercialization and densification of the area in connection with the displacement of the residential function. This transformation has affected the quality of the neighborhood and the inter-related features of the built environment. A case study approach was chosen to gather the necessary qualitative and quantitative data. A detailed survey of the existing development pattern was carried out over the whole area of Alryad. Data on the built and social environment of the neighborhood were collected through observations, interviews, and secondary data sources. The paper reflects a theoretical and empirical interest in the particular characteristics of the compact neighborhood, with its high density and mixed land uses, and their effect on the social wellbeing of the residents, all in the context of sustainable development. The research problem focuses on the challenges of transformation associated with the compact neighborhood, which has created multiple urban problems, e.g., strain on essential services (water supply, electricity, and drainage), congestion of streets, and demand for parking. The main objective of the study is to analyze the transformation of this area from residential use to commercial and administrative use. The study analyzed the current situation of the neighborhood against the five principles of sustainable neighborhood planning prepared by UN-Habitat.
The study found that the neighborhood has experienced the changes that occur in inner-city residential areas, and that the process of change originated from external forces due to the declining economic situation of the whole country. It is evident that non-residential uses have spread in an uncontrolled, unregulated, and haphazard manner, damaging the residential environment and causing deficiencies in infrastructure. The quality of urban life, and in particular levels of privacy, has been reduced; the neighborhood is gradually changing into a central business district that provides services to the whole of Khartoum town. The change of house type may be attributed to a demand-led housing market and the absence of policy. The results showed that Alryad is not fully sustainable and self-contained, although its street network characteristics and mixed land-use development are compatible with the principles of sustainability. The area of streets represents 27.4% of the total area of the neighborhood. Residential density is 4,620 people/km², which is lower than the recommended level, and block land-use specialization exceeds the recommended limit of 10% of blocks. Most inhabitants have high incomes, so there is no social mix in the neighborhood. The study recommended a revision of the current zoning regulations in order to control and regulate undesirable development in the neighborhood and to provide new solutions that promote its sustainable development.

Keywords: compact neighborhood, land uses, mixed use, residential area, transformation

Procedia PDF Downloads 106