Search results for: technical indicators
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3652

442 Applying Biculturalism in Studying Tourism Host Community Cultural Integrity and Individual Member Stress

Authors: Shawn P. Daly

Abstract:

Communities heavily engaged in the tourism industry discover their values intersect, meld, and conflict with those of visitors. Maintaining cultural integrity in the face of powerful external pressures causes stress among society members. This effect represents a less studied aspect of sustainable tourism. The present paper brings a perspective unique to the tourism literature: biculturalism. The grounded theories, coherent hypotheses, and validated constructs and indicators of biculturalism represent a sound base from which to consider sociocultural issues in sustainable tourism. Five models describe the psychological state of individuals operating at cultural crossroads: assimilation (joining the new culture), acculturation (grasping the new culture but remaining of the original culture), alternation (varying behavior to cultural context), multicultural (maintaining distinct cultures), and fusion (blending cultures). These five processes divide into two units of analysis (individual and society), permitting research questions at levels important for considering sociocultural sustainability. Acculturation modelling has morphed into dual processes of acculturation (new culture adaptation) and enculturation (original culture adaptation). This dichotomy divides sustainability research questions into human impacts from assimilation (acquiring the new culture, discarding the original), separation (rejecting the new culture, keeping the original), integration (acquiring the new culture, keeping the original), and marginalization (rejecting the new culture, discarding the original). Biculturalism is often cast in terms of its emotional, behavioral, and cognitive dimensions. Required cultural adjustments and varying levels of cultural competence lead to physical, psychological, and emotional outcomes, including depression, lowered life satisfaction and self-esteem, headaches, and back pain, or, conversely, enhanced career success, social skills, and lifestyles.
Numerous studies provide empirical scales and research hypotheses for sustainability research into tourism’s causal effects on local well-being. One key issue in applying biculturalism to sustainability scholarship concerns identification and specification of the alternative new culture contacting the local culture. Evidence exists for a tourism industry culture, a universal tourist culture, and location/event-specific tourist cultures. The biculturalism paradigm holds promise for researchers examining evolving cultural identity and integrity in response to mass tourism. In particular, confirmed constructs and scales simplify the operationalization of tourism sustainability studies in terms of human impact and adjustment.

Keywords: biculturalism, cultural integrity, psychological and sociocultural adjustment, tourist culture

Procedia PDF Downloads 387
441 Analysis of Sea Waves Characteristics and Assessment of Potential Wave Power in Egyptian Mediterranean Waters

Authors: Ahmed A. El-Gindy, Elham S. El-Nashar, Abdallah Nafaa, Sameh El-Kafrawy

Abstract:

Energy generation from marine sources has become one of the most attractive options since it is clean and environmentally friendly. Egypt has long shores along the Mediterranean, with important cities that need energy resources and with significant wave energy. No detailed studies have been done on the distribution of wave energy in Egyptian waters. The objective of this paper is to assess the wave power available in Egyptian waters in order to choose the most suitable devices for this area. The paper deals with the characteristics and power of offshore waves in Egyptian waters. Since field observations of waves are infrequent and technically demanding, the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis data for the Mediterranean, with a relatively coarse grid size of 0.75 degree, are considered in the present study for a preliminary assessment of sea wave characteristics and power. The data cover the period from 2012 to 2014 and comprise significant wave height (swh), mean wave period (mwp), and wave direction taken at six-hourly intervals, at seven chosen stations and at grid points covering Egyptian waters. The wave power (wp) formula was used to calculate the energy flux. Descriptive statistical analysis was carried out, including monthly means and standard deviations of swh, mwp, and wp. Percentiles of wave heights and their corresponding power were computed as a tool for choosing the technology best suited to each site. Surfer software was used to map the spatial distributions of wp. The analysis of data at the seven chosen stations determined the potential wp off important Egyptian cities. Offshore of Al Saloum and Marsa Matruh, the highest wp occurred in January and February, (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and October, (1.49-1.69) ± (1.45-1.74) kW/m.
In front of Alexandria and Rashid, the highest wp occurred in January and February, (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and September, (1.29-2.01) ± (1.31-1.83) kW/m. In front of Damietta and Port Said, the highest wp occurred in February, (14.29-17.61) ± (21.61-27.10) kW/m, and the lowest occurred in June, (0.94-0.96) ± (0.71-0.72) kW/m. In winter, the probabilities (in percent) of waves higher than 0.8 m were (76.56-80.33) ± (11.62-12.05) at Al Saloum and Marsa Matruh, (73.67-74.79) ± (16.21-18.59) at Alexandria and Rashid, and (66.28-68.69) ± (17.88-17.90) at Damietta and Port Said. In spring, the probabilities were (48.17-50.92) ± (5.79-6.56) at Al Saloum and Marsa Matruh, (39.38-43.59) ± (9.06-9.34) at Alexandria and Rashid, and (31.59-33.61) ± (10.72-11.25) at Damietta and Port Said. In summer, the probabilities were (57.70-66.67) ± (4.87-6.83) at Al Saloum and Marsa Matruh, (59.96-65.13) ± (9.14-9.35) at Alexandria and Rashid, and (46.38-49.28) ± (10.89-11.47) at Damietta and Port Said. In autumn, the probabilities were (58.75-59.56) ± (2.55-5.84) at Al Saloum and Marsa Matruh, (47.78-52.13) ± (3.11-7.08) at Alexandria and Rashid, and (41.16-42.52) ± (7.52-8.34) at Damietta and Port Said.
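The abstract cites "the wave power (wp) formula" without stating it. The standard deep-water wave energy flux, sketched below in Python, is presumably what was applied to the ECMWF swh and mwp fields; the function name and the use of mwp in place of the wave energy period are our assumptions, not details from the paper.

```python
import math

RHO = 1025.0  # seawater density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wave_power_kw_per_m(swh, period):
    """Deep-water wave energy flux per metre of wave crest, in kW/m.

    swh    : significant wave height (m)
    period : wave energy period (s); here approximated by the mean
             wave period (mwp), since that is the field the ECMWF
             reanalysis provides
    """
    return RHO * G ** 2 * swh ** 2 * period / (64.0 * math.pi) / 1000.0

# A 1.5 m significant wave height with a 6 s period:
print(round(wave_power_kw_per_m(1.5, 6.0), 2))
```

A monthly mean wp would then be the average of this quantity over all six-hourly records in the month.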

Keywords: distribution of sea waves energy, Egyptian Mediterranean waters, waves characteristics, waves power

Procedia PDF Downloads 168
440 The Diversity of Contexts within Which Adolescents Engage with Digital Media: Contributing to More Challenging Tasks for Parents and a Need for Third Party Mediation

Authors: Ifeanyi Adigwe, Thomas Van der Walt

Abstract:

Digital media has been integrated into the social and entertainment life of young children, and its impact appears to affect young people of all ages; it is believed that digital media will continue to shape the world of young children. Since technological advancement presents adolescents with diverse contexts, platforms, and avenues to engage with digital media outside the home environment and away from parents' supervision, a wide range of new challenges has further complicated the already difficult tasks of parents and altered the landscape of parenting. Although adolescents now have access to a wide range of digital media technologies both at home and in the learning environment, parenting practices such as active, restrictive, co-use, participatory, and technical mediation are important in mitigating the online risks adolescents may encounter as a result of digital media use. However, these mediation practices focus only on the home environment, including the digital media present in the home, and do not necessarily extend beyond the home to other learning environments where adolescents use digital media for school work and other activities. This poses the question of who mediates adolescents' digital media use outside the home environment. The learning environment can be a ''loose platform'' where an adolescent can maximise digital media use, considering that there is often no restriction on content or on the time allotted to digital media during school hours. That is to say, an adolescent can play the ''bad boy'' online at school, where there is little or no restriction on digital media use, and be exposed to online risks, yet play the ''good boy'' at home because of ''heavy'' parental mediation.
This is why parental mediation practices have been ineffective: a parent may not be able to track an adolescent's digital media use across the diversity of contexts, platforms, and avenues in which adolescents use digital media. This study argues that, due to the diverse nature of digital media technology, parents may not be able to monitor the 'whereabouts' of their children in the digital space, because adolescent digital media usage is not confined to the home environment but extends to other learning environments such as schools. This calls for urgent attention on the part of teachers to understand the intricacies of how digital media continue to shape the world in which young children are developing and learning. It is, therefore, imperative for parents to liaise with their children's schools to mediate digital media use during school hours. The implications of parent-teacher mediation practices are discussed. The article concludes by suggesting that third-party mediation by teachers in schools and other learning environments should be encouraged, and that future research needs to consider the emergent strategy of teacher-child mediation and its policy implications for both the home and learning environments.

Keywords: digital media, digital age, parent mediation, third party mediation

Procedia PDF Downloads 141
439 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal.
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of eight, among which are the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
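The authors implement their formulas as C# LINQ expression trees. As an illustrative sketch only (the representation, names, and example formula below are our own, not the paper's code), the core ideas of tree-shaped formulas, protected division, and the pre-order flattening on which the genetic operators act can be shown in Python:

```python
# A formula is a nested tuple: (op, left, right), ('var', i), or ('const', c).
OPS = {
    'add': lambda a, b: a + b,
    'sub': lambda a, b: a - b,
    'mul': lambda a, b: a * b,
    'div': lambda a, b: a / b if b != 0 else 1.0,  # protected division
}

def evaluate(tree, client):
    """Evaluate a formula against one client's property vector."""
    kind = tree[0]
    if kind == 'var':
        return client[tree[1]]   # one variable per applicant property
    if kind == 'const':
        return tree[1]
    return OPS[kind](evaluate(tree[1], client), evaluate(tree[2], client))

def pre_order(tree, out=None):
    """Flatten a tree into the node list the genetic operators work on."""
    if out is None:
        out = []
    out.append(tree)
    if tree[0] in OPS:
        pre_order(tree[1], out)
        pre_order(tree[2], out)
    return out

# Hypothetical evolved score: age * 0.5 - loan_duration
formula = ('sub', ('mul', ('var', 0), ('const', 0.5)), ('var', 1))
print(evaluate(formula, [30, 2.0]))   # age 30, 2-year loan
print(len(pre_order(formula)))        # node count in pre-order
```

A variable index that never appears in a surviving formula corresponds to a client property the evolved model ignores, which is the dimensionality-reduction effect described in the abstract.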

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 101
438 Psychological Variables Predicting Academic Achievement in Argentinian Students: Scales Development and Recent Findings

Authors: Fernandez liporace, Mercedes Uriel Fabiana

Abstract:

Academic achievement in high school and college students is currently a matter of concern. National and international assessments show high schoolers as low achievers, and local statistics indicate alarming dropout percentages at this educational level. Even so, 80% of those students intend to attend higher education. On the other hand, admission to public national universities is free and not selective by examination procedures. Though initial registrations are massive (307,894 students), only 50% of freshmen pass their first-year classes, and 23% achieve a degree. Low performance is a common problem. Hence, freshman adaptation and adjustment, dropout, and low academic achievement arise as agenda topics. Besides, the hinge between high school and college must be examined in depth in order to build an integrated and successful path from one educational stratum to the other. Psychological research addresses the situation along two main lines. The first concerns psychometric scales: designing and/or adapting tests and examining their technical properties and theoretical validity (e.g., academic motivation, learning strategies, learning styles, coping, perceived social support, parenting styles and parental consistency, paradoxical personality as correlated with creative skills, psychopathological symptomatology). The second emphasizes relationships among the variables measured by those scales, facing the formulation and testing of predictive models of academic achievement and establishing differences by sex, age, educational level (high school vs college), and career. Pursuing these goals, several studies were carried out in recent years, reporting findings and producing assessment technology useful for detecting students academically at risk as well as good achievers.
Multiple samples were analysed, totaling more than 3500 participants (2500 from college and 1000 from high school), in descriptive, correlational, group-difference, and explicative designs. A brief summary of the most relevant results is presented. Providing information to design specific interventions according to every learner’s features and his/her educational environment comes up as a mid-term accomplishment. Furthermore, that information might be helpful for adapting curricula by career, as well as for implementing special didactic strategies differentiated by sex and personal characteristics.

Keywords: academic achievement, higher education, high school, psychological assessment

Procedia PDF Downloads 351
437 Reducing Pressure Drop in Microscale Channel Using Constructal Theory

Authors: K. X. Cheng, A. L. Goh, K. T. Ooi

Abstract:

The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to tap microscale heat transfer effects in macro geometries while overcoming cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state, single-phase liquid flow conditions. However, while heat transfer coefficient values above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by an undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance of the Tree insert and the Constructal fin (Cfin) insert were studied experimentally, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve an at least comparable increment in both heat transfer coefficient and pressure drop, if not a higher increment in the former. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement reduced to less than 5 percent at high Reynolds numbers. On the other hand, the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region.
More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the enhancement of heat transfer was more than 30 percent, with a 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increments in both heat transfer coefficient and pressure drop were observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a potential future study towards a bigger stride in energy efficiency at much lower costs.

Keywords: constructal theory, enhanced heat transfer, microchannel, pressure drop

Procedia PDF Downloads 318
436 Behavioral and EEG Reactions in Native Turkic-Speaking Inhabitants of Siberia and Siberian Russians during Recognition of Syntactic Errors in Sentences in Native and Foreign Languages

Authors: Tatiana N. Astakhova, Alexander E. Saprygin, Tatyana A. Golovko, Alexander N. Savostyanov, Mikhail S. Vlasov, Natalia V. Borisova, Alexandera G. Karpova, Urana N. Kavai-ool, Elena D. Mokur-ool, Nikolay A. Kolchanov, Lubomir I. Aftanas

Abstract:

The aim of the study is to compare behavioral and EEG reactions of Turkic-speaking inhabitants of Siberia (Tuvinians and Yakuts) and Russians during the recognition of syntax errors in native and foreign languages. 63 healthy aboriginal residents of the Tyva Republic, 29 inhabitants of the Sakha (Yakutia) Republic, and 55 Russians from Novosibirsk participated in the study. All participants completed a linguistic task in which they had to find a syntax error in written sentences. Russian participants completed the task in Russian and in English. Tuvinian and Yakut participants completed the task in Russian, English, and Tuvinian or Yakut, respectively. EEGs were recorded while participants solved the tasks. For Russian participants, EEGs were recorded using 128 channels; the electrodes were placed according to the extended International 10-10 system, and the signals were amplified using Neuroscan (USA) amplifiers. For Tuvinians and Yakuts, EEGs were recorded using 64 channels and Brain Products (Germany) amplifiers. In all groups, 0.3-100 Hz analog filtering and a 1000 Hz sampling rate were used. Response speed and error-recognition accuracy were used as parameters of behavioral reactions. The event-related potential (ERP) components P300 and P600 were used as indicators of brain activity. The accuracy of solving tasks and response speed in Russians were higher for Russian than for English. The P300 amplitudes in Russians were higher for English; the P600 amplitudes in the left temporal cortex were higher for the Russian language. Both Tuvinians and Yakuts showed no difference in accuracy between tasks in Russian and in their respective national languages (Tuvinian and Yakut). However, the response speed was faster for tasks in Russian than for tasks in their national language. Tuvinians and Yakuts showed low accuracy in English, but their response speed was higher for English than for Russian and the national languages.
In Tuvinians, there were no differences in the P300 and P600 amplitudes or in cortical topology between Russian and Tuvinian, but there was a difference for English. In Yakuts, the P300 and P600 amplitudes and the topology of ERPs for Russian were the same as those Russians showed for Russian. In Yakuts, brain reactions during Yakut and English comprehension did not differ and reflected foreign-language comprehension, while Russian comprehension reflected native-language comprehension. We found that the Tuvinians recognized both Russian and Tuvinian as native languages and English as a foreign language, whereas the Yakuts recognized both English and Yakut as foreign languages and only Russian as a native language. According to the questionnaire, both Tuvinians and Yakuts use the national language as a spoken language, whereas they do not use it for writing. This may be why Yakuts perceive written Yakut as a foreign language while perceiving written Russian as native.

Keywords: EEG, language comprehension, native and foreign languages, Siberian inhabitants

Procedia PDF Downloads 520
435 Volunteered Geographic Information Coupled with Wildfire Fire Progression Maps: A Spatial and Temporal Tool for Incident Storytelling

Authors: Cassandra Hansen, Paul Doherty, Chris Ferner, German Whitley, Holly Torpey

Abstract:

Wildfire is a natural and inevitable occurrence, yet changing climatic conditions have increased the severity, frequency, and risk to human populations in the wildland/urban interface (WUI) of the Western United States. Rapid dissemination of accurate wildfire information is critical to both the Incident Management Team (IMT) and the affected community. With the advent of increasingly sophisticated information systems, GIS can now be used as a web platform for sharing geographic information in new and innovative ways, such as virtual story map applications. Crowdsourced information can be extraordinarily useful when coupled with authoritative information. Information abounds in the form of social media, emergency alerts, radio, and news outlets, yet many of these resources lack a spatial component when first distributed. In this study, we describe how twenty-eight volunteer GIS professionals across nine Geographic Area Coordination Centers (GACC) sourced, curated, and distributed Volunteered Geographic Information (VGI) from authoritative social media accounts focused on disseminating information about wildfires and public safety. The combination of fire progression maps with VGI incident information helps answer three critical questions about an incident: where the fire started, how and why it behaved in an extreme manner, and how we can learn from the incident's story to respond to and prepare for future fires in the area. By adding a spatial component to the shared information, this team has been able to visualize wildfire starts in an interactive map that answers these questions in a more intuitive way. Additionally, long-term social and technical impacts on communities are examined in relation to situational awareness of the disaster, through map layers and agency links, the number of views in a particular region of a disaster, community involvement, and the sharing of this critical resource.
Combined with a GIS platform and disaster VGI applications, this workflow and information become invaluable to communities within the WUI and bring spatial awareness for disaster preparedness, response, mitigation, and recovery. This study highlights progression maps as the ultimate storytelling mechanism through incident case studies and demonstrates how VGI and sophisticated applied cartographic methodology make this an indispensable resource for authoritative information sharing.

Keywords: storytelling, wildfire progression maps, volunteered geographic information, spatial and temporal

Procedia PDF Downloads 154
434 High-Performance Thin-layer Chromatography (HPTLC) Analysis of Multi-Ingredient Traditional Chinese Medicine Supplement

Authors: Martin Cai, Khadijah B. Hashim, Leng Leo, Edmund F. Tian

Abstract:

Analysis of traditional Chinese medicine (TCM) supplements has always been a laborious task, particularly in the case of multi-ingredient formulations. Traditionally, herbal extracts are analysed using one or a few marker compounds. In recent years, however, pharmaceutical companies have been introducing health supplements of TCM active ingredients to cater to the needs of consumers in today's fast-paced society. As such, new problems arise in composition identification as well as quality analysis. In most products or supplements formulated with multiple TCM herbs, the chemical composition and nature of each raw material differ greatly from the others in the formulation. This results in a requirement for individual analytical processes in order to identify the marker compounds in the various botanicals. Thin-layer chromatography (TLC) is a simple, cost-effective, yet well-regarded method for the analysis of natural products, both as a Pharmacopeia-approved method for identification and authentication of herbs and as a great analytical tool for the discovery of chemical compositions in herbal extracts. Recent technical advances introduced High-Performance TLC (HPTLC), where, with the help of automated equipment and improvements in chromatographic materials, both quality and reproducibility are greatly improved, allowing for highly standardised analysis with greater detail. Here we report an industrial consultancy project with ONI Global Pte Ltd for the analysis of LAC Liver Protector, a TCM formulation aimed at improving liver health. The aim of this study was to identify 4 key components of the supplement using HPTLC, following protocols derived from Chinese Pharmacopeia standards.
By comparing the TLC profiles of the supplement to extracts of the herbs listed on the label, this project proposes a simple and cost-effective analysis of the presence of the 4 marker compounds in the multi-ingredient formulation using 4 different HPTLC methods. With the increasing trend of small and medium-sized enterprises (SMEs) bringing natural products and health supplements to the market, it is crucial that the quality of both raw materials and end products be well assured for the protection of consumers. With the technology of HPTLC, science can be incorporated to help SMEs with their quality control, thereby ensuring product quality.

Keywords: traditional Chinese medicine supplement, high performance thin layer chromatography, active ingredients, product quality

Procedia PDF Downloads 261
433 Determination of Cyanotoxins from Leeukraal and Klipvoor Dams

Authors: Moletsane Makgotso, Mogakabe Elijah, Marrengane Zinhle

Abstract:

The quality of South Africa’s water resources is increasingly weakened by eutrophication, which deteriorates their usability. Thirty-five percent of freshwater resources are eutrophic to hypertrophic, including grossly enriched reservoirs that go beyond the globally accepted definition of hypertrophy. Failing infrastructure adds to the problem of contaminated urban runoff, which constitutes an important fraction of flows to inland reservoirs, particularly in the non-coastal, economic heartland of the country. Eutrophication threatens the provision of potable and irrigation water in the country because of the dependence on freshwater resources. Eutrophic water reservoirs increase water treatment costs and lead to unsuitability for recreational purposes and to health risks for humans and animals due to algal proliferation. Eutrophication is caused by high concentrations of phosphorus and nitrogen in water bodies. In South Africa, Microcystis and Anabaena are widely distributed cyanobacteria, with Microcystis being the dominant bloom-forming cyanobacterial genus associated with toxin production. Two impoundments, the Klipvoor and Leeukraal dams, were selected as they are mainly used for fishing, recreational, agricultural and, to some extent, potable water purposes. The total oxidized nitrogen and total phosphorus concentrations were determined as the causative nutrients for eutrophication. Chlorophyll a and total microcystins were measured, and cyanobacteria were identified, as indicators of cyanobacterial infestation. The orthophosphate concentration was determined by subjecting the samples to digestion and filtration, followed by spectrophotometric analysis of total and dissolved phosphates using Aquakem kits. Total oxidized nitrogen was analysed by filtration followed by spectrophotometric analysis.
Chlorophyll a was quantified spectrophotometrically by measuring absorbance before and after acidification. Microcystins were detected using the Quantiplate Microcystin Kit, and cyanobacterial species were identified microscopically. The Klipvoor dam was found to be hypertrophic throughout the study period, as the mean chlorophyll a concentration was 269.4 µg/l, which exceeds the mean value for the hypertrophic state; the mean total phosphorus concentration was > 0.130 mg/l, and the total microcystin concentration was > 2.5 µg/l throughout the study. The most predominant algal species was Microcystis. The Leeukraal dam was found to be mesotrophic with the potential of becoming eutrophic, as the mean chlorophyll a concentration was 18.49 µg/l, the mean total phosphorus > 0.130 mg/l, and the total microcystin concentration < 0.16 µg/l. The cyanobacterial species identified in Leeukraal have been classified as those that do not pose a potential risk to any impoundment. Microcystis was present throughout the sampling period and dominant during the warmer seasons. The high nutrient concentrations led to the dominance of Microcystis, resulting in high levels of microcystins and rendering the impoundments, particularly Klipvoor, undesirable for utilisation.
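The before/after-acidification readings described above are typically reduced with a monochromatic, Lorenzen-type formula. The sketch below is an assumption about the calculation, since the abstract does not state its constants; the factor 26.7 is the standard value for 90% acetone extracts read at 665 nm.

```python
def chlorophyll_a_ug_per_l(a665_before, a665_after,
                           extract_ml, sample_l, path_cm=1.0):
    """Monochromatic chlorophyll-a estimate (Lorenzen-type formula).

    a665_before / a665_after : absorbance at 665 nm before and after
                               acidification (acid degrades chlorophyll a
                               to phaeophytin, so the drop in absorbance
                               isolates the chlorophyll signal)
    extract_ml : volume of the acetone extract, mL
    sample_l   : volume of water filtered, L
    path_cm    : cuvette path length, cm
    Factor 26.7 is an assumption (standard for 90% acetone extracts).
    """
    return (26.7 * (a665_before - a665_after) * extract_ml
            / (sample_l * path_cm))

# e.g. absorbance 0.210 -> 0.120, 10 mL extract from a 0.5 L sample
print(round(chlorophyll_a_ug_per_l(0.210, 0.120, 10.0, 0.5), 1))
```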

Keywords: nitrogen, phosphorus, cyanobacteria, microcystins

Procedia PDF Downloads 262
432 The Protection of Artificial Intelligence (AI)-Generated Creative Works Through Authorship: A Comparative Analysis Between the UK and Nigerian Copyright Experience to Determine Lessons to Be Learnt from the UK

Authors: Esther Ekundayo

Abstract:

The nature of AI-generated works makes it difficult to identify an author. Some scholars have suggested that all the players involved in a work's creation should be allocated authorship according to their respective contributions: from the programmer who creates and designs the AI, to the investor who finances it, to the user who most likely ends up creating the work in question. Others have suggested that this issue may be resolved by the UK computer-generated works (CGW) provision under Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). However, under UK and Nigerian copyright law, only human-created works are recognised, and this is usually assessed based on their originality. This simply means that the work must have been created as a result of its author’s creative and intellectual abilities and not copied. Such works are literary, dramatic, musical and artistic works, and they have recently been a topic of discussion with regard to generative artificial intelligence (generative AI). Unlike Nigerian law, the UK CDPA recognises computer-generated works and vests their authorship in the human who made the arrangements necessary for their creation. However, 'making the necessary arrangements' was interpreted in Nova Productions Ltd v Mazooma Games Ltd similarly to the traditional authorship principle, which requires the creator's skill to prove originality. Some commentators note that computer-generated works complicate this issue and recommend that AI-generated works enter the public domain, as authorship cannot be allocated to the AI itself. Additionally, the UKIPO, recognising these issues in line with the growing AI trend, launched a public consultation in 2022 that considered whether computer-generated works should be protected at all and why, and, if not, whether a new right with a different scope and term of protection should be introduced.
However, it concluded that the issue of computer-generated works would be revisited, as AI was still in its early stages. Conversely, given the recent developments in this area with regard to Generative AI systems such as ChatGPT, Midjourney, DALL-E and AIVA, amongst others, which can produce human-like copyright creations, it is important to examine the relevant issues, which have the potential to alter traditional copyright principles as we know them. Considering that the UK and Nigeria are both common law jurisdictions but with slightly differing approaches to this area, this research seeks to answer the following questions by comparative analysis: 1) Who is the author of an AI-generated work? 2) Is the UK's CGW provision worthy of emulation by Nigerian law? 3) Would a sui generis law be capable of protecting AI-generated works and their authors in both jurisdictions? This research further examines possible barriers to the implementation of the new law in Nigeria, such as limited technical expertise and a lack of awareness among policymakers, amongst others.

Keywords: authorship, artificial intelligence (AI), generative AI, computer-generated works, copyright, technology

Procedia PDF Downloads 63
431 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps can now provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands for rendering the images displayed by the currently running Web app, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU processes the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more heavily demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already deteriorated by the retarded execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme specialized for the Web app runtime environment. This approach entails two technical challenges: identifying the bottleneck resource, and determining the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU; it is thus on the critical path that determines the quality of user experience and is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is below 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
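The decision logic of the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the 90%/60% thresholds, the one-step adjustment, and the weighted average come from the abstract, while the class name, the smoothing weight `alpha`, and the action strings are assumptions.

```python
class BottleneckAwareGovernor:
    """Sketch of the bottleneck detector: decides which processor's
    performance level to lower, based on the Window Manager's CPU
    utilization (a proxy for the critical path)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight of the newest sample in the moving average
        self.avg_util = 0.0     # smoothed Window Manager utilization (0.0-1.0)

    def update(self, wm_utilization):
        # The weighted average prevents excessive sensitivity and fluctuation.
        self.avg_util = self.alpha * wm_utilization + (1 - self.alpha) * self.avg_util
        return self.decide()

    def decide(self):
        # Threshold rules as stated in the abstract.
        if self.avg_util > 0.90:
            return "lower_cpu_one_step"
        elif self.avg_util < 0.60:
            return "lower_gpu_one_step"
        return "hold"
```

In a real governor, the returned action would map to a frequency step of the corresponding device, applied gradually and re-evaluated at each periodic utilization check.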

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 178
430 Leuco Dye-Based Thermochromic Systems for Application in Temperature Sensing

Authors: Magdalena Wilk-Kozubek, Magdalena Rowińska, Krzysztof Rola, Joanna Cybińska

Abstract:

Leuco dye-based thermochromic systems are classified as intelligent materials because they exhibit thermally induced color changes. Thanks to this feature, they are mainly used as temperature sensors in many industrial sectors. For example, placing a thermochromic material on a chemical reactor may warn about exceeding the maximum permitted temperature for a chemical process. Usually, two components, a color former and a developer, are needed to produce a system with an irreversible color change. The color former is an electron-donating (proton-accepting) compound such as a fluoran leuco dye. The developer is an electron-accepting (proton-donating) compound such as an organic carboxylic acid. When the developer melts, the color former-developer complex is created and the thermochromic system becomes colored. Typically, the melting point of the applied developer determines the temperature at which the color change occurs. When the lactone ring of the color former is closed, the dye is in its colorless state; ring opening, induced by the addition of a proton, converts the dye to its colored state. Since the color former and the developer are often solids, they can be incorporated into polymer films to facilitate their practical use in industry. The objective of this research was to fabricate a leuco dye-based thermochromic system that irreversibly changes color upon reaching a temperature of 100°C. For this purpose, a benzofluoran leuco dye (as color former) and phenoxyacetic acid (as developer, with a melting point of 100°C) were introduced into polymer films during a drop-casting process. The film preparation process was optimized in order to obtain thin films with appropriate properties such as transparency, flexibility and homogeneity.
Among the optimized factors were the concentrations of the benzofluoran leuco dye and phenoxyacetic acid; the type, average molecular weight and concentration of the polymer; and the type and concentration of the surfactant. The selected films, containing the benzofluoran leuco dye and phenoxyacetic acid, were combined by mild heat treatment. Structural characterization of single and combined films was carried out by FTIR spectroscopy, morphological analysis was performed by optical microscopy and SEM, phase transitions were examined by DSC, color changes were investigated by digital photography and UV-Vis spectroscopy, and emission changes were studied by photoluminescence spectroscopy. The resulting thermochromic system is colorless at room temperature, but upon reaching 100°C the developer melts and the system turns irreversibly pink. It could therefore be used as an additional sensor to warn against the boiling of cooling water in water-cooled power plants. Currently used electronic temperature indicators are prone to faults and unwanted third-party actions. The sensor constructed in this work is transparent, so it can go unnoticed by an outsider while constituting a reliable reference for the person responsible for the apparatus.

Keywords: color developer, leuco dye, thin film, thermochromism

Procedia PDF Downloads 81
429 Children and Communities Benefit from Mother-Tongue Based Multi-Lingual Education

Authors: Binay Pattanayak

Abstract:

The multilingual state of Jharkhand is home to more than 19 tribal and regional languages, used by more than 33 communities in the state. The state has declared 12 of these languages official state languages. However, schools in the state do not recognize any of these community languages, even in the early grades. Children, who speak their mother tongues at home, in the local market and on the playground, find it very difficult to understand their teachers and textbooks in school, and fail to acquire basic literacy and numeracy skills in the early grades. Out of frustration due to lack of comprehension, the majority of children leave school; Jharkhand sees the highest early-grade dropout rate in India. To address this, the state, under the guidance of the author, designed a mother tongue-based pre-school education programme named Bhasha Puliya, along with bilingual picture dictionaries in 9 tribal and regional mother tongues. This contributed significantly to children's school readiness. Following this, the state designed a mother tongue-based multilingual education programme (MTB-MLE) for its multilingual context. The author guided textbook development in five tribal (Santhali, Mundari, Ho, Kurukh and Kharia) and two regional (Odia and Bangla) languages. Teachers and community members were trained for MTB-MLE in around 1,000 schools in the relevant language pockets. Community resource groups were constituted, along with their academic calendars, in each school to promote storytelling, singing, painting, dancing, riddles, etc. with community support. On the one hand, this created rich learning environments for children. On the other hand, the communities have discovered great potential in developing a wide variety of learning materials for children in their own mother tongues, using their local stories, songs, riddles, paintings, idioms, skits, etc., as a process of literary, cultural and technical enrichment.
The majority of children are acquiring strong early-grade reading skills (basic literacy and numeracy) in grades I-II, thereby becoming well prepared for higher studies. In a phased manner, they learn Hindi and English after 4-5 years of MTB-MLE, building on these foundational language learning skills. Community members have started designing new books and audio-visual learning materials in their mother tongues, seeing great potential for cultural and technological rejuvenation.

Keywords: community resource groups, MTB-MLE, multilingual, socio-linguistic survey, learning

Procedia PDF Downloads 181
428 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction

Authors: Lucas Peries, Rolla Monib

Abstract:

The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has been struggling with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry, offering time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and to develop BIM-based solutions to alleviate or eliminate these hindrances. The research objectives include identifying and analysing key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and the factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire was used to collect primary data from construction industry professionals, allowing for feedback on and evaluation of the proposed BIM-based solutions.
The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the identified issues from the literature review align with the opinions of industry professionals, and the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of modular and prefabricated building systems' limitations and proposes BIM-based solutions to overcome these limitations. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.

Keywords: building information modelling, modularisation, prefabrication, technology

Procedia PDF Downloads 79
427 Exploration Tools for Tantalum-Bearing Pegmatites along Kibara Belt, Central and Southwestern Uganda

Authors: Sadat Sembatya

Abstract:

Tantalum metal is used to address the capacitance challenges of 21st-century technology growth. Tantalum is rarely found in its elemental form; it often occurs with niobium and the radioactive elements thorium and uranium, and industrial processes are required to extract pure tantalum. Its deposits are mainly oxide-associated, occurring in Ta-Nb oxides such as tapiolite, wodginite, ixiolite and rutile; pyrochlore-supergroup minerals are of minor importance. The stability and chemical inertness of tantalum make it a valuable substance for laboratory equipment and a substitute for platinum. Each period of tantalum ore formation is characterized by specific mineralogical and geochemical features. Compositions of columbite-group minerals (CGM) are variable: Fe-rich types predominate in the Man Shield (Sierra Leone), the Congo Craton (DR Congo), the Kamativi Belt (Zimbabwe) and the Jos Plateau (Nigeria). Mn-rich columbite-tantalite is typical of the Alto Ligonha Province (Mozambique), the Arabian-Nubian Shield (Egypt, Ethiopia) and the Tantalite Valley pegmatites (southern Namibia). There are large compositional variations through Fe-Mn fractionation, followed by Nb-Ta fractionation. These are typical for pegmatites usually associated with very coarse quartz-feldspar-mica granites, such as the young granitic systems of the Kibara Belt of Central Africa and the Older Granites of Nigeria. Unlike 'simple' Be-pegmatites, most Ta-Nb-rich pegmatites have the most complex zoning; hence we need systematic exploration tools to find and rapidly assess the potential of different pegmatites. The pegmatites exist both as known deposits (e.g., abandoned mines) and as exposed or buried pegmatites.
We investigate rocks and minerals to trace possible effects of hydrothermal alteration, mainly for exposed pegmatites; conduct mineralogical studies to find evidence of gradual replacement; and use geochemistry to report the availability of trace elements, which are good indicators of mineralisation. Pegmatites are not good geophysical responders, which excludes the geophysics option. For more advanced prospecting, we first bulk-sample different zones to establish their grades and characteristics, then build a pilot test plant (because of the large samples) to aid in the quantitative characterization of zones, and then drill to reveal the distribution and extent of the different zones, though not necessarily grade, due to the nugget effect. Rapid assessment tools are needed to assess grade and degree of fractionation in order to 'rule in' or 'rule out' a given pegmatite for future work. Pegmatite exploration is also unique, high-risk and expensive; hence the right traceability system and certification for the 3Ts are highly needed.

Keywords: exploration, mineralogy, pegmatites, tantalum

Procedia PDF Downloads 130
426 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation

Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma

Abstract:

Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to correctly plan the supply chain. When it comes to estimating agricultural yield, especially in arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is still on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still used to perform yield estimation, and the success of this process depends mainly on the expertise of the 'estimator agent'. The present study proposes a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed to generate a point cloud and a digital surface model. At the processing stage, a machine vision workflow was implemented to extract points corresponding to fruits from the whole-tree point cloud, cluster them into fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated and the statistical distribution of fruit sizes can be generated. This latter property, information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can be performed over only some trees, so drone data were integrated in order to estimate the yield over a whole orchard.
To achieve this, features derived from the drone digital surface model were linked to the laser scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, compared to yield estimates from the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and could be extended to other fruit trees.
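The regression step described above can be sketched as a least-squares fit: TLS-derived yields for a subset of trees train a model that predicts yield from drone DSM features. The abstract does not specify the model form or the features, so the linear model, the feature choices (canopy height, canopy area), and all numbers below are illustrative assumptions.

```python
import numpy as np

def fit_yield_model(dsm_features, tls_yields):
    """Least-squares fit: yield ~ intercept + w . features."""
    X = np.column_stack([np.ones(len(dsm_features)), dsm_features])
    w, *_ = np.linalg.lstsq(X, tls_yields, rcond=None)
    return w

def predict_yield(w, dsm_features):
    """Apply the fitted coefficients to new trees' DSM features."""
    X = np.column_stack([np.ones(len(dsm_features)), dsm_features])
    return X @ w

# Hypothetical training data: per-tree DSM features [canopy height (m),
# canopy area (m^2)] with TLS-estimated yields (kg/tree).
train_X = np.array([[2.1, 8.5], [2.8, 11.0], [3.2, 13.4], [2.5, 9.8]])
train_y = np.array([38.0, 55.0, 67.0, 46.0])

w = fit_yield_model(train_X, train_y)
orchard_pred = predict_yield(w, np.array([[2.9, 12.0]]))  # a tree seen only by drone
```

Once fitted on the TLS-surveyed trees, the model can be applied to every tree in the orchard, since the drone covers the whole field.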

Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling

Procedia PDF Downloads 123
425 Possibilities of Psychodiagnostics in the Context of Highly Challenging Situations in Military Leadership

Authors: Markéta Chmelíková, David Ullrich, Iva Burešová

Abstract:

The paper maps the possibilities and limits of diagnosing selected personality and performance characteristics of military leadership and psychology students in the context of coping with challenging situations. Individuals vary greatly inter-individually in their ability to manage extreme situations effectively, yet existing diagnostic tools are often criticized, mainly for their low predictive power. Nowadays, every modern army focuses primarily on the systematic minimization of potential risks, including the prediction of desirable forms of behavior and of the performance of military commanders. The context of military leadership is well known for its life-threatening nature. It is therefore crucial to research stress load in the specific context of military leadership in order to anticipate human failure in managing extreme situations. The aim of this pilot study, using a 24-hour experiment, is to verify whether a specific combination of psychodiagnostic methods can predict which people are well equipped to cope with increased stress load. In our pilot study, we conducted the 24-hour experiment with an experimental group (N=13) in a bomb shelter and a control group (N=11) in a classroom. Both groups comprised military leadership students (N=11) and psychology students (N=13) and were equalized in terms of study type and gender. Participants were administered the following test battery of personality characteristics: Big Five Inventory 2 (BFI-2), Short Dark Triad (SD-3), Emotion Regulation Questionnaire (ERQ), Fatigue Severity Scale (FSS), and Impulsive Behavior Scale (UPPS-P). This battery was administered only once, at the beginning of the experiment. Alongside this, participants were administered a test battery consisting of the Test of Attention (d2) and the Bourdon test four times in total, at six-hour intervals.
To better simulate an extreme situation, we tried to induce sleep deprivation: participants were required to try not to fall asleep throughout the experiment. Despite the assumption that a stay in an underground bomb shelter would manifest in impaired cognitive performance, this expectation was significantly confirmed in only one measurement, which can be interpreted as marginal in the context of multiple testing. This finding is a fundamental insight into the issue of stress management in extreme situations, which is crucial for effective military leadership. The results suggest that a 24-hour stay in a shelter, together with sleep deprivation, does not simulate sufficient stress to be reflected in the level of cognitive performance. In light of these findings, it would be interesting in future work to extend the diagnostic battery with physiological indicators of stress, such as heart rate, stress score, physical stress, and mental stress.

Keywords: bomb shelter, extreme situation, military leadership, psychodiagnostic

Procedia PDF Downloads 78
424 Heat Stress a Risk Factor for Poor Maternal Health- Evidence from South India

Authors: Vidhya Venugopal, Rekha S.

Abstract:

Introduction: Climate change and the growing frequency of higher average temperatures and heat waves have detrimental health effects, especially for certain vulnerable groups with limited socioeconomic status (SES) or limited physiological capacity to adapt to or endure high temperatures. Little research has been conducted on the effects of heat stress on pregnant women and fetuses in tropical regions such as India. Very high ambient temperatures may worsen adverse pregnancy outcomes (APOs) and are a major worry in the climate change scenario. The relationship between rising temperatures and APOs must be better understood in order to design more effective interventions. Methodology: We conducted an observational cohort study involving 865 pregnant women in various districts of Tamil Nadu between 2014 and 2021. Physiological heat strain indicators (HSI), namely morning and evening core body temperature (CBT) and urine specific gravity (USG), were monitored using an infrared thermometer and a refractometer, respectively. A validated, modified version of the HOTHAPS questionnaire was used to collect self-reported health symptoms. A follow-up was undertaken with the mothers to collect information regarding birth outcomes and APOs, such as spontaneous abortions, stillbirths, preterm birth (PTB), birth abnormalities, and low birth weight (LBW). Major findings of the study: According to our findings, ambient temperatures (mean WBGT°C) were substantially elevated (>28°C) for approximately 46% of women performing moderate daily activities. These women experienced dehydration and heat-related complaints at markedly higher rates (82% versus 43%). 34% of the women had USG >1.020, which is indicative of dehydration. Among APOs, spontaneous abortions were prevalent at 2.2%, stillbirth/preterm birth/birth abnormalities at 2.2%, and low birth weight at 16.3%.
With exposure to WBGT >28°C, the incidence of miscarriage or spontaneous abortion rose by approximately 2.7 times (95% CI: 1.1-6.9). In addition, higher WBGT exposures were associated with a 1.4-fold increased risk of unfavorable birth outcomes (95% Confidence Interval [CI]: 1.02-1.09). The risk of spontaneous abortion was 2.8 times higher among women who conceived during the hotter months (February-September) than among those who conceived in the cooler months (October-January) (95% CI: 1.04-7.4). The positive relationships between ambient heat and APOs found in this study call for further exploration of the underlying factors through extensive cohort studies, to generate information enabling the formulation of policies that can effectively protect these women against excessive heat stress for enhanced maternal and fetal health.

Keywords: heat exposures, community, pregnant women, physiological strain, adverse outcome, interventions

Procedia PDF Downloads 67
423 Conceptualization and Assessment of Key Competencies for Children in Preschools: A Case Study in Southwest China

Authors: Yumei Han, Naiqing Song, Xiaoping Yang, Yuping Han

Abstract:

This study explores the conceptualization of the key competencies that children are expected to develop in three-year preschools (ages 3-6) and the practices for assessing such key competencies in China. Assessment of children's development has been placed at the center of the early childhood education quality evaluation system in China. In the context of China's education reform centered on developing students' key competencies, defining and selecting key competencies for preschool children is of great significance: these competencies lay a solid foundation for children's lifelong learning, and they drive reforms in curriculum and instruction, teacher development, and quality evaluation in early childhood education. Based on sense-making theory, this study adopted the perspectives and grassroots voices of multiple stakeholders (early childhood educators, parents, evaluation administrators, and scholars in the early childhood education field) to conceptualize and operationalize key competencies for preschool children in Southwest China. Drawing on child development theories, Chinese and international literature on child development and key competencies, and the key competency frameworks of UNESCO, the OECD and other nations, the authors designed a two-phase sequential mixed-methods study to address three main questions: (a) How is early childhood key competency defined or labeled in the literature and by different stakeholders? (b) Based on the definitions explicated in the literature and the stakeholder surveys, what domains and components are regarded as constituting the key competency framework for children in three-year preschools in China? (c) How have early childhood key competencies been assessed and measured, and how do such assessment and measurement contribute to enhancing the quality of early childhood development?
In the first phase, a series of focus group surveys was conducted among the different types of stakeholders around the research questions. In the second phase, based on the coding of the participants' answers, together with the literature synthesis findings, a questionnaire survey was designed and conducted to select the most commonly expected components of preschool children's key competencies. Semi-structured open questions were also included in the questionnaire so that participants could add competencies beyond the checklist. Preliminary findings show broad agreement on the significance and necessity of conceptualizing and assessing key competencies for preschool children, and a key competency framework composed of 7 domains and 25 indicators was constructed. Meanwhile, the findings also reveal issues in current assessment practices, such as a lack of effective assessment tools and a lack of teacher capacity in applying the tools to evaluate children and advance their development accordingly. Finally, the authors put forth suggestions and implications for China and international communities regarding restructuring the early childhood key competency framework and promoting child-development-centered reform in early childhood education quality evaluation and development.

Keywords: assessment, conceptualization, early childhood education quality in China, key competencies

Procedia PDF Downloads 231
422 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries

Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman

Abstract:

There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but for researchers looking to find comparisons across multiple datasets or specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English language and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although there are some unique challenges posed by working with Earth data. One is the sheer size of the databases – it would be infeasible to replicate or analyze all the data stored by an organization like The National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from metadata available for datasets in NASA’s Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of elastic search applied to the metadata. 
This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and interpreting topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. Perhaps more importantly, it establishes the foundation for a platform through which common English can access knowledge that previously required considerable effort and experience. By making these public data truly accessible to the broader public, this work has the potential to transform environmental understanding, engagement, and action.
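A minimal sketch of the linking step described above: topics mined from dataset metadata are connected to a plain-English query using one of the standard NLP methods the abstract mentions, here bag-of-words cosine similarity. The topic names, descriptions, and threshold are illustrative toy values, not actual NASA Earthdata metadata fields.

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words term counts for a whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_query(query, metadata_topics, threshold=0.2):
    """Return (topic, score) edges connecting the query node to topic nodes."""
    q = bow(query)
    edges = [(t, cosine(q, bow(desc))) for t, desc in metadata_topics.items()]
    return [(t, s) for t, s in sorted(edges, key=lambda e: -e[1]) if s >= threshold]

# Hypothetical topics extracted from dataset metadata.
topics = {
    "sea_surface_temperature": "ocean temperature sea surface kelvin grid",
    "aerosol_optical_depth": "atmosphere aerosol optical depth column",
}
print(link_query("what is the temperature of the ocean surface", topics))
```

In the full system, the surviving edges would become part of the knowledge graph, and the matched topics would be used to request the corresponding raw data from the source database.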

Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems

Procedia PDF Downloads 130
421 Empowering South African Female Farmers through Organic Lamb Production: A Cost Analysis Case Study

Authors: J. M. Geyser

Abstract:

Lamb is a popular meat throughout the world, particularly in Europe, the Middle East and Oceania. However, the conventional lamb industry faces challenges related to environmental sustainability, climate change, consumer health and dwindling profit margins. This has stimulated an increasing demand for organic lamb, which is perceived to be more environmentally sustainable and to offer superior quality, taste and nutritional value; it is appealing to farmers, including small-scale and female farmers, as it often commands a premium price. Despite its advantages, organic lamb production presents challenges, a significant hurdle being the high production costs encompassing organic certification, lower stocking rates, higher mortality rates and marketing costs. These costs affect the profitability and competitiveness of organic lamb producers, particularly female and small-scale farmers, who often encounter additional obstacles such as limited access to resources and markets. Therefore, this paper examines the cost of producing organic lamb and its impact on female farmers, and raises the research question: “Is organic lamb production the saving grace for female and small-scale farmers?” Objectives include estimating and comparing the production costs and profitability of organic lamb production with those of conventional lamb production, analyzing influencing factors, and assessing opportunities and challenges for female and small-scale farmers. The hypothesis states that organic lamb production can be a viable and beneficial option for female and small-scale farmers, provided that they can overcome high production costs and access premium markets. The study uses a mixed-method approach, combining qualitative and quantitative data. The qualitative data involve semi-structured interviews with ten female and small-scale farmers engaged in organic lamb production in South Africa.
The interviews covered topics such as farm characteristics, practices, cost components, mortality rates, income sources and empowerment indicators. The quantitative data comprise secondary published information and primary data from a female farmer. The research findings indicate that when a female farmer moves from conventional to organic lamb production, the costs in the first year of organic production exceed those of conventional production by over 100%. This is due to lower stocking rates and higher mortality rates in the organic system. However, costs start decreasing in the second year as stocking rates increase, owing to manure applications on grazing, and mortality rates fall, owing to better worm resistance in the flock. In conclusion, this article sheds light on the economic dynamics of organic lamb production, focusing particularly on its impact on female farmers. To empower female farmers and promote sustainable agricultural practices, it is imperative to understand the cost structures and profitability of organic lamb production.

Keywords: cost analysis, empowerment, female farmers, organic lamb production

Procedia PDF Downloads 52
420 Applying Push Notifications with Behavioral Change Strategies in Fitness Applications: A Survey of User's Perception Based on Consumer Engagement

Authors: Yali Liu, Maria Avello Iturriagagoitia

Abstract:

Background: Fitness applications (apps) are among the most popular mobile health (mHealth) apps. These apps can help prevent or control health issues such as obesity, which has been one of the most serious public health challenges in the developed world in recent decades. Compared with traditional interventions such as face-to-face treatment, fitness apps are a cheaper and more convenient way to intervene in physical activity and health behaviors. Nevertheless, fitness apps tend to have high abandonment rates and low levels of user engagement, so sustaining users' engagement over time is challenging. In fact, previous research identifies a variety of strategies – goal-setting, self-monitoring, coaching, etc. – for promoting fitness and health behavior change. These strategies can influence users’ perseverance and self-monitoring of the program and favor their adherence to routines that involve long-term behavioral change. However, commercial fitness apps rarely incorporate these strategies into their design, leading to a lack of engagement with the apps. Most of today’s mobile services and brands engage their users proactively via push notifications: visual or auditory alerts that inform mobile users about a wide range of topics and constitute an effective and personal means of communication between the app and the user. One purpose of this article is to examine how behavior change strategies can be applied through push notifications. Purpose: This study aims to better understand the influence that effective use of push notifications, combined with behavioral change strategies, has on users’ engagement with fitness apps. The secondary objectives are 1) to discuss sociodemographic differences in the utilization of push notifications in fitness apps, and 2) to determine the impact of each strategy on customer engagement.
Methods: The study uses a model combining Consumer Engagement Theory and UTAUT2 to conduct an online survey among current users of fitness apps. The questionnaire assessed attitudes toward each behavioral change strategy and sociodemographic variables. Findings: The results show the positive effect of push notifications on the generation of consumer engagement, and the differing impacts of each strategy on customer engagement across population groups. Conclusions: Fitness apps that employ behavior change strategies have a positive impact on users’ usage time and customer engagement. Experts in behavioral theory should be involved in designing fitness apps alongside technical designers.
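A minimal sketch of how behavior change strategies might drive notification content follows. The strategy names come from the abstract (goal-setting, self-monitoring, coaching); the message templates and their parameters are invented examples, not any actual fitness app's design.

```python
# Strategy -> notification template. Placeholders are filled per user.
TEMPLATES = {
    "goal_setting": "You're {pct}% of the way to your {goal} goal - keep going!",
    "self_monitoring": "You logged {count} workouts this week. Tap to review your trend.",
    "coaching": "Coach tip: {tip}",
}

def build_notification(strategy, **context):
    """Render a push notification body for a given behavior change strategy."""
    if strategy not in TEMPLATES:
        raise ValueError(f"unknown strategy: {strategy}")
    return TEMPLATES[strategy].format(**context)
```

In a production app the template choice would also depend on the sociodemographic and engagement variables the survey measures, which is exactly the personalization question the study's findings speak to.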

Keywords: behavioral change, customer engagement, fitness app, push notification, UTAUT2

Procedia PDF Downloads 113
419 Three-Dimensional Model of Leisure Activities: Activity, Relationship, and Expertise

Authors: Taekyun Hur, Yoonyoung Kim, Junkyu Lim

Abstract:

Previous work on leisure activities categorized activities arbitrarily and subjectively while focusing on a single dimension (e.g. active-passive, individual-group). To overcome these problems, this study proposed a matrix model of Korean leisure activities that considers their multidimensional features, comprising 3 main factors and 6 sub-factors: (a) Activity (physical, mental), (b) Relationship (quantity, quality), (c) Expertise (entry barrier, possibility of improving). We developed items for measuring the degree of each dimension for every leisure activity. Using the developed Leisure Activities Dimensions (LAD) questionnaire, we investigated the presented dimensions of a total of 78 leisure activities that have recently been enjoyed by most Koreans (e.g. watching movies, taking a walk, watching media). The study sample consisted of 1,348 people (726 men, 658 women) ranging in age from teenagers to elderly people in their seventies. The study gathered 60 responses for each leisure activity, a total of 4,680 observations, which were used for statistical analysis. First, this study used confirmatory factor analysis to compare the fit of the 3-factor model (Activity, Relationship, Expertise) with that of the 6-factor model (physical activity, mental activity, relational quantity, relational quality, entry barrier, possibility of improving). Based on several goodness-of-fit indicators, the 6-factor model fit the data better. This result indicates that a sufficiently rich set of dimensions (six in our study) is needed to capture the specific attributes of each leisure activity. In addition, the 78 leisure activities were cluster-analyzed using the scores calculated from the 6-factor model, which yielded 8 leisure activity groups. Cluster 1 (e.g. group sports, group musical activities) and Cluster 5 (e.g. individual sports) had generally higher scores on all dimensions than the others, but Cluster 5 had lower relational quantity than Cluster 1. In contrast, Cluster 3 (e.g. SNS, shopping) and Cluster 6 (e.g. playing a lottery, taking a nap) had low scores overall, though Cluster 3 showed medium levels of relational quantity and quality. Cluster 2 (e.g. machine operating, handwork/invention) required high expertise and mental activity but low physical activity. Cluster 4 showed high mental activity and relational quantity despite low expertise. Cluster 7 (e.g. touring, joining festivals) required moderate degrees of physical activity and relation but low expertise. Lastly, Cluster 8 (e.g. meditation, information searching) showed high mental activity. Even though the clusters in our study had a few similarities with preexisting taxonomies of leisure activities, they were clearly distinct from them. Unlike preexisting taxonomies, which had been created subjectively, we sorted the 78 leisure activities based on objective scores on the 6 dimensions. We also found that some leisure activities that used to belong to the same leisure group fell into different clusters (e.g. field ball sports, net sports) because of their different features. In other words, the results provide a different perspective for leisure activities research and are helpful for understanding the varied characteristics of leisure participants.
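The clustering step can be illustrated with a small sketch: k-means over six-dimensional activity scores. The activity scores below are invented toy values, not the study's data, and k=2 is used here rather than the study's eight clusters.

```python
ACTIVITIES = {
    # activity: (physical, mental, rel. quantity, rel. quality,
    #            entry barrier, possibility of improving) - toy scores
    "group sports":      (9, 6, 9, 8, 7, 8),
    "individual sports": (9, 6, 3, 4, 7, 8),
    "taking a nap":      (1, 1, 1, 1, 1, 1),
    "playing a lottery": (1, 2, 1, 1, 1, 1),
}

def dist2(a, b):
    """Squared Euclidean distance between two score vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=10):
    """Plain k-means with deterministic init (first k points as centroids)."""
    centroids = points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [
            tuple(sum(d) / len(g) for d in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return [min(range(k), key=lambda i: dist2(p, centroids[i])) for p in points]

names = list(ACTIVITIES)
labels = kmeans([ACTIVITIES[n] for n in names], k=2)
clusters = dict(zip(names, labels))
```

Even this toy separates high-engagement activities from passive ones on the pooled dimensions, mirroring the contrast the study reports between Clusters 1/5 and Cluster 6.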

Keywords: leisure, dimensional model, activity, relationship, expertise

Procedia PDF Downloads 286
418 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks

Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba

Abstract:

The ability of vehicles to communicate with other vehicles (V2V), with the physical (V2I) and network (V2N) infrastructures, with pedestrians (V2P), etc. – collectively known as V2X (Vehicle-to-Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have approved, in Release 14, cellular connectivity to support V2X communication (known as LTE-V2X). An LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. For V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it critical to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM) and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network.
However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control-plane signaling overloads, as well as privacy preservation issues, and therefore cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, allowing security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services over the LTE network while meeting V2X security requirements.
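Two of the attack vectors named above, modification and replay, can be illustrated with a deliberately simplified sketch. Real LTE-V2X security relies on AKA and certificate-based credentials rather than a pre-shared key, so the scheme below is a didactic toy, not the paper's proposal: an HMAC over a sequence number and payload rejects tampered messages, and a freshness check on the sequence number rejects replays.

```python
import hashlib
import hmac

def sign(shared_key: bytes, seq: int, payload: str) -> str:
    """Compute a MAC binding the payload to its sequence number."""
    return hmac.new(shared_key, f"{seq}|{payload}".encode(),
                    hashlib.sha256).hexdigest()

class V2XReceiver:
    def __init__(self, shared_key: bytes):
        self.key = shared_key
        self.seen_seq = set()  # sequence numbers already accepted

    def accept(self, seq: int, payload: str, tag: str) -> bool:
        """Accept a message only if its MAC verifies and seq is fresh."""
        expected = sign(self.key, seq, payload)
        if not hmac.compare_digest(expected, tag):
            return False  # modified or forged message
        if seq in self.seen_seq:
            return False  # replayed message
        self.seen_seq.add(seq)
        return True
```

A pre-shared symmetric key cannot provide the non-repudiation or large-scale key management that vehicular PKI (or the Namecoin-style distributed approach the paper proposes) addresses; the sketch only shows why integrity and freshness checks are part of any V2X security design.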

Keywords: authentication, long term evolution, security, vehicle-to-everything

Procedia PDF Downloads 153
417 ‘Doctor Knows Best’: Reconsidering Paternalism in the NICU

Authors: Rebecca Greenberg, Nipa Chauhan, Rashad Rehman

Abstract:

Paternalism, in its traditional form, seems largely incompatible with Western medicine. In contrast, Family-Centred Care, a partial response to historically authoritative paternalism, carries its own challenges, particularly when operationalized as family-directed care. Specifically, in neonatology, decision-making is left entirely to Substitute Decision Makers (most commonly parents). Most models of shared decision-making employ both the parents’ and the medical team’s perspectives, but do not recognize the inherent asymmetry of information and experience – they ask parents to act like physicians in evaluating technical data while encouraging physicians to refrain from strong medical opinions and proposals. They also do not fully appreciate the difficulties in adjudicating which perspective to prioritize and, moreover, how to mitigate disagreement. Introducing a mild form of paternalism can harness the unique skill sets both parents and clinicians bring to shared decision-making and ultimately work towards decisions in the best interest of the child. The notion expressed here is that within the model of shared decision-making, mild paternalism is prioritized inasmuch as optimal care is prioritized. This mild form of paternalism is known as Beneficent Paternalism; it justifies our encouragement for physicians to root down in their own medical expertise and propose treatment plans informed by that expertise, standards of care, and the parents’ values. This does not mean that we forget that paternalism was historically justified on ‘beneficent’ grounds; however, our recommendation is that a re-integration of mild paternalism is appropriate within the current Western healthcare climate. Through illustrative examples from the NICU, this paper explores the appropriateness and merits of Beneficent Paternalism and ultimately its use in promoting family-centred care, patients’ best interests, and reduced moral distress.
A distinctive feature of the NICU is the fact that communication regarding a patient’s treatment is exclusively done with substitute decision-makers and not the patient, i.e., the neonate themselves. This leaves the burden of responsibility entirely on substitute decision-makers and the clinical team; the patient in the NICU does not have any prior wishes, values, or beliefs that can guide decision-making on their behalf. Therefore, the wishes, values, and beliefs of the parent become the map upon which clinical proposals are made, giving extra weight to the family’s decision-making responsibility. This leads to why Family Directed Care is common in the NICU, where shared decision-making is mandatory. However, the zone of parental discretion is not as all-encompassing as it is currently considered; there are appropriate times when the clinical team should strongly root down in medical expertise and perhaps take the lead in guiding family decision-making: this is just what it means to adopt Beneficent Paternalism.

Keywords: care, ethics, expertise, NICU, paternalism

Procedia PDF Downloads 122
416 Challenges for Competency-Based Learning Design in Primary School Mathematics in Mozambique

Authors: Satoshi Kusaka

Abstract:

The term ‘competency’ is attracting considerable scholarly attention worldwide with the advance of globalization in the 21st century and with the arrival of a knowledge-based society. In the current world environment, familiarity with varied disciplines is regarded to be vital for personal success. The idea of a competency-based educational system was mooted by the ‘Definition and Selection of Competencies (DeSeCo)’ project that was conducted by the Organization for Economic Cooperation and Development (OECD). Further, attention to this topic is not limited to developed countries; it can also be observed in developing countries. For instance, the importance of a competency-based curriculum was mentioned in the ‘2013 Harmonized Curriculum Framework for the East African Community’, which recommends key competencies that should be developed in primary schools. The introduction of such curricula and the reviews of programs are actively being executed, primarily in the East African Community but also in neighboring nations. Taking Mozambique as a case in point, the present paper examines the conception of ‘competency’ as a target of frontline education in developing countries. It also aims to discover the manner in which the syllabus, textbooks and lessons, among other things, in primary-level math education are developed and to determine the challenges faced in the process. This study employs the perspective of competency-based education design to analyze how the term ‘competency’ is defined in the primary-level math syllabus, how it is reflected in the textbooks, and how the lessons are actually developed. ‘Practical competency’ is mentioned in the syllabus, and the description of the term lays emphasis on learners' ability to interactively apply socio-cultural and technical tools, which is one of the key competencies that are advocated in OECD's ‘Definition and Selection of Competencies’ project. 
However, most of the content of the textbooks pertains to ‘basic academic ability’, and in actual classroom practice, teachers often impart lessons straight from the textbooks. It is clear that the aptitude of teachers and their classroom routines are greatly dependent on the cultivation of their own ‘practical competency’ as it is defined in the syllabus. In other words, there is great divergence between the ‘syllabus’, which is the intended curriculum, and the content of the ‘textbooks’. In fact, the material in the textbooks should serve as the bridge between the syllabus, which forms the guideline, and the lessons, which represent the ‘implemented curriculum’. Moreover, the results obtained from this investigation reveal that the problem can only be resolved through the cultivation of ‘practical competency’ in teachers, which is currently not sufficient.

Keywords: competency, curriculum, mathematics education, Mozambique

Procedia PDF Downloads 170
415 Sustainable Technology and the Production of Housing

Authors: S. Arias

Abstract:

New housing developments, and the technological changes they imply, reshape the lifestyles of their residents, along with new family structures and forms of work arising from the particular needs of specific groups of people; these involve different techniques for occupying, organizing, equipping and using a given territory. Owning one's own space is increasingly important, and cities face the challenge of meeting this demand, as well as providing the energy, water and waste removal required in the construction and occupation of new human settlements. To date, these demands and needs have not been fully met, resulting in cities that grow without control, poorly used land, and congested avenues and streets. Buildings and dwellings have an important impact on the environment and on people's health; environmental quality therefore links human comfort to the sustainable development of natural resources. Applied to architecture, this concept involves incorporating new technologies throughout the construction process of a dwelling and changing the habits of developers and users, which demands a greater effort in planning for energy savings and thus reducing greenhouse gas (GHG) emissions, depending on the geographical location of the development. Since the techniques for occupying territory are not the same everywhere, it must be taken into account that they depend on the geographical, social, political, economic and climatic-environmental circumstances of the place, which are modified according to the degree of development reached. The analysis undertaken to assess the sustainability of a site must include estimates of the energy used for artificial air conditioning and lighting.
Likewise, it is necessary to diagnose the availability and distribution of the water resources used for hygiene and for cooling artificially air-conditioned spaces, as well as the waste resulting from these technological processes. Based on the results obtained through the different stages of the analysis, it is possible to perform an energy audit and propose sustainability recommendations for architectural spaces aimed at energy saving, rational use of water and optimization of natural resources. This can be carried out through the development of a sustainable building code containing technical recommendations adapted to the regional characteristics of each study site. Such codes would lay the groundwork for building regulations applicable to new human settlements, seeking to generate quality, protection and safety within them. These building regulations must be consistent with other national, state and municipal regulations, such as laws on human settlements, urban development and zoning.

Keywords: building regulations, housing, sustainability, technology

Procedia PDF Downloads 335
414 Economic Efficiency of Cassava Production in Nimba County, Liberia: An Output-Oriented Approach

Authors: Kollie B. Dogba, Willis Oluoch-Kosura, Chepchumba Chumo

Abstract:

In Liberia, many agricultural households cultivate cassava either for subsistence or to generate farm income. Cassava farming is concentrated in Nimba, a north-eastern county that borders two other economies: the Republics of Côte d’Ivoire and Guinea. With high demand for cassava output and products in emerging Asian markets, coupled with the objective of Liberia’s agricultural policies to increase the competitiveness of valued agricultural crops, there is a need to examine the level of resource-use efficiency for many agricultural crops. However, information on the efficiency of many crops, including cassava, is scarce. Hence, applying an output-oriented method, this study assesses the economic efficiency of cassava farmers in Nimba County, Liberia. A multi-stage sampling technique was employed to generate the study sample. Data on on-farm attributes and socio-economic and institutional factors were collected from 216 cassava farmers. Stochastic frontier models of production and revenue, using the Translog functional form, were used to determine the level of revenue efficiency and its determinants. The results showed that most of the cassava farmers are male (60%). Many of the farmers are married, engaged or living with a spouse (83%), with a mean household size of nine persons. Farmland is predominantly obtained by inheritance (95%), the average farm size is 1.34 hectares, and most cassava farmers did not access agricultural credit (76%) or extension services (91%). The mean cassava output per hectare is 1,506.02 kg, corresponding to an average revenue of L$23,551.16 (Liberian dollars). Empirical results showed that the revenue efficiency of cassava farmers varies from 0.1% to 73.5%, with a mean revenue efficiency of 12.9%.
This indicates that, on average, there is a vast potential of 87.1% to increase the economic efficiency of cassava farmers in Nimba by improving technical and allocative efficiency. Among the significant determinants of revenue efficiency, age and group membership had negative effects on the revenue efficiency of cassava production, while farming experience, access to extension, formal education, and the average wage rate had positive effects. The study recommends setting up and incentivizing farmer field schools for cassava farmers, primarily to share farming experience and to learn robust, sustainable cultivation techniques. Also, farm managers and farmers should consider a fixed wage rate in labor contracts for all stages of cassava farming.
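The output-oriented efficiency measure can be sketched as follows: revenue efficiency is observed revenue divided by the frontier (maximum attainable) revenue predicted by the estimated model. The Translog coefficients, input values, and the resulting frontier below are invented for illustration only; they are not the study's estimates, and a real stochastic frontier analysis would also separate statistical noise from inefficiency via maximum likelihood.

```python
import math

def translog_frontier(inputs, b0, b, B):
    """Frontier revenue from a Translog form:
    ln R* = b0 + sum_i b_i ln x_i + 0.5 * sum_ij B_ij ln x_i ln x_j."""
    lx = [math.log(x) for x in inputs]
    ln_r = b0 + sum(bi * xi for bi, xi in zip(b, lx))
    ln_r += 0.5 * sum(B[i][j] * lx[i] * lx[j]
                      for i in range(len(lx)) for j in range(len(lx)))
    return math.exp(ln_r)

def revenue_efficiency(observed_revenue, frontier_revenue):
    """Output-oriented efficiency score in (0, 1]."""
    return observed_revenue / frontier_revenue

# Illustrative farm: inputs = (land in ha, labour days); coefficients invented.
frontier = translog_frontier((1.34, 120.0), b0=8.0,
                             b=[0.4, 0.5],
                             B=[[0.02, -0.01], [-0.01, 0.03]])
eff = revenue_efficiency(23551.16, frontier)  # observed / frontier revenue
```

A score of, say, 0.13 (the study's mean) means the farm earns 13% of the revenue the frontier predicts for its inputs, so closing the gap requires both technical and allocative improvements.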

Keywords: economic efficiency, frontier production and revenue functions, Nimba County, Liberia, output-oriented approach, revenue efficiency, sustainable agriculture

Procedia PDF Downloads 112
413 An Informative Marketing Platform: Methodology and Architecture

Authors: Martina Marinelli, Samanta Vellante, Francesco Pilotti, Daniele Di Valerio, Gaetanino Paolone

Abstract:

Any development in web marketing technology requires changes in information engineering to identify instruments and techniques suitable for producing software applications for informative marketing. Moreover, for large web solutions, designing an interface that enables human interaction is a complex process that must bridge informative marketing requirements and the developed solution. A user-friendly interface in web marketing applications is crucial for a successful business. The paper introduces mkInfo - a software platform that implements informative marketing. Informative marketing is a new interpretation of marketing which places information at the center of every marketing action. The creative team includes software engineering researchers who have recently authored an article on automatic code generation. The authors created the mkInfo software platform to generate informative marketing web applications. For each web application, it is possible to automatically implement an opt-in page, a landing page, a sales page, and a thank-you page: one only needs to insert the content. mkInfo implements an autoresponder that sends mail according to a predetermined schedule. The mkInfo platform also includes e-commerce for a product or service. A stakeholder can access any opt-in page and get basic information about a product or service. If they want to know more, they will need to provide an e-mail address to access a landing page that will trigger an e-mail sequence providing complete information about the product or service. From this point on, the stakeholder becomes a user and is able to purchase the product or related services through the mkInfo platform. This paper suggests a possible definition of informative marketing, illustrates its basic principles, and finally details the mkInfo platform that implements it. This paper also offers some informative marketing models, which are implemented in the mkInfo platform.
Informative marketing can be applied to products or services; a web application must be realized for each product or service. The mkInfo platform enables the producer of a product or service to send information about it to all stakeholders. In conclusion, the technical contributions of this paper are: a different interpretation of marketing based on information; a modular architecture for web applications, particularly those with standard features such as information storage, exchange, and delivery; multiple models for implementing informative marketing; and a software platform enabling the implementation of such models in a web application. Future research aims to enable stakeholders to provide information about a product or a service, so that the information gathered includes both the producer’s and the stakeholders’ points of view. The purpose is to create an all-inclusive management system of the knowledge regarding a specific product or service: a system that includes everything about the product or service and is able to address even unexpected questions.
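The autoresponder behavior described above can be sketched simply: once a stakeholder opts in, a fixed e-mail sequence is scheduled relative to the opt-in time. The sequence subjects and day offsets below are invented examples, not mkInfo's actual configuration.

```python
from datetime import datetime, timedelta

# (days after opt-in, subject line) - an illustrative sequence.
SEQUENCE = [
    (0, "Welcome - here is the basic information you asked for"),
    (2, "Complete product details"),
    (5, "How to purchase through the platform"),
]

def schedule_sequence(opt_in_time):
    """Return (send_time, subject) pairs for one opted-in stakeholder."""
    return [(opt_in_time + timedelta(days=offset), subject)
            for offset, subject in SEQUENCE]
```

A real autoresponder would persist this schedule and dispatch each mail when its send time arrives; the point here is only that the sequence is anchored to the opt-in event, which is what turns a stakeholder into a tracked user.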

Keywords: informative marketing, opt in page, software platform, web application

Procedia PDF Downloads 112