Search results for: computer/notebook
212 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System
Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina
Abstract:
There is now considerable interest in non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static or adiabatically slowly varying Coulomb fields created by localized extended sources with Z > Z_cr. Such effects have attracted considerable theoretical and experimental activity, since in 3+1 QED for Z > Z_cr,1 ≈ 170 a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Essentially similar effects should be expected in both 2+1 D (planar graphene-based hetero-structures) and 1+1 D (the one-dimensional ‘hydrogen ion’). This report is devoted to the study of such essentially non-perturbative vacuum effects for supercritical Dirac-Coulomb systems in 1+1D and 2+1D, with the main attention given to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in vacuum polarization for supercritical fields due to levels diving into the lower continuum, show up in the behavior of the vacuum energy even more clearly, demonstrating explicitly their possible role in the supercritical region. Both in 1+1D and 2+1D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Thereafter, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique, based on a special combination of analytical methods, computer algebra tools and numerical calculations, is applied.
It is shown that, for a wide range of the external source parameters (the charge Z and size R), the renormalized vacuum energy in the supercritical region can deviate significantly from the perturbative quadratic growth, up to pronouncedly decreasing behavior with jumps of (-2 x mc^2), which occur each time the next discrete level dives into the negative continuum. In the considered range of variation of Z and R, the vacuum energy behaves like ~ -Z^2/R in 1+1D and ~ -Z^3/R in 2+1D, reaching deeply negative values. Such behavior confirms the assumption of the transmutation of the neutral vacuum into a charged one, and thereby of the spontaneous positron emission accompanying the emergence of the next vacuum shell, required by total charge conservation. Finally, we note that the methods developed for the vacuum energy evaluation in 2+1 D could, with minimal complements, be carried over to the three-dimensional case, where the vacuum energy is expected to be ~ -Z^4/R and so could be competitive with the classical electrostatic energy of the Coulomb source.
Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization
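The jump structure described in the abstract can be summarized schematically. The notation below is ours, not the authors' derivation: each diving of the n-th discrete level into the negative continuum at its critical charge Z_cr,n lowers the renormalized vacuum energy by one electron-positron pair's rest energy, on top of the dimension-dependent asymptotics quoted above.

```latex
% Schematic summary (our notation, not the authors' formulas):
\Delta E_{\mathrm{vac}}\big|_{Z \to Z_{cr,n}} = -2\,mc^{2},
\qquad
E_{\mathrm{vac}} \sim -\frac{Z^{2}}{R}\ (1{+}1\mathrm{D}),\quad
E_{\mathrm{vac}} \sim -\frac{Z^{3}}{R}\ (2{+}1\mathrm{D}),\quad
E_{\mathrm{vac}} \sim -\frac{Z^{4}}{R}\ (3{+}1\mathrm{D},\ \text{expected}).
```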
Procedia PDF Downloads 202
211 AI Applications in Accounting: Transforming Finance with Technology
Authors: Alireza Karimi
Abstract:
Artificial Intelligence (AI) is reshaping various industries, and accounting is no exception. With the ability to process vast amounts of data quickly and accurately, AI is revolutionizing how financial professionals manage, analyze, and report financial information. In this article, we will explore the diverse applications of AI in accounting and its profound impact on the field. Automation of Repetitive Tasks: One of the most significant contributions of AI in accounting is automating repetitive tasks. AI-powered software can handle data entry, invoice processing, and reconciliation with minimal human intervention. This not only saves time but also reduces the risk of errors, leading to more accurate financial records. Pattern Recognition and Anomaly Detection: AI algorithms excel at pattern recognition. In accounting, this capability is leveraged to identify unusual patterns in financial data that might indicate fraud or errors. AI can swiftly detect discrepancies, enabling auditors and accountants to focus on resolving issues rather than hunting for them. Real-Time Financial Insights: AI-driven tools, using natural language processing and computer vision, can process documents faster than ever. This enables organizations to have real-time insights into their financial status, empowering decision-makers with up-to-date information for strategic planning. Fraud Detection and Prevention: AI is a powerful tool in the fight against financial fraud. It can analyze vast transaction datasets, flagging suspicious activities and reducing the likelihood of financial misconduct going unnoticed. This proactive approach safeguards a company's financial integrity. Enhanced Data Analysis and Forecasting: Machine learning, a subset of AI, is used for data analysis and forecasting. By examining historical financial data, AI models can provide forecasts and insights, aiding businesses in making informed financial decisions and optimizing their financial strategies. 
Artificial Intelligence is fundamentally transforming the accounting profession. From automating mundane tasks to enhancing data analysis and fraud detection, AI is making financial processes more efficient, accurate, and insightful. As AI continues to evolve, its role in accounting will only become more significant, offering accountants and finance professionals powerful tools to navigate the complexities of modern finance. Embracing AI in accounting is not just a trend; it's a necessity for staying competitive in the evolving financial landscape.Keywords: artificial intelligence, accounting automation, financial analysis, fraud detection, machine learning in finance
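The anomaly-detection use case lends itself to a compact illustration. The sketch below is a generic, hypothetical example (not drawn from any accounting product described above): it flags ledger amounts that deviate strongly from the median, using the robust MAD scale rather than the standard deviation, so a single large outlier cannot inflate the scale and mask itself.

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of amounts far from the median, on the robust MAD scale."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread at all: nothing can be flagged robustly
    # 0.6745 rescales MAD to the standard deviation of a normal distribution
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

ledger = [120.0, 98.5, 110.0, 105.0, 99.0, 5400.0, 102.5]
print(flag_anomalies(ledger))  # [5] — the 5400.0 entry stands out
```

A z-score on the mean would miss this outlier in so small a sample, because the outlier itself inflates the standard deviation; the median/MAD version does not have that weakness.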
Procedia PDF Downloads 63
210 Enabling Self-Care and Shared Decision Making for People Living with Dementia
Authors: Jonathan Turner, Julie Doyle, Laura O’Philbin, Dympna O’Sullivan
Abstract:
People living with dementia should be at the centre of decision-making regarding goals for daily living. These goals include basic activities (dressing, hygiene, and mobility), advanced activities (finances, transportation, and shopping), and meaningful activities that promote well-being (pastimes and intellectual pursuits). However, there is limited involvement of people living with dementia in the design of technology to support their goals. A project is described that is co-designing intelligent computer-based support for, and with, people affected by dementia and their carers. The technology will support self-management, empower participation in shared decision-making with carers and help people living with dementia remain healthy and independent in their homes for longer. It includes information from the patient’s care plan, which documents medications, contacts, and the patient's wishes on end-of-life care. Importantly for this work, the plan can outline activities that should be maintained or worked towards, such as exercise or social contact. The authors discuss how to integrate care goal information from such a care plan with data collected from passive sensors in the patient’s home in order to deliver individualized planning and interventions for persons with dementia. A number of scientific challenges are addressed. The first is to co-design, with people living with dementia and their carers, computerized support for shared decision-making about care, while allowing the patient to share the care plan. The second is to develop a new and open monitoring framework with which to configure sensor technologies to collect data about whether the goals and actions specified in a person's care plan are being achieved.
This is developed top-down by associating care quality types and metrics elicited from the co-design activities with types of data that can be collected within the home, from passive and active sensors, and from the patient’s feedback collected through a simple co-designed interface. These activities and data will be mapped to appropriate sensors and technological infrastructure with which to collect the data. Third, the application of machine learning models to analyze data collected via the sensing devices in order to investigate whether and to what extent activities outlined via the care plan are being achieved. The models will capture longitudinal data to track disease progression over time; as the disease progresses and captured data show that activities outlined in the care plan are not being achieved, the care plan may recommend alternative activities. Disease progression may also require care changes, and a data-driven approach can capture changes in a condition more quickly and allow care plans to evolve and be updated.
Keywords: care goals, decision-making, dementia, self-care, sensors
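How care-plan goals might be checked against home sensor data can be sketched in a few lines. The sensor names (`door_motion`, `visitor_presence`) and the weekly-target scheme below are illustrative assumptions, not part of the project described above.

```python
from dataclasses import dataclass

@dataclass
class CareGoal:
    name: str           # activity from the care plan, e.g. "daily walk"
    sensor: str         # hypothetical sensor stream the goal is checked against
    weekly_target: int  # times per week the activity should be observed

def goals_at_risk(goals, weekly_counts):
    """Return names of goals whose observed activity fell short of the target."""
    return [g.name for g in goals
            if weekly_counts.get(g.sensor, 0) < g.weekly_target]

plan = [CareGoal("daily walk", "door_motion", 7),
        CareGoal("social contact", "visitor_presence", 2)]
observed = {"door_motion": 4, "visitor_presence": 3}
print(goals_at_risk(plan, observed))  # ['daily walk']
```

In a longitudinal setting, a run of under-target weeks would be the trigger for the care plan to suggest an alternative activity, as described in the abstract.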
Procedia PDF Downloads 169
209 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer
Authors: Feng-Sheng Wang, Chao-Ting Cheng
Abstract:
Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by cancer treatment. Four objectives were established in the optimization framework to evaluate the mortality of cancer cells under treatment while minimizing side effects causing toxicity-induced tumorigenesis in normal cells and keeping metabolic perturbations small. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. Nested hybrid differential evolution was then applied to solve the trilevel MDM problem, using two nutrient media in turn to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco’s Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway and the sphingolipid pathway. However, using Ham’s medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions of DMEM and Ham’s medium revealed that no cholesterol uptake reaction was included in DMEM.
Two additional media, i.e., DMEM with a cholesterol uptake reaction added and Ham’s medium with it removed, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis were identifiable when no cholesterol uptake reaction was included in the culture medium; however, they became unidentifiable when such a reaction was included.
Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution
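The max-min scalarization underlying fuzzy multi-objective decision-making can be illustrated with a minimal sketch. The candidate scores and linear membership functions below are invented for illustration; the study's actual objectives and solver (nested hybrid differential evolution over a trilevel problem) are far richer.

```python
def membership(value, worst, best):
    """Linear fuzzy membership degree: 0 at the worst objective value,
    1 at the best, clipped in between."""
    if best == worst:
        return 1.0
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def max_min_decision(candidates, objectives):
    """Max-min aggregation: pick the candidate whose weakest objective
    membership is largest -- the usual way a fuzzy multi-objective problem
    is reduced to a single maximization."""
    def overall(c):
        return min(membership(obj(c), worst, best)
                   for obj, worst, best in objectives)
    return max(candidates, key=overall)

# Hypothetical knockout candidates scored by (cancer growth kept,
# perturbation of normal metabolism); both are to be minimized,
# so worst = 1.0 and best = 0.0 for each objective.
candidates = [(0.1, 0.9), (0.3, 0.3), (0.8, 0.1)]
objectives = [(lambda c: c[0], 1.0, 0.0),
              (lambda c: c[1], 1.0, 0.0)]
print(max_min_decision(candidates, objectives))  # (0.3, 0.3) balances both
```

The first candidate kills the tumor model best but perturbs normal cells badly; max-min prefers the balanced second candidate, which is exactly the trade-off the four objectives in the abstract are designed to arbitrate.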
Procedia PDF Downloads 80
208 Coping Strategies of Female English Teachers and Housewives to Face the Challenges Associated to the COVID-19 Pandemic Lockdown
Authors: Lisseth Rojas Barreto, Carlos Muñoz Hernández
Abstract:
The COVID-19 pandemic led to many abrupt changes, including a prolonged lockdown, which brought work and personal challenges to populations worldwide. Among the most affected are women who are workers and housewives at the same time, especially those who are also parenting. These women were faced with the challenge of performing their usual varied roles during the lockdown from the same physical space, which inevitably had strong repercussions for each of them. This paper presents some results of a research study whose main objective was to examine the possible effects that the COVID-19 pandemic lockdown may have had on the work, social, family, and personal environments of female English teachers who are also housewives and, by extension, on the teaching and learning processes that they lead. Participants included five female English language teachers at a public foreign language school; all are married, and two of them have children. Similarly, we examined some of the coping strategies these teachers used to tackle the pandemic-related challenges in their different roles, especially in their language teaching role. Coping strategies are understood as a repertoire of behaviors in response to incidents that can be stressful for the subject, possible challenging events or situations that involve emotions, behaviors and decision-making, used in order to find meaning or a positive result (Lazarus & Folkman, 1986). Following a qualitative case-study design, we gathered the data through a survey and a focus group interview with the participant teachers, who work at a public language school in southern Colombia. Preliminary findings indicate that the circumstances that emerged as a result of the pandemic lockdown affected the participants in different ways, including financial, personal, family, health, and work-related issues.
Among the strategies that participants found valuable for dealing with the novel circumstances are the reorganization of household and work tasks and an increased awareness of time management for the household, work, and leisure. Additionally, we were able to observe that the participants faced the circumstances with a positive outlook. Finally, in order to cope with their teaching duties, some participants acknowledged their lack of computer and technology literacy for delivering classes online, which led them to seek support from their students or more knowledgeable peers. Others indicated that they used strategies such as self-directed learning to become acquainted with, and able to use, the different technological tools and web-based platforms available.
Keywords: coping strategies, language teaching, female teachers, pandemic lockdown
Procedia PDF Downloads 106
207 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site) – regions consisting of points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing – constructing segments from the polygons’ sides – and postprocessing – constructing each polygon’s locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon, and the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a resource for reducing computations. The article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be taken into account effectively. The solution is performed based on reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site has an orientation such that the interior of the polygon lies to the left of it. The proposed algorithm constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. Improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved due to the following fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons.
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with “medium” polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, not in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case – a set of sites formed by polygons.
Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
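As a point of comparison for a sweepline implementation, the loci of polygonal sites can be checked by brute force: classify each query point by its distance to the nearest polygon boundary. The sketch below is a reference oracle of that kind (useful for testing an implementation on sampled points), not the proposed algorithm itself.

```python
import math

def seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def nearest_site(p, polygons):
    """Index of the polygon site whose boundary is closest to p (brute force)."""
    def dist(poly):
        return min(seg_dist(p, poly[i], poly[(i + 1) % len(poly)])
                   for i in range(len(poly)))
    return min(range(len(polygons)), key=lambda i: dist(polygons[i]))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(4, 0), (5, 0), (4.5, 1)]
print(nearest_site((2, 0.5), [square, triangle]))  # 0: closer to the square
```

Sampling a grid of points this way reproduces the loci in O(grid x segments) time, which is exactly the cost the O(n log n) sweepline construction avoids.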
Procedia PDF Downloads 177
206 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth
Authors: Elham Soltani Dehnavi
Abstract:
In today’s compact cities, bringing daylight and fresh air into buildings is a significant challenge, but it also presents opportunities to reduce energy consumption by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between predicted and measured performance in buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations, and then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, in each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Some fixed parameters were chosen according to building codes and standards, and the simulations are done for Seattle, USA. The simulation results are presented as evaluation grids using thresholds for different metrics: Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus.
According to the results, some metrics change little along the room depth while others change significantly, so the evaluation grids can be overlapped to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metrics’ thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other parts of the room can be assigned to tasks that are not performed permanently, that need lower or higher amounts of daylight, or for which thermal comfort is less critical to user experience. The results can be summarized in a table to be used as a guideline by designers in the early stages of the design process.
Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort
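Overlapping the per-metric evaluation grids amounts to a logical AND across metrics. A minimal sketch, with two invented pass/fail grids standing in for the eight metrics described above:

```python
def comfort_zone(metric_grids):
    """Overlay per-metric pass/fail grids: a cell belongs to the comfort
    zone only if it passes every metric (visual and thermal alike)."""
    rows, cols = len(metric_grids[0]), len(metric_grids[0][0])
    return [[all(g[r][c] for g in metric_grids) for c in range(cols)]
            for r in range(rows)]

# Two toy 1x4 grids along the room depth: daylight fades with depth,
# while thermal comfort improves away from the window.
daylight = [[True, True, True, False]]
thermal  = [[False, True, True, True]]
print(comfort_zone([daylight, thermal]))  # [[False, True, True, False]]
```

The surviving middle cells are where sedentary areas would be located; in the study the same intersection is taken over all eight visual and thermal metrics.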
Procedia PDF Downloads 183
205 Dosimetric Comparison among Different Head and Neck Radiotherapy Techniques Using PRESAGE™ Dosimeter
Authors: Jalil ur Rehman, Ramesh C. Tailor, Muhammad Isa Khan, Jahnzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott
Abstract:
Purpose: The purpose of this analysis was to investigate the dose distributions of different techniques (3D-CRT, IMRT and VMAT) for head and neck cancer using a 3-dimensional dosimeter called the PRESAGETM dosimeter. Materials and Methods: Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck anthropomorphic phantom, with both the RPC standard insert and the PRESAGETM insert, were acquired separately with a Philips CT scanner, and both CT scans were exported via DICOM to the Pinnacle version 9.4 treatment planning system (TPS). Each plan was delivered twice to the RPC phantom, first containing the RPC standard insert holding TLD and film dosimeters, and then containing the PRESAGETM insert holding the 3-D dosimeter, using a Varian TrueBeam linear accelerator. After irradiation, the standard insert, including the point dose measurements (TLD) and planar Gafchromic® EBT film measurements, was read using the RPC standard procedure. The 3D dose distribution from PRESAGETM was read out with the Duke Midsized Optical Scanner dedicated to the RPC (DMOS-RPC). Dose volume histograms (DVH) and mean and maximal doses for organs at risk were calculated and compared among the head and neck techniques. The prescription dose was the same for all head and neck radiotherapy techniques, 6.60 Gy/fraction. Beam profile comparison and gamma analysis were used to quantify agreement among the film measurements, the PRESAGETM measurements and the calculated dose distributions. Quality assurance of all plans was performed using the ArcCHECK method. Results: VMAT delivered the lowest mean and maximum doses to organs at risk (spinal cord, parotid) of the three techniques. This dose distribution was verified by absolute dose measurements using the thermoluminescent dosimeter (TLD) system.
The central axial, sagittal and coronal planes were evaluated using 2D gamma map criteria (±5%/3 mm); the results were 99.82% (axial), 99.78% (sagittal), and 98.38% (coronal) for the VMAT plan, and the agreement between PRESAGE and Pinnacle was better than for the IMRT and 3D-CRT plans, excluding a 7 mm rim at the edge of the dosimeter. Profiles showed good agreement between film, PRESAGE and Pinnacle for all plans. 3D gamma analysis was performed for the PTV and OARs; VMAT and 3DCRT showed better agreement than IMRT. Conclusion: VMAT delivered lower mean and maximal doses to organs at risk and better PTV coverage during head and neck radiotherapy. The TLD, EBT film and PRESAGETM measurements suggest that VMAT is better for the treatment of head and neck cancer than IMRT and 3D-CRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD, PRESAGETM
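The gamma analysis used above can be sketched in one dimension. This simplified version compares sampled profiles point-to-point without the sub-grid interpolation of a production tool, and the ±5%/3 mm-style criteria below are applied to invented profile data, not the study's measurements.

```python
import math

def gamma_index(ref, meas, spacing, dose_tol, dta):
    """1-D gamma for two dose profiles sampled on the same grid.
    dose_tol: dose-difference criterion (absolute dose units);
    dta: distance-to-agreement criterion (same units as spacing)."""
    gammas = []
    for i, dm in enumerate(meas):
        # gamma = minimum combined dose/distance deviation over the reference
        best = min(math.hypot((i - j) * spacing / dta, (dm - dr) / dose_tol)
                   for j, dr in enumerate(ref))
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (the usual passing criterion)."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)

reference = [1.00, 2.00, 3.00, 2.00, 1.00]  # planned profile (Gy)
measured  = [1.00, 2.10, 3.00, 2.00, 1.00]  # measured profile (Gy)
# 5% of the 3 Gy maximum and 3 mm DTA, with 1 mm sample spacing
print(pass_rate(gamma_index(reference, measured, 1.0, 0.15, 3.0)))  # 1.0
```

The 0.1 Gy discrepancy at the second point passes because it is within 5% of the maximum dose; tightening `dose_tol` or `dta` would fail it, which is how the criteria trade off dose accuracy against spatial accuracy.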
Procedia PDF Downloads 395
204 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks
Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba
Abstract:
Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete’s movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete’s movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations and depth sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the labs of Martin Luther King at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications.
Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN
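The benefit of dilated convolutions for long-range dependencies can be quantified through the receptive field of a stack of stride-1 layers: each layer adds (kernel − 1) × dilation to the field. The layer configuration below is illustrative, not the project's actual network.

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 conv layers, each given as
    (kernel_size, dilation). Each layer widens the field by
    (kernel_size - 1) * dilation."""
    rf = 1
    for kernel, dilation in layers:
        rf += (kernel - 1) * dilation
    return rf

# Four 3x3 layers with dilations 1, 2, 4, 8 versus the same depth undilated:
print(receptive_field([(3, 1), (3, 2), (3, 4), (3, 8)]))  # 31
print(receptive_field([(3, 1)] * 4))                      # 9
```

Exponentially increasing dilations grow the receptive field exponentially with depth at constant parameter count, which is why dilated stacks suit joint localization where distant body parts constrain one another.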
Procedia PDF Downloads 54
203 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification
Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi
Abstract:
Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound to and combined with the search, or as an alternative to and replacement of the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters.
Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses the clustering algorithm bound to and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix
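The three-step pipeline (relevance filtering, grouping of redundant features, sequential selection of one representative per group) can be sketched on toy data. The greedy correlation-based version below is a simplification of the paper's method; the feature values and thresholds are invented.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length value lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(features, labels, relevance_min=0.3, redundancy_max=0.9):
    """Toy three-step selection: (1) drop features weakly correlated with the
    class labels; (2) rank the survivors by relevance; (3) greedily keep a
    feature only if it is not too correlated with one already kept, i.e.
    one representative per cluster of redundant features."""
    relevant = {n: v for n, v in features.items()
                if abs(pearson(v, labels)) >= relevance_min}
    ranked = sorted(relevant, key=lambda n: -abs(pearson(relevant[n], labels)))
    kept = []
    for name in ranked:
        if all(abs(pearson(relevant[name], relevant[k])) < redundancy_max
               for k in kept):
            kept.append(name)
    return kept

features = {"f1": [1.0, 2.0, 8.0, 9.0],   # discriminant
            "f2": [1.1, 2.0, 8.2, 9.1],   # nearly a copy of f1 (redundant)
            "f3": [5.0, 5.0, 5.0, 5.0]}   # irrelevant (constant)
print(select_features(features, [0, 0, 1, 1]))  # one of the f1/f2 pair survives
```

The constant feature falls at the relevance step, and only one of the two near-duplicates survives the redundancy step, mirroring the speed-up the abstract describes: whole clusters of redundant features drop out at each selection step.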
Procedia PDF Downloads 135
202 Thermoregulatory Responses of Holstein Cows Exposed to Intense Heat Stress
Authors: Rodrigo De A. Ferrazza, Henry D. M. Garcia, Viviana H. V. Aristizabal, Camilla De S. Nogueira, Cecilia J. Verissimo, Jose Roberto Sartori, Roberto Sartori, Joao Carlos P. Ferreira
Abstract:
Environmental factors adversely influence sustainability in livestock production systems. Among livestock industries, dairy herds are the most affected by heat stress. This clearly calls for the development of new strategies for mitigating heat stress, which should be based on the physiological and metabolic adaptations of the animal. In this study, we incorporated the effect of climate variables and heat exposure time on the thermoregulatory responses in order to clarify the adaptive mechanisms of bovine heat dissipation under intense thermal stress induced experimentally in a climate chamber. Non-lactating Holstein cows were contemporaneously and randomly assigned to thermoneutral (TN; n=12) or heat stress (HS; n=12) treatments for 16 days. Vaginal temperature (VT) was measured every 15 min with a microprocessor-controlled data logger (HOBO®, Onset Computer Corporation, Bourne, MA, USA) attached to a modified vaginal controlled internal drug release insert (Sincrogest®, Ourofino, Brazil). Rectal temperature (RT), respiratory rate (RR) and heart rate (HR) were measured twice a day (0700 and 1500h), and dry matter intake (DMI) was estimated daily. The ambient temperature and air relative humidity were 25.9±0.2°C and 73.0±0.8%, respectively, for TN, and 36.3±0.3°C and 60.9±0.9%, respectively, for HS. The respiratory rate of HS cows increased immediately after exposure to heat and was higher (76.02±1.70bpm; P<0.001) than that of TN cows (39.70±0.71bpm), followed by a rise in RT (39.87±0.07°C for HS versus 38.56±0.03°C for TN; P<0.001) and VT (39.82±0.10°C for HS versus 38.26±0.03°C for TN; P<0.001). A diurnal pattern was detected, with higher (P<0.01) afternoon temperatures than morning temperatures, and this effect was aggravated in HS cows. There was a decrease (P<0.05) in HR for HS cows (62.13±0.99bpm) compared to TN cows (66.23±0.79bpm), but the magnitude of the differences was not the same over time.
From the third day, DMI decreased in HS cows in an attempt to maintain homeothermy, while TN cows increased DMI (8.27±0.33 kg d-1 for HS versus 14.03±0.29 kg d-1 for TN; P<0.001). By regression analysis, RT and RR best reflected the response of cows to changes in the Temperature Humidity Index, and the climate variables of the previous day were more important than those of the current day in influencing the physiological parameters and DMI, with ambient temperature the most important factor. Comparison between acute (0 to 3 days) and chronic (13 to 16 days) exposure to heat stress showed a decrease in the slope of the regression equations for RR and DMI, suggesting an adaptive adjustment, but no change for RT. In conclusion, intense heat stress exerted a strong influence on the thermoregulatory mechanisms, but the acclimation process was only partial.
Keywords: acclimation, bovine, climate chamber, hyperthermia, thermoregulation
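A Temperature Humidity Index of the kind used in the regression analysis can be computed directly from the chamber conditions. The formula below is one commonly used THI variant (temperature in °C); the abstract does not state which variant the study used, so treat this as an assumption.

```python
def thi(temp_c, rel_humidity_pct):
    """Temperature-Humidity Index, one common formulation:
    THI = 0.8*T + (RH/100)*(T - 14.4) + 46.4, with T in Celsius."""
    return 0.8 * temp_c + (rel_humidity_pct / 100.0) * (temp_c - 14.4) + 46.4

print(round(thi(25.9, 73.0), 1))  # thermoneutral chamber (25.9 C, 73.0% RH)
print(round(thi(36.3, 60.9), 1))  # heat-stress chamber (36.3 C, 60.9% RH)
```

With this formulation the TN chamber sits near THI 75 and the HS chamber near THI 89, a separation consistent with the strong physiological contrast the study reports between the two treatments.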
Procedia PDF Downloads 218

201 Online Guidance and Counselling Needs and Preferences of University Undergraduates in a Nigerian University
Authors: Olusegun F. Adebowale
Abstract:
Research has confirmed that the emergence of information technology is significantly reflected in the field of psychology and related disciplines, due to its widespread availability at a reasonable price and its user-friendliness. It consequently affects ordinary life in many areas, such as shopping, advertising, corresponding and educating. Specifically, the innovations of computer technology have led to several new forms of communication, all with implications and applicability for counselling and psychotherapy practice. This is the premise on which online counselling is based. Most institutions of higher learning in Nigeria have established their presence on the Internet and have deployed a variety of applications through ICT. Some are currently attempting to include counselling services in such applications, in the belief that many counselling needs of students are likely to be met. This study therefore explored the challenges and preferences students present in online counselling interaction in a given Nigerian university, with a view to guiding new universities that may want to invest in these areas on the necessary preparations and referral requirements. The study is a mixed-methods study incorporating qualitative and quantitative methodologies to sample the preferences and concerns students express in online interaction. The sample comprised all 876 students who visited the university online counselling platform, whether voluntarily, by invitation or by referral. The instrument for data collection was the online counselling platform of the university, 'OAU Online counsellors'. The period of data collection spanned January 2011 to October 2012. Data were analysed quantitatively (using percentages and the Mann-Whitney U test) and qualitatively (using Interpretative Phenomenological Analysis (IPA)). The results showed that students prefer real-time chatting as their online medium of communicating with the online counsellor.
The majority of students resorted to e-mail when their efforts to use real-time chatting were thwarted. Also, students preferred voluntary entry into online counselling relationships over other modes of entry. The results further showed that the counselling needs presented by students during online counselling sessions were mainly in the areas of social interaction and academic/educational concerns. Academic concerns were the most prevalent, in the form of course offerings, studentship matters and academic finance matters. The personal/social concerns were in the form of students' welfare, career-related concerns and relationship matters. The study concludes that students' preferences include voluntary entry into online counselling, communication by real-time chatting and a specific focus on their academic concerns. It also recommends that every effort be made to encourage students' voluntary entry into online counselling through reliable and stable internet infrastructure able to support real-time chatting.

Keywords: online, counselling, needs, preferences
Procedia PDF Downloads 290

200 Construction and Validation of Allied Bank-Teller Aptitude Test
Authors: Muhammad Kashif Fida
Abstract:
In a bank, the teller's job (cash officer) is highly important and critical: at one end it requires soft and brisk customer service, and at the other, handling cash with integrity. It is always challenging for recruiters to hire competent and trustworthy tellers. To the author's knowledge, no comprehensive test is available to assist recruitment in Pakistan, so there is a dire need for a psychometric battery that could support the recruitment of potential candidates for the teller's position. The aim of the present study was therefore to construct the ABL-Teller Aptitude Test (ABL-TApT). Three major phases were designed following the American Psychological Association's guidelines. The first phase was qualitative: indicators of the test were explored by content analysis of a) tellers' job descriptions (n=3), b) interviews with senior tellers (n=6) and c) interviews with HR personnel (n=4). This content analysis yielded three broad constructs: i) personality, ii) integrity/honesty, iii) professional work aptitude. The identified indicators were operationalized, and statements (k=170) were generated verbatim. These were then forwarded to five experts for a review of content validity, who finalized 156 items. In the second phase, the ABL-TApT (k=156) was administered to 323 participants through a computer application. The overall reliability of the test shows a significant alpha coefficient (α=.81), and the subscales also have significant alpha coefficients.
Confirmatory Factor Analysis (CFA), performed to estimate construct validity, confirms four main factors comprising eight personality traits (confidence, organized, compliance, goal-oriented, persistent, forecasting, patience, caution), one integrity/honesty factor, four factors of professional work aptitude (basic numerical ability and perceptual accuracy of letters, numbers and signatures) and two factors for customer services (customer services, emotional maturity). Values of GFI, AGFI, NNFI, CFI, RFI and RMSEA are in the recommended range, depicting significant model fit. In the third phase, concurrent validity evidence was pursued. The personality and integrity parts of the scale have significant correlations with the 'conscientiousness' factor of the NEO-PI-R, reflecting strong concurrent validity. Customer services and emotional maturity have significant correlations with the Bar-On EQI, showing further evidence of strong concurrent validity. It is concluded that the ABL-TApT is a significantly reliable and valid battery of tests that will assist in the objective recruitment of tellers and help recruiters find more suitable human resources.

Keywords: concurrent validity, construct validity, content validity, reliability, teller aptitude test, objective recruitment
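As an illustration of the internal-consistency statistic reported above, Cronbach's alpha can be computed directly from item-level scores. The responses below are invented for the sketch and are not the ABL-TApT data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one inner list per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]        # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)   # sum of item variances
    total_var = pvariance(totals)                           # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical Likert-style responses: 3 items x 5 respondents
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 4],
    [4, 2, 5, 3, 5],
]
print(round(cronbach_alpha(items), 3))
```

Higher alpha indicates that the items covary strongly relative to the total-score variance; items that all measure the same trait push alpha toward 1.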
Procedia PDF Downloads 225

199 Inertial Spreading of Drop on Porous Surfaces
Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi
Abstract:
The microgravity on the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as the liquid must penetrate pores filled with air. When contact is established, some of the liquid is drawn into the pores by a capillarity that is resisted by viscous forces growing with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion, and these small scales make it impossible to observe the inertial and pinning processes in detail. Instead, on the ISS, astronaut Luca Parmitano slowly extruded water spheres until they touched any of nine capillary plates. The 12 mm diameter droplets were large enough for high-speed GX1050C video cameras on top and side to visualize details near individual capillaries, and the runs were long enough to observe the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix was produced consisting of nine kinds of porous capillary plates made of gold-coated brass treated with Self-Assembled Monolayers (SAM) that fixed advancing and receding contact angles to known values. In the ISS, long-term microgravity allowed unambiguous observations of the role of contact line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet contact patch with the plate and the height of the droplet versus time.
These observations were compared with numerical simulations and with data that we obtained at the ESA ZARM free-fall tower in Bremen using a unique mechanism producing relatively large water spheres, and similar results were observed. The data obtained from the ISS can be used as a benchmark for further numerical simulations in the field.

Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium
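For context, the capillarity-versus-viscosity balance the authors describe (viscous resistance growing with the imbibed length) is classically captured by the Lucas-Washburn relation, in which imbibed length grows as the square root of time. This sketch is a textbook illustration with assumed parameter values, not part of the reported analysis:

```python
from math import sqrt, cos, radians

def washburn_length(t, r, gamma, theta_deg, mu):
    """Lucas-Washburn imbibed length: l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu))."""
    return sqrt(gamma * r * cos(radians(theta_deg)) * t / (2.0 * mu))

# Illustrative values: water (surface tension 0.072 N/m, viscosity 1e-3 Pa s)
# in a 25-micron-radius capillary with a 20-degree contact angle.
gamma, r, theta, mu = 0.072, 25e-6, 20.0, 1.0e-3
l1 = washburn_length(1.0, r, gamma, theta, mu)
l4 = washburn_length(4.0, r, gamma, theta, mu)
print(l4 / l1)  # quadrupling time doubles the imbibed length
```

The square-root scaling is the late-time viscous regime; the inertial regime studied in this abstract precedes it.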
Procedia PDF Downloads 139

198 The Role of Goal Orientation on the Structural-Psychological Empowerment Link in the Public Sector
Authors: Beatriz Garcia-Juan, Ana B. Escrig-Tena, Vicente Roca-Puig
Abstract:
The aim of this article is to conduct a theoretical and empirical study examining how the goal orientation (GO) of public employees affects the relationship between the structural and psychological empowerment they experience at their workplaces. In doing so, we follow structural empowerment (SE) and psychological empowerment (PE) conceptualizations and relate them to the public administration framework. Moreover, we review arguments from GO theories and previous related contributions. Empowerment has emerged as an important issue in the public sector organization setting in the wake of mainstream New Public Management (NPM), the new orientation in the public sector that aims to provide a better service for citizens. It is closely linked to the drive to improve organizational effectiveness through the wise use of human resources. Nevertheless, it is necessary to combine structural (managerial) and psychological (individual) approaches in an integrative study of empowerment. SE refers to a set of initiatives that aim to transfer power from managerial positions to the rest of the employees. PE is defined as a psychological state of competence, self-determination, impact, and meaning that an employee feels at work. Linking these two perspectives leads to a broader understanding of the empowerment process. Specifically in the public sector, empirical contributions on this relationship are therefore important, particularly as empowerment is a very useful tool with which to face the challenges of the new public context. There is also a need to examine the moderating variables involved in this relationship, as well as to extend research on work motivation in public management. We propose studying the effect of individual orientations, such as GO, which refers to the individual disposition toward developing or confirming one's capacity in achievement situations.
Employees' GO may be a key factor at work and in workforce selection processes, since it explains differences in personal work interests and in receptiveness to, and interpretations of, professional development activities. SE practices could affect PE feelings in different ways depending on employees' GO, since employees perceive and respond differently to such practices, which is likely to yield distinct PE results. The model is tested on a sample of 521 Spanish local authority employees. Hierarchical regression analysis was conducted to test the research hypotheses using SPSS 22 software. The results do not confirm a direct link between SE and PE, but show that learning goal orientation (LGO) has considerable moderating power in this relationship, and its interaction with SE affects employees' PE levels. Therefore, the combination of SE practices and high levels of employee LGO is an important factor for creating psychologically empowered staff in public organizations.

Keywords: goal orientation, moderating effect, psychological empowerment, structural empowerment
Procedia PDF Downloads 281

197 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter
Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer
Abstract:
This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP's present form, particle charging and collecting voltages and airflow rates were individually varied throughout 200 ambient-temperature test runs, ranging from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out in which the collecting electrode was grounded, having been disconnected from the static generator; the collection efficiency fell significantly, and for that reason this approach was not pursued further. The collection efficiencies during ambient-temperature tests were determined by mass balance between incoming and outgoing dry PM. The efficiencies of combustion-temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon-coated, electrically conductive adhesive discs) was placed at inlet and outlet for a number of four-day continuous ambient-temperature runs. Analysis of the discs' contamination was carried out using scanning electron microscopy and ImageJ software, which confirmed collection efficiencies of over 99% and gave unequivocal support to all the previous tests; the average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient-temperature test runs apart from two, which collected airborne dust from within the laboratory.
Sawdust and wood pellets were chosen for the laboratory and field combustion trials. Video recordings were made of three ambient-temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than demonstration, they provided a strong argument for the device's claimed efficiency, as no emissions were visible at the exit when it was energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and shown to be in good agreement with it.

Keywords: electrostatic precipitators, air quality, particulates emissions, electron microscopy, ImageJ
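The mass-balance efficiency used for the ambient-temperature runs reduces to a one-line calculation. The masses below are illustrative assumptions, not measured values from the study:

```python
def collection_efficiency(mass_in_g, mass_out_g):
    """Collection efficiency (%) by mass balance between incoming and outgoing dry PM."""
    return 100.0 * (mass_in_g - mass_out_g) / mass_in_g

# Hypothetical run: 10 g of PM enters, 0.0591 g escapes
print(collection_efficiency(10.0, 0.0591))  # matches the ~99.409% order of efficiency reported
```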
Procedia PDF Downloads 253

196 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments, where the behavior of the AI system was determined in advance by formal rules. Knowledge was presented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It must be examined how the products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural networks and deep learning, but this approach can be described more easily by referring to the evolution of natural organisms, and with increasing computational power the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
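To make the 'genetic breeding' family of algorithms concrete, a minimal genetic algorithm on a toy objective (maximising the number of 1-bits, the classic OneMax problem) can be sketched as below. This is a generic textbook sketch with assumed parameters, not any patented method discussed in the paper:

```python
import random

def fitness(bits):
    """Toy objective: count of 1-bits (the 'OneMax' problem)."""
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [row[:] for row in pop[:2]]      # elitism: carry the two best over unchanged
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)     # select parents among the fittest third
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]          # one-point crossover ("breeding")
            if rng.random() < 0.1:             # occasional point mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The selection-crossover-mutation loop mirrors the evolution of natural organisms the paper refers to; elitism guarantees that the best solution found never degrades between generations.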
Procedia PDF Downloads 108

195 Ethicality of Algorithmic Pricing and Consumers' Resistance
Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos
Abstract:
Over the past few years, firms have witnessed a massive increase in sophisticated algorithmic deployment, which has become quite pervasive in modern society. With the wide availability of data to retailers, the ability to track consumers using algorithmic pricing has become an integral option on online platforms. As more companies transform their businesses and rely more on massive technological advancement, algorithmic pricing systems have attracted attention and seen wide adoption, with many accompanying benefits and challenges found in their usage. With the overall aim of increasing profits, algorithmic pricing is becoming a sound option for organizations by enabling suppliers to cut costs, allowing better services, improving efficiency and product availability, and enhancing overall consumer experiences. The adoption of algorithms in retail has been pioneered and widely studied in the literature across varied fields, including marketing, computer science, engineering, economics, and public policy. What is more pressing today, however, is a comprehensive understanding of this technology and its associated ethical influence on consumers' perceptions and behaviours. Indeed, due to algorithmic ethical concerns, consumers are found in some instances to be reluctant to share their personal data with retailers, which reduces their retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can still secure consumers' acceptance of such technologies while minimizing the ethical transgressions accompanying their deployment. As a recent, modest contribution within the area of marketing and consumer behaviour, the current research advances the literature on algorithmic pricing, pricing ethics, consumers' perceptions, and price fairness.
With its empirical focus, this paper aims to contribute to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, while measuring their relative effect on consumers' behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both businesses' overall performance and consumers' wellbeing. By allowing more transparent pricing systems, businesses can harness ethical strategies that foster consumers' loyalty and extend their post-purchase behaviour. Thus, by defining the correct balance of pricing and the right measures, whether using dynamic or personalized pricing (or both), managers can approach consumers more ethically while taking their expectations and responses into account.

Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality
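The dynamic/personalized distinction the paper builds on can be reduced to a toy sketch: dynamic pricing varies one price over time for everyone, while personalized pricing varies the price per consumer profile. The function names and numbers are illustrative assumptions, not the study's operationalization:

```python
def dynamic_price(base, demand_multiplier):
    """Dynamic pricing: one price for all consumers, varying with market conditions."""
    return round(base * demand_multiplier, 2)

def personalized_price(base, consumer_willingness):
    """Personalized pricing: the price varies per consumer, e.g. from tracking data."""
    return round(base * consumer_willingness, 2)

print(dynamic_price(10.0, 1.25))     # peak-demand surcharge applied to every shopper
print(personalized_price(10.0, 0.9)) # targeted discount for one consumer profile
```

Ethical perceptions plausibly differ between the two precisely because the second varies the price across consumers for the same good at the same moment.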
Procedia PDF Downloads 91

194 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e. CPU cores) of computer clusters, while taking all interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks) whereas others are dense with random links (e.g. consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted.
Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy: as an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e. 322 million agents).

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
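The load-balancing idea (partitioning agents into mutually exclusive subsets while keeping per-rank work even) can be sketched with a greedy longest-processing-time heuristic: assign each firm, together with its employees, to the currently least-loaded MPI rank, largest firm first. This is an illustrative simplification, not the authors' actual partitioner:

```python
import heapq

def partition_firms(firm_sizes, n_ranks):
    """Greedily assign firms (with their employee counts as load) to MPI ranks,
    largest firm first, always onto the currently least-loaded rank."""
    heap = [(0, rank, []) for rank in range(n_ranks)]
    heapq.heapify(heap)
    for firm, size in sorted(firm_sizes.items(), key=lambda kv: -kv[1]):
        load, rank, firms = heapq.heappop(heap)   # least-loaded rank so far
        firms.append(firm)
        heapq.heappush(heap, (load + size, rank, firms))
    return {rank: (load, firms) for load, rank, firms in heap}

# Hypothetical firms with employee counts, split over 2 ranks
assignment = partition_firms({'A': 8, 'B': 5, 'C': 4, 'D': 3, 'E': 2}, 2)
print(assignment)
```

Keeping each employer-employee component on one rank means workplace interactions need no inter-process messages, which is the point of partitioning on that graph.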
Procedia PDF Downloads 127

193 Technology, Ethics and Experience: Understanding Interactions as Ethical Practice
Authors: Joan Casas-Roma
Abstract:
Technology has become one of the main channels through which people engage in most of their everyday activities; from working to learning, or even when socializing, technology often acts as both an enabler and a mediator of those activities. Moreover, the affordances and interactions created by technological tools determine the way in which users interact with one another, as well as how they relate to the relevant environment, thus favoring certain kinds of actions and behaviors while discouraging others. In this regard, virtue ethics theories place a strong focus on a person's daily practice (understood as their decisions, actions, and behaviors) as the means to develop and enhance their habits and ethical competences, such as their awareness of, and sensitivity towards, certain ethically desirable principles. Under this understanding of ethics, this set of technologically enabled affordances and interactions can be seen as the possibility space where the daily practice of their users takes place, in a wide plethora of contexts and situations. At this point, the following question arises: could these affordances and interactions be shaped in a way that would promote behaviors and habits based on ethically desirable principles in their users? In the field of game design, the MDA framework (which stands for Mechanics, Dynamics, Aesthetics) explores how the interactions enabled within the possibility space of a game can create certain experiences and provoke specific reactions in the players. In this sense, these interactions can be shaped in ways that create experiences which raise the players' awareness of, and sensitivity towards, certain topics or principles.
This research brings together the notion of technological affordances, the notions of practice and practical wisdom from virtue ethics, and the MDA framework from game design in order to explore how the possibility space created by technological interactions can be shaped in ways that enable and promote actions and behaviors supporting certain ethically desirable principles. When shaped accordingly, interactions supporting such principles could allow their users to carry out the kind of practice that, according to virtue ethics theories, provides the grounds to develop and enhance their awareness, sensitivity, and ethical reasoning capabilities. Moreover, because ethical practice can happen collaterally in almost every context, decision, and action, this additional layer could potentially be applied in a wide variety of technological tools, contexts, and functionalities. This work explores the theoretical background, as well as the initial considerations and steps that would be needed to harness the potential ethically desirable benefits that technology can bring, once it is understood as the space where most of its users' daily practice takes place.

Keywords: ethics, design methodology, human-computer interaction, philosophy of technology
Procedia PDF Downloads 158

192 Studies on the Histomorphometry of the Digestive Tract and Associated Digestive Glands in Ostrich (Struthio camelus) with Gender and Progressing Age in Pakistan
Authors: Zaima Umar, Anas S. Qureshi, Adeel Sarfraz, Saqib Umar, Talha Umar, Muhammad Usman
Abstract:
The ostrich has been a good source of food and income for people across the world. To gain a better understanding of its health and health-related problems, knowledge of its digestive system is of utmost importance. The present study was conducted to determine the morphological and histometrical variations in the digestive system and associated glands of the ostrich (Struthio camelus) with regard to gender and progressing age. A total of 40 apparently healthy ostriches of both genders and two progressive age groups, young (less than two years; group A) and adult (2-15 years; group B), in equal numbers, were used in this study. Digestive organs, including the tongue, esophagus, proventriculus, gizzard, small and large intestines, and associated glands such as the liver and pancreas, were collected immediately after slaughtering the birds. The organs of the digestive system and associated glands of each group were studied grossly and histologically. Grossly, the colour, shape, consistency, weight and various dimensions (length, width, and circumference) of the organs of the digestive tract and associated glands were recorded. The means (± SEM) of all gross anatomical parameters in group A were significantly (p ≤ 0.01) different from those of group B. For microscopic studies, 1-2 cm tissue samples of the organs of the digestive system and associated glands were taken. The tissue was marked and fixed in neutral buffered formaldehyde solution for histological studies. After fixation, sections of 5-7 µm were cut and stained with haematoxylin and eosin. All the layers (epithelium, lamina propria, lamina muscularis, submucosa and tunica muscularis) were measured (µm) with the help of the automated computer software ImageJ®. The results of this study provide valuable information on the gender- and age-related histological and histometrical variations in the digestive organs of the ostrich (Struthio camelus).
The microscopic studies of different parts of the digestive system revealed highly significant differences (p ≤ 0.01) between the two groups. The esophagus was lined by non-keratinized stratified squamous epithelium. The duodenum, jejunum, and ileum showed similar histological structures. Statistical analysis revealed a significant (p ≤ 0.05) increase in the thickness of the different tunics of the gastrointestinal tract in adult birds (up to 15 years) as compared with young ones (less than two years). It can therefore be concluded that there is a gradual but consistent growth in the observed digestive organs, mimicking that of other poultry species, which may be helpful in determining the growth pattern of this bird. However, there is a need to record the changes at closer time intervals.

Keywords: ostrich, digestive system, histomorphometry, grossly
Procedia PDF Downloads 145

191 Human Interaction Skills and Employability in Courses with Internships: Report of a Decade of Success in Information Technology
Authors: Filomena Lopes, Miguel Magalhaes, Carla Santos Pereira, Natercia Durao, Cristina Costa-Lobo
Abstract:
The option to implement curricular internships with undergraduate students is a pedagogical choice with good results as perceived by academic staff, employers, and graduates, in general and in IT (Information Technology) in particular. This type of exercise has never been so relevant, as one tries to give meaning to the future in a landscape of rapid and deep change; consider, for example, the potentially disruptive impact on jobs of advances in robotics, artificial intelligence and 3-D printing, which is a focus of fierce debate. It is in this context that more and more students and employers engage in the pursuit of career-promoting responses and business development, making their investment decisions about training and hiring. Three decades of experience and research in the computer science degree and the information systems technologies degree at Portucalense University, a Portuguese private university, have provided strong evidence of its advantages. The development of Human Interaction Skills, as well as the attractiveness of such experiences for students, are topics assumed as core in the conception and management of the activities implemented in these study cycles. The objective of this paper is to gather evidence of the Human Interaction Skills explained and valued within the curricular internship experiences of IT students' employability. Data collection was based on a questionnaire administered to intern counselors and to students who completed internships in these undergraduate courses in the last decade.
The trainee supervisor, responsible for monitoring the performance of IT students in the evolution of traineeship activities, evaluates the following Human Interaction Skills: motivation and interest in the activities developed, interpersonal relationships, cooperation in company activities, assiduity, ease of knowledge apprehension, compliance with norms, insertion in the work environment, productivity, initiative, ability to take responsibility, creativity in proposing solutions, and self-confidence. The results show that these undergraduate courses promote the development of Human Interaction Skills and that these students, once they finish their degree, are able to take up remunerated work, mainly by invitation of the institutions in which they performed curricular internships. The findings of the present study help widen the analysis of internship effectiveness, in terms of future research and actions regarding the transition from Higher Education pathways to the labour market.

Keywords: human interaction skills, employability, internships, information technology, higher education
Procedia PDF Downloads 287
190 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience within physical space, with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view.
This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. This paper therefore emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, ray tracing, spherical images
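The vertex-weighting step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the visibility matrix and per-vertex saliency values are hypothetical stand-ins for a real visibility graph and a saliency map sampled at each vertex.

```python
import numpy as np

# Hypothetical 4-point grid: visibility[i, j] = 1 if point j is visible from point i.
visibility = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
])

# Assumed saliency weight per point, e.g. sampled from a saliency map (range 0..1).
saliency = np.array([0.9, 0.2, 0.5, 0.1])

# Classic VGA connectivity: how many points each point can see.
connectivity = visibility.sum(axis=1)

# Saliency-weighted connectivity: visible points count in proportion
# to how likely they are to attract attention.
weighted_connectivity = visibility @ saliency

print(connectivity)
print(weighted_connectivity)
```

The same weighting could be applied to other graph measures (integration, clustering) by substituting the weighted sum for the plain vertex count.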
Procedia PDF Downloads 74
189 The Correspondence between Self-Regulated Learning, Learning Efficiency and Frequency of ICT Use
Authors: Maria David, Tunde A. Tasko, Katalin Hejja-Nagy, Laszlo Dorner
Abstract:
The authors have been concerned with research on learning since 1998. Recently, the focus of our interest has been how the prevalent use of information and communication technology (ICT) influences students' learning abilities, skills of self-regulated learning, and learning efficiency. Nowadays, there are three dominant theories about the psychological effects of ICT use: according to social optimists, modern ICT devices have a positive effect on thinking; as to social pessimists, this effect is rather negative; and, in the view of biological optimists, the change is obvious, but these changes can fit into mankind's evolved neurological system, as writing did long ago. The mentality of 'digital natives' differs from that of older people. They process information coming from the outside world in another way, and different experiences result in different cerebral configurations. In this regard, researchers report both positive and negative effects of ICT use. According to several studies, it has a positive effect on cognitive skills, intelligence, school efficiency, the development of self-regulated learning, and self-esteem regarding learning. It has also been shown that computers improve skills of visual intelligence such as spatial orientation, iconic skills, and visual attention. Among the negative effects of frequent ICT use, researchers mention the decrease of critical thinking, as a permanent flow of information does not give scope for deeper cognitive processing. The aims of our present study were to uncover developmental characteristics of self-regulated learning in different age groups and to study correlations among learning efficiency, the level of self-regulated learning, and the frequency of computer use. Our subjects (N=1600) were primary and secondary school students and university students. We studied four age groups (ages 10, 14, 18, 22), with 400 subjects in each.
We used the following methods: the research team developed a questionnaire for measuring the level of self-regulated learning and a questionnaire for measuring ICT use, and we used documentary analysis to gain information about grade point average (GPA) and the results of competence measures. Finally, we used computer tasks to measure cognitive abilities. The data is currently under analysis, but according to our preliminary results, frequent use of computers results in shorter response times in every age group. Our results show that an ordinary extent of ICT use tends to increase reading competence and has a positive effect on students' abilities, though it showed no relationship with school marks (GPA). As time passes, GPA gets worse along with the learning material becoming more and more difficult. This phenomenon draws attention to the fact that students are unable to switch from guided to independent learning, so it is important to consciously develop skills of self-regulated learning.
Keywords: digital natives, ICT, learning efficiency, reading competence, self-regulated learning
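The correlational question above can be illustrated with a Pearson correlation coefficient, a standard choice for relating ICT use to learning measures. The study's actual statistics are not reported in the abstract, so the scores below are synthetic stand-ins used only to show the computation, not real data.

```python
import numpy as np

# Synthetic illustrative values only; the study's real data is under analysis.
ict_use = np.array([2.0, 3.5, 4.0, 5.5, 6.0, 7.5])       # hours of ICT use per day
reading = np.array([55.0, 60.0, 64.0, 70.0, 68.0, 74.0])  # reading competence score
gpa     = np.array([3.8, 3.6, 3.5, 3.4, 3.1, 3.0])        # school marks (GPA)

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# The preliminary pattern the abstract describes would look like a positive
# correlation with reading competence and no positive relationship with GPA.
print(pearson_r(ict_use, reading))
print(pearson_r(ict_use, gpa))
```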
Procedia PDF Downloads 361
188 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study
Authors: Insiya Bhalloo
Abstract:
It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as via enrolment in Quranic Arabic classes. As compared to native speakers of Arabic, these Arabic readers do not have a comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study seeks to investigate whether mere phonological experience (as indicated by the Arabic readers' experience with Arabic phonology and its sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants' non-native status. Both native speakers of Arabic and non-native speakers of Arabic, i.e., those individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n=40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words commonly found in Modern Standard Arabic (MSA) but either non-existent in or prevalent at a significantly lower frequency within the Quran. During the test phase, participants will then be presented with both familiar (n=20; i.e., those words presented during the exposure phase) and novel Arabic words (n=20; i.e., words not presented during the exposure phase). Half of these presented words will be common Quranic Arabic words, and the other half will be common MSA words that are not Quranic words.
Moreover, half of the Quranic Arabic and MSA words presented will be nouns, while half will be verbs, thereby eliminating word-processing effects driven by lexical category. Participants will then determine whether they saw each word during the exposure phase. This study seeks to investigate whether long-term phonological memory, such as via childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we seek to compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researcher's hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that were not presented during the exposure phase. Moreover, it is anticipated that the non-native Arabic readers will also report more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby causing false phonological facilitatory effects.
Keywords: Modern Standard Arabic, phonological facilitation, phonological memory, Quranic Arabic, word recognition
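One conventional way to quantify a pattern of hits and false alarms in an old/new recognition task like the one described above is a signal-detection sensitivity measure (d'). The abstract does not name this analysis, so the sketch below is an assumption about how such data could be scored, and the response counts are hypothetical.

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Recognition sensitivity d' from a yes/no old-new judgement task.

    Applies a standard log-linear correction so that hit and false-alarm
    rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant (20 familiar + 20 novel test words,
# split into Quranic and MSA halves). The predicted pattern: Quranic words draw
# more false alarms from non-native readers than MSA words do.
print(round(d_prime(hits=9, misses=1, false_alarms=4, correct_rejections=6), 2))   # Quranic
print(round(d_prime(hits=7, misses=3, false_alarms=1, correct_rejections=9), 2))   # MSA
```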
Procedia PDF Downloads 357
187 Development of Ecological Internal Insulation Composite Boards for Innovative Retrofitting of Heritage Buildings
Authors: J. N. Nackler, K. Saleh Pascha, W. Winter
Abstract:
WHISCERS™ (Whole House In-Situ Carbon and Energy Reduction Solution) is an innovative process for Internal Wall Insulation (IWI) for energy-efficient retrofitting of heritage buildings, which uses laser measuring to determine the dimensions of a room, off-site insulation board cutting, and rapid installation to complete the process. As part of a multinational investigation consortium, the Austrian partner adapted the WHISCERS system to the local conditions of Vienna, where most historical buildings have valuable stucco facades, precluding the application of external insulation. The Austrian project contribution addresses the replacement of commonly used extruded polystyrene foam (XPS) with renewable materials such as wood and wood products to develop a more sustainable IWI system. As the timber industry is a major industry in Austria, a new, innovative, and more sustainable IWI solution could also open up new markets. The first step of the investigation was a Life Cycle Assessment (LCA) to define the performance of wood fibre board as an insulation material in comparison to the normally used XPS boards. As one of the results, the global-warming potential (GWP) of wood fibre board, expressed in carbon dioxide equivalents, is 15 times lower, while in the case of XPS it is 72 times higher. The hygrothermal simulation program WUFI was used to evaluate and simulate heat and moisture transport in the multi-layer building components of the developed IWI solution. The simulations show that, under the examined boundary conditions for selected representative brickwork constructions, the proposed IWI is functional and usable without risk regarding vapour diffusion and liquid transport. In a further stage, three different solutions were developed and tested (1 - glued/mortared, 2 - with soft board, connected to the wall, with gypsum board as the top layer, 3 - with soft board and clay board as the top layer).
All three solutions present a flexible insulation layer of wood fibre towards the existing wall, thus compensating for irregularities of the wall surface. Starting from first considerations at the beginning of the development phase, the three systems were developed and optimized according to assembly technology and tested as small specimens under real-object conditions. The built prototypes are monitored to detect performance and building physics problems and to validate the results of the computer simulation model. This paper illustrates the development and application of the Internal Wall Insulation system.
Keywords: internal insulation, wood fibre, hygrothermal simulations, monitoring, clay, condensate
Procedia PDF Downloads 219
186 Effect of E-Governance and E-Learning Platform on Access to University Education by Public Servants in Nigeria
Authors: Nwamaka Patricia Ibeme, Musa Zakari
Abstract:
E-learning is made more effective because it enables students to easily interact, share, and collaborate across time and space with the help of an e-governance platform. Zoom and Microsoft Teams classrooms can bring together students from all around the world to join a conversation on a certain subject simultaneously. E-governance platforms can also support problem-solving skills, as well as brainstorming and the development of ideas. As a result of the shared experiences and knowledge, students are able to express themselves and reflect on their own learning. For students, e-governance facilities provide greater opportunity to build critical (higher-order) thinking abilities through constructive learning methods. Students' critical thinking abilities may improve with more time spent in an online classroom. Students' inventiveness can be enhanced through the use of computer-based instruction, as they discover multimedia tools and produce products in the styles easily available through games, compact discs, and television. The use of e-learning has increased both teaching and learning quality by combining student autonomy, capacity, and creativity over time in developed countries. Teachers are catalysts for the integration of technology through Information and Communication Technology, and e-learning supports teaching by simplifying access to course content. Creating an Information and Communication Technology class will be much easier if educational institutions provide teachers with the assistance, equipment, and resources they need. The study adopted a survey research design. The population of the study comprises students and staff. The study adopted a simple random sampling technique to select a representative sample. Both primary and secondary methods of data collection were used to obtain the data. A chi-square statistical technique was used to analyze the data.
Findings from the study revealed that e-learning has increased access to university education for public servants in Nigeria. Public servants in Nigeria have utilized e-learning and the Online Distance Learning (ODL) programme to enrol in various degree programmes. Findings also show that e-learning plays an important role in teaching because it is oriented toward the use of information and communication technologies that have become a part of everyday life and day-to-day business. E-learning contributes to traditional teaching methods and provides many advantages to society and citizens. The study recommends that e-learning tools and internet facilities should be upgraded to address network challenges in the online facilitation and lecture delivery system.
Keywords: e-governance, e-learning, online distance learning, university education, public servants, Nigeria
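The chi-square analysis mentioned above can be sketched as follows. The contingency table is illustrative only and does not reproduce the study's data; it simply shows how observed counts are tested against the counts expected under independence.

```python
import numpy as np

# Hypothetical contingency table: rows = used e-learning (yes/no),
# columns = gained access to university education (yes/no).
observed = np.array([[90, 30],
                     [40, 60]])

row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
grand = observed.sum()

# Expected counts if e-learning use and access were independent.
expected = row_tot @ col_tot / grand

chi2 = float(((observed - expected) ** 2 / expected).sum())
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)

# Compare chi2 against the critical value for dof=1 (3.84 at alpha = 0.05).
print(chi2, dof)
```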
Procedia PDF Downloads 69
185 Transmedia and Platformized Political Discourse in a Growing Democracy: A Study of Nigeria’s 2023 General Elections
Authors: Tunde Ope-Davies
Abstract:
Transmediality and platformization as online content-sharing protocols have continued to accentuate the growing impact of the unprecedented digital revolution across the world. The rapid transformation across all sectors as a result of this revolution has continued to spotlight the increasing importance of new media technologies in redefining and reshaping the rhythm and dynamics of our private and public discursive practices. Equally, social and political activities are impacted daily through the creation and transmission of political discourse content through multi-channel platforms such as mobile telephone communication, social media networks, and the internet. It has been observed that digital platforms have become central to the production, processing, and distribution of multimodal social data and cultural content. The platformization paradigm thus underpins our understanding of how digital platforms enhance the production and heterogeneous distribution of media and cultural content and how this process facilitates socioeconomic and political activities. The use of multiple digital platforms to share and transmit political discourse material synchronously and asynchronously has gained exciting momentum in the last few years. Nigeria’s 2023 general elections amplified the usage of social media and other online platforms as tools for electioneering campaigns, socio-political mobilization, and civic engagement. The study, therefore, focuses on transmedia and platformized political discourse as a new strategy to promote political candidates and their manifestos in order to mobilize support and woo voters. This innovative transmedia digital discourse model involves a constellation of online texts and images transmitted through different online platforms almost simultaneously.
The data for the study was extracted from the 2023 general election campaigns in Nigeria between January and March 2023 through media monitoring, manual download, and the use of software to harvest online electioneering campaign material. I adopted a discursive-analytic qualitative technique with toolkits drawn from a computer-mediated multimodal discourse paradigm. The study maps the progressive development of digital political discourse in this young democracy. The findings also demonstrate the inevitable transformation of modern democratic practice through platform-dependent and transmedia political discourse. Political actors and media practitioners now deploy layers of social media network platforms to convey messages and mobilize supporters in order to aggregate and maximize the impact of their media campaign projects and audience reach.
Keywords: social media, digital humanities, political discourse, platformized discourse, multimodal discourse
Procedia PDF Downloads 83
184 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock
Authors: Vahid Bairami Rad
Abstract:
Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be controlled online from a mobile phone or computer. Smartening agricultural fields and gardens is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants, agricultural products, and farms. Smart farms, built on the Internet of Things and artificial intelligence, are the topic of this paper. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry allows people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers have allowed irrigation processes to be optimized while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices, can operate for years, and have access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now be used securely to manage existing devices at scale.
IoT cloud services also provide a set of application-enablement services that developers can easily use to build application business logic while focusing on their own domain. These developments have enabled powerful new applications in the field of the Internet of Things, and these programs can be used in various industries, such as agriculture, to build smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the intelligence they provide can exceed that of experienced and professional farmers?
Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno
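As a small illustration of the sensor-driven decision logic described above, the sketch below implements a hypothetical moisture-threshold irrigation rule of the kind an IoT controller might run; the threshold, function name, and readings are assumptions for illustration, not part of the paper.

```python
def irrigation_decision(moisture_pct: float, rain_forecast: bool,
                        dry_threshold: float = 30.0) -> bool:
    """Open the irrigation valve only when the soil is dry and no rain is expected.

    moisture_pct: volumetric soil moisture reading from a sensor (percent).
    rain_forecast: True if rain is expected in the next forecast window.
    dry_threshold: assumed moisture level below which the soil counts as dry."""
    return moisture_pct < dry_threshold and not rain_forecast

# Simulated sensor readings: (moisture %, rain expected?)
readings = [(25.0, False), (25.0, True), (45.0, False)]
for moisture, rain in readings:
    print(irrigation_decision(moisture, rain))
```

In a deployed system, the same rule would run on the device or in the cloud, with the threshold tuned per crop and soil type.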
Procedia PDF Downloads 56
183 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes
Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter
Abstract:
Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic, and fire service procedures worldwide. These intelligent devices have already proven useful in photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, for the purpose of capturing and recording sites during and after an incident. UAVs are becoming an established tool, as they extend the reach of the photographer and offer new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of the captured images; therefore, close-proximity flights are required for more detailed models. As the technology advances, deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effects of UAV operation within active crime scenes with regard to the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes aside from a legislative point of view. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is the risk of evidence dispersal caused by the strong airflow beneath the UAV from the downwash of the propellers. The initial results of this study are therefore presented to determine the height of least effect at which to fly and, from the dataset tested, the commercial propeller type that generates the smallest amount of disturbance. In this study, a range of commercially available 4-inch propellers was chosen as a starting point due to their common availability, and their small size makes them well suited for operation within confined spaces.
To perform the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains power supply and controlled via a microcontroller. This was to mimic a complete throttle cycle and control the device to ensure repeatability. Battery packs and complex UAV structures were removed to allow for a more robust setup; therefore, the only changing factors were the propeller and the operating height. The results were calculated via computer vision analysis of the recorded dispersal of the sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to highlight some of the issues caused by UAVs within active crime scenes.
Keywords: dispersal, evidence, propeller, UAV
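One simple dispersal metric such a computer vision analysis could produce is the mean displacement of matched particles between frames recorded before and after a throttle cycle. The sketch below is an assumption about the form of that metric, with hypothetical coordinates standing in for tracked particle centroids.

```python
import numpy as np

# Hypothetical particle centroids (x, y in mm) detected in frames before and
# after a throttle cycle; real coordinates would come from computer vision
# tracking of the recorded sample particles.
before = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [-4.0, -3.0]])
after  = np.array([[1.0, 2.0], [9.0, 1.0], [0.0, 9.0], [-7.0, -6.0]])

def mean_dispersal(p0: np.ndarray, p1: np.ndarray) -> float:
    """Mean Euclidean displacement of matched particles (same units as input)."""
    return float(np.linalg.norm(p1 - p0, axis=1).mean())

# A smaller value indicates less disturbance for that propeller/height pair.
print(round(mean_dispersal(before, after), 2))
```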
Procedia PDF Downloads 163