Search results for: hardy cross networks accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9689

1259 Investigating the Sloshing Characteristics of a Liquid by Using an Image Processing Method

Authors: Ufuk Tosun, Reza Aghazadeh, Mehmet Bülent Özer

Abstract:

This study puts forward a method to analyze the sloshing characteristics of a liquid in a tuned sloshing absorber system by using image processing tools. Tuned sloshing vibration absorbers have recently attracted researchers’ attention as seismic load dampers in construction due to their practical and logistical convenience. The absorber is a liquid that sloshes and applies a force in opposite phase to the motion of the structure. Experimental characterization of the sloshing behavior can be used to verify the results of numerical analysis, and also to check the accuracy of assumptions related to the motion of the liquid. There are extensive theoretical and experimental studies in the literature on the dynamical and structural behavior of tuned sloshing dampers. Most of these works attempt to estimate the sloshing behavior of the liquid, such as the free-surface motion and the total force applied by the liquid to the wall of the container. For these purposes, sensors such as load cells and ultrasonic sensors are prevalent in experimental work. Load cells are only capable of measuring the force and require tests both with and without liquid to obtain the pure sloshing force. Ultrasonic level sensors give point-wise measurements and hence cannot capture the whole free-surface motion; furthermore, they may give incorrect data when the liquid splashes. In this work, a method for evaluating the sloshing wave height from camera recordings and image processing techniques is presented. The motion of the liquid and its container, made of a transparent material, is recorded by a high-speed camera aligned with the free surface of the liquid. The video is processed frame by frame using the MATLAB Image Processing Toolbox. The process starts by cropping the desired region; by recognizing the regions containing liquid and eliminating noise and splashing, a final picture depicting the free surface of the liquid is obtained. This picture is then used to obtain the height of the liquid along the length of the container. The process is verified against ultrasonic sensors that measured the fluid height at points on the liquid surface.
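The column-wise surface extraction described above can be sketched compactly. The authors used MATLAB's Image Processing Toolbox; the following is an analogous, hedged sketch in Python/OpenCV, where the crop window, binarization threshold, calibration factor, and file name are all hypothetical:

```python
# Analogous free-surface extraction sketch (the study used MATLAB's toolbox).
# Thresholds, crop region and calibration below are hypothetical.
import cv2
import numpy as np

MM_PER_PIXEL = 0.5                               # hypothetical spatial calibration
ROI = (slice(100, 400), slice(50, 600))          # hypothetical crop: rows, cols

def surface_profile(frame_gray: np.ndarray) -> np.ndarray:
    """Return liquid height (mm) for each column of the cropped region."""
    roi = frame_gray[ROI]
    # Separate liquid (dark) from background (bright); threshold is hypothetical.
    _, mask = cv2.threshold(roi, 90, 255, cv2.THRESH_BINARY_INV)
    # Morphological opening removes splash droplets and sensor noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    rows, cols = mask.shape
    heights = np.zeros(cols)
    for c in range(cols):
        filled = np.flatnonzero(mask[:, c])      # liquid pixels in this column
        if filled.size:                          # topmost liquid pixel = wave crest
            heights[c] = (rows - filled[0]) * MM_PER_PIXEL
    return heights

# Usage: iterate over frames of the high-speed recording (file name hypothetical).
cap = cv2.VideoCapture("sloshing_test.avi")
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(surface_profile(gray).max(), "mm peak wave height")
cap.release()
```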

Keywords: fluid structure interaction, image processing, sloshing, tuned liquid damper

Procedia PDF Downloads 341
1258 Delivery of Contraceptive and Maternal Health Commodities with Drones in the Most Remote Areas of Madagascar

Authors: Josiane Yaguibou, Ngoy Kishimba, Issiaka V. Coulibaly, Sabrina Pestilli, Falinirina Razanalison, Hantanirina Andremanisa

Abstract:

Background: Madagascar has one of the least developed road networks in the world, with the majority of its national and local roads being earth roads in poor condition. In addition, the country is affected by frequent natural disasters that further degrade road conditions, limiting accessibility to some parts of the country. In 2021 and 2022, 2.21 million people were affected by drought in the Grand Sud region and by cyclones and floods in the coastal regions, with disruptions of the health system including last-mile distribution of lifesaving maternal health and reproductive health commodities to health facilities. Program intervention: The intervention uses drone technology to deliver maternal health and family planning commodities to hard-to-reach health facilities in the Grand Sud and Sud-Est of Madagascar, the regions most affected by natural disasters. Methodology: The intervention was developed in two phases. The first phase, conducted in the Grand Sud, used drones leased from a private company to deliver commodities to isolated health facilities. Based on the lessons learnt and the encouraging results of the first phase, in the second phase (2023) the intervention was extended to the Sud-Est regions with the purchase of drones and the recruitment of pilots to reduce costs and ensure sustainability. Key findings: The drones ensure deliveries of lifesaving commodities in the Grand Sud of Madagascar. In 2023, 297 deliveries of commodities to forty hard-to-reach health facilities were carried out. Drone technology reduced delivery times from the usual 3-7 days by road or boat to only a few hours. Program implications: The use of innovative drone technology proved successful in the Madagascar context, dramatically reducing the distribution time of commodities to hard-to-reach health facilities and avoiding stockouts of life-saving medicines. When the intervention reaches full scale with the completion of the second phase and the extension to the Sud-Est, 150 hard-to-reach facilities will receive drone deliveries, avoiding stockouts and improving the quality of maternal health and family planning services offered to 1.4 million people in the targeted areas.

Keywords: commodities, drones, last-mile distribution, lifesaving supplies

Procedia PDF Downloads 60
1257 Analytical Study and Conservation Processes of a Scribe Box from the Old Kingdom

Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr

Abstract:

The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written in a black pigment. The box was made of several panels assembled together by wooden dowels and secured with plant ropes, and the entire box is covered with a red pigment. This study aims to use analytical techniques in order to identify and gain a deep understanding of the box components. Moreover, the authors were significantly interested in using infrared reflectance transformation imaging (RTI-IR) to enhance the hidden inscriptions on the lid. Identification of the wood species was included in this study. Visual observation and assessment were carried out to understand the condition of the box, and 3D and 2D programs were used to illustrate the wood joint techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF) and Fourier transform infrared spectroscopy (FTIR) were used in order to identify the wood species, the remains of insect bodies, the red pigment, the plant fibers and the adhesives from previous conservation; the RTI-IR technique was also very effective in enhancing the hidden inscriptions. The analysis results proved that the wooden panels and dowels were Acacia nilotica and the wooden rail was Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was cellulose nitrate. The historical study of the inscriptions proved that they are hieratic writings of a funerary text. After its transport from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy in order to restore the object, including cleaning, consolidation of friable pigments and writings, removal of the previous adhesive, and reassembly. The conservation processes applied were extremely effective, and the box is now ready for display or storage in the Grand Egyptian Museum.

Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation

Procedia PDF Downloads 268
1256 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected; the complete cases might also be strongly distorted. In addition, the reason that data are missing might itself contain information, which is ignored with that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
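The masking-and-scoring loop described above can be illustrated in a few lines. This is a minimal sketch assuming a numeric feature matrix and a k-nearest-neighbours imputer; the simulated data, the 20% masking rate, and the parameter grid are hypothetical stand-ins for the search-subscription features:

```python
# Minimal sketch of the evaluation loop: randomly mask known entries,
# impute them, and score against ground truth. Data are hypothetical.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
X_true = rng.normal(size=(1000, 5))            # hypothetical complete subset
X_true[:, 1] += 0.8 * X_true[:, 0]             # induce correlation KNN can exploit

mask = rng.random(X_true.shape) < 0.2          # hide 20% of the entries
X_missing = np.where(mask, np.nan, X_true)

for k in (3, 5, 10):                           # parameter search per algorithm
    imputer = KNNImputer(n_neighbors=k)
    X_hat = imputer.fit_transform(X_missing)
    rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
    print(f"k={k}: RMSE on masked entries = {rmse:.3f}")
```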

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 280
1255 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication & Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is “intimate" and "safe" and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as from the empirical findings of longitudinal comparative research, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear and evidenced shift in users’ perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of social media in this shift was catalytic. (2) Basic Web 2.0 applications dramatically changed the nature of the Internet itself, transforming it from a place reserved for “elite users / technical knowledge keepers" into a place of "open sociability” for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the “audience” we address in interpersonal communication: the previous "general and unknown audience" of personal home pages was converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the 'private' or 'public' nature of personal information has changed in a fundamental way. (5) The different features of the mediated environment of online communication and the critical changes that have occurred since the advance of Web 2.0 lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 364
1254 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India

Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia

Abstract:

Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a switch from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent, longer commutes by automobile. This increase in travel distance and motorized vehicle kilometres leads to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes like mass transit and paratransit. Mixed land use and urban densification are crucial for the economic viability of these projects. Informed by a desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature: Surat, a metropolitan industrial city with a population of 5.9 million and a very compact urban form, and Udaipur, a heritage city attracting a large international tourist footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who do not own personal vehicles to reach them on foot or by cycle. Yet residents travelling by different modes end up with similar trip lengths, highlighting the non-uniform distribution of land uses and the lack of planned transport infrastructure in the city and the urban-peri-urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities like congestion, air and noise pollution, lack of public spaces, loss of livelihood, etc. The study presents a comparison of the relationship between transport systems and the built form in both cities. The paper concludes with recommendations for managing densities in urban areas, promoting low-carbon transport choices such as improved non-motorized transport and public transport infrastructure, and minimizing personal vehicle usage in the Global South.

Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification

Procedia PDF Downloads 213
1253 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This work simulates the voltage drop and resistance during the explosion of copper wires of diameters 25, 40, and 100 µm surrounded by nitrogen at 1 bar, exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the net emission coefficient (NEC) and is combined with heat conductivity through PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and NEC radiation. First, an initial voltage drop over the copper wire, the current, and the temperature distribution at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be modeled with 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current was carried by the vaporized wire material before it was dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated: the electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen and copper vapor. The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
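The third-stage criterion lends itself to a compact numerical sketch: integrate the effective Townsend coefficient along a field line and compare against a streamer constant. In the sketch below, the field profile and the alpha(E/N) fit are hypothetical placeholders for the BOLSIG⁺/LAPLACE data; only the integration logic mirrors the approach described:

```python
# Hedged sketch of the streamer-inception test: integrate the effective
# Townsend coefficient along a field line. All fit constants and the field
# profile are hypothetical placeholders, not the paper's data.
import numpy as np

K_STREAMER = 18.4                      # a common choice of streamer constant

def alpha_eff(E_over_N):
    """Hypothetical fit of the effective ionization coefficient [1/m]
    versus reduced field E/N [Td]."""
    A, B = 4.0e5, 2.0e3                # illustrative fit constants only
    return A * np.exp(-B / np.maximum(E_over_N, 1e-9))

# Field magnitude sampled along one electric field line (hypothetical profile).
x = np.linspace(0.0, 5e-3, 500)        # 5 mm gap, metres
E_over_N = 600.0 * np.exp(-x / 2e-3)   # Td, decaying away from the wire

integral = np.trapz(alpha_eff(E_over_N), x)
print(f"integral of alpha dx = {integral:.1f} ->",
      "streamer inception" if integral >= K_STREAMER else "no inception")
```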

Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves

Procedia PDF Downloads 82
1252 Effect of Grain Size and Stress Parameters on Ratcheting Behaviour of Two Different Single Phase FCC Metals

Authors: Jayanta Kumar Mahato, Partha Sarathi De, Amrita Kundu, P. C. Chakraborti

Abstract:

Ratcheting is one of the most important phenomena to be considered in the design and safety assessment of structural components subjected to stress-controlled asymmetric cyclic loading in the elasto-plastic domain. In the present study, the uniaxial ratcheting behavior of commercially pure annealed OFHC copper and aluminium with two different grain sizes has been investigated. Stress-controlled tests were conducted at various combinations of stress amplitude and mean stress, selected so that the ratio of equivalent stress amplitude (σₐeq) to ultimate tensile strength (σUTS) of the selected materials remains constant. It is found that, irrespective of grain size, the ratcheting fatigue lives decrease with increasing stress amplitude and mean stress, following power relationships; however, the effect of stress amplitude on ratcheting lives is observed to be stronger than that of mean stress for both FCC metals. It is also found that for both FCC metals, at a constant ratio of equivalent stress amplitude (σₐeq) to ultimate tensile strength (σUTS), ratcheting fatigue lives are longer for the finer grain size. As far as the ratcheting strain rate is concerned, it decreases rapidly within the first few cycles and then reaches a steady state; finally, the ratcheting strain rate increases up to the complete failure of the specimens due to a very large increase of true stress following a substantial reduction in cross-sectional area. The steady-state ratcheting strain rate increases with both stress amplitude and mean stress. Interestingly, a unique power-law relationship between the steady-state ratcheting strain rate and the number of cycles to failure has been found, irrespective of the stress combination, for both FCC metals. Similar to the ratcheting strain rate, the strain energy density decreases rapidly within the first few cycles, reaches a steady state, and then increases up to the failure of the specimens, irrespective of stress combination, for both FCC metals; however, the steady-state strain energy density decreases with increasing mean stress and increases with increasing stress amplitude. From the fractography study, it is found that the void density increases with increasing maximum stress, but the void size and void density are almost the same for any combination of stress parameters at constant maximum stress.
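The reported power-law between steady-state ratcheting strain rate and cycles to failure can be checked with a simple log-log regression. The sketch below uses hypothetical data points purely to illustrate the fitting step; it is not the measured copper or aluminium data:

```python
# Sketch of a power-law check: fit N_f = A * (strain rate)^b by linear
# regression in log-log space. Data points are hypothetical illustrations.
import numpy as np

rate = np.array([1e-5, 3e-5, 1e-4, 3e-4, 1e-3])   # steady-state ratcheting strain rate per cycle
N_f  = np.array([52000, 18000, 5600, 1900, 600])  # cycles to failure (hypothetical)

b, log_A = np.polyfit(np.log10(rate), np.log10(N_f), 1)
print(f"N_f ~ {10**log_A:.3g} * rate^{b:.2f}")    # b near -1 would mean rate * N_f ~ const
```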

Keywords: ratcheting phenomena, grain size, stress parameter, ratcheting lives, ratcheting strain rate

Procedia PDF Downloads 287
1251 Microstructure and Mechanical Properties Evaluation of Graphene-Reinforced AlSi10Mg Matrix Composite Produced by Powder Bed Fusion Process

Authors: Jitendar Kumar Tiwari, Ajay Mandal, N. Sathish, A. K. Srivastava

Abstract:

Over the last decade, graphene has attracted great attention for the development of multifunctional metal matrix composites, which are in high demand in industry for building energy-efficient systems. This study covers two advanced aspects of current scientific endeavor, i.e., graphene as reinforcement in metallic materials and additive manufacturing (AM) as a processing technology. Herein, high-quality graphene and AlSi10Mg powder were mechanically mixed by very low energy ball milling at 0.1 wt. % and 0.2 wt. % graphene. The mixed powder was directly subjected to the powder bed fusion process, an AM technique, to produce composite samples along with a bare counterpart. The effects of graphene on porosity, microstructure, and mechanical properties were examined. The volumetric distribution of pores was observed under X-ray computed tomography (CT). On the basis of relative density measurements by X-ray CT, it was observed that porosity increases after graphene addition, and the pore morphology also transforms from spherical pores to enlarged flaky pores due to improper melting of the composite powder. Furthermore, the microstructure suggests grain refinement after graphene addition: the columnar grains were able to cross the melt pool boundaries in the bare sample, unlike in the composite samples, where smaller columnar grains formed due to heterogeneous nucleation by graphene platelets during solidification. The tensile properties were affected by the induced porosity irrespective of graphene reinforcement. The optimal tensile properties were achieved at 0.1 wt. % graphene: the increments in yield strength and ultimate tensile strength were 22% and 10%, respectively, for the 0.1 wt. % graphene-reinforced sample in comparison to the bare counterpart, while elongation decreased by 20% for the same sample. The hardness indentations were taken mostly on the solid region in order to avoid the collapse of pores. The hardness of the composite increased progressively with graphene content; an increment of around 30% in hardness was achieved after the addition of 0.2 wt. % graphene. Therefore, it can be concluded that powder bed fusion can be adopted as a suitable technique to develop graphene-reinforced AlSi10Mg composites, though some further process modification is required to avoid the porosity induced by the addition of graphene, which can be addressed in future work.

Keywords: graphene, hardness, porosity, powder bed fusion, tensile properties

Procedia PDF Downloads 125
1250 Assessment of Urban Immunization Practices among Urban Mother's in Sri Lanka

Authors: Kasun U. G. Palihakkara

Abstract:

Although vaccine coverage in Sri Lanka is close to 100%, with the widely spreading vaccine rejection trend reaching South Asian regions, it is essential to determine whether Sri Lankans are being misinformed by common misconceptions regarding vaccines. As the rates of target diseases decrease, parents become less accepting of even minor common adverse events. It is essential to preserve the integrity of immunization programs and protect public health by establishing the prevalence of anti-immunization trends. The primary objective of this study was to assess immunization practices and the prevalence of trends related to anti-immunization in an urban community in Sri Lanka. A descriptive cross-sectional quantitative study was conducted on 323 participants using convenience sampling, with 213 self-administered questionnaires; additionally, 110 online questionnaires were distributed. 31% of the study population do not maintain immunization records for their children. While the majority seek information regarding immunization from reliable sources such as the family physician or specialist pediatricians, 30% also refer to unreliable sources such as online communities for their opinion. 31% of the study population had not vaccinated for Japanese encephalitis. 73% of the study population had encountered side effects of vaccination such as fever, and 47% believed that such side effects are rare. 52% of the population had a hostile attitude regarding the administration of multiple doses of several vaccines within a child’s first year. Diseases like polio have been successfully eradicated from Sri Lanka with the help of vigorous vaccination programs; however, the majority of the study population believe that there is no need to keep vaccinating children against those eradicated diseases and exposing the child to the adverse effects of such vaccines. The majority of the population were aware of the existing misconceptions regarding immunization. The most popular misconceptions among the study population were that the MMR (measles, mumps, and rubella) vaccine is a possible cause of autism and bowel disease, and that children get infected with the disease even after they are vaccinated, possibly due to the inactivated vaccine. Disturbingly, 22% of the study population believed that vaccines are nowadays useless in preventing diseases. These data obtained from the urban study population reveal that, even though statistically Sri Lankan immunization coverage is 100%, there is a possibility of an anti-vaccination trend arising in Sri Lanka due to the prevalence of various misconceptions and rumors. These data therefore point to the need for thorough awareness-raising among mothers.

Keywords: anti-vaccination, immunization, infectious diseases, pediatric health

Procedia PDF Downloads 135
1249 The Impact of Trait and Mathematical Anxiety on Oscillatory Brain Activity during Lexical and Numerical Error-Recognition Tasks

Authors: Alexander N. Savostyanov, Tatyana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Yulia V. Kovas

Abstract:

The present study compared spectral-power indexes and the cortical topography of brain activity in a sample characterized by different levels of trait and mathematical anxiety. 52 healthy Russian speakers (age 17-32; 30 males) participated in the study. Participants solved an error-recognition task under three conditions: a lexical condition (simple sentences in Russian) and two numerical conditions (simple arithmetic and complicated algebraic problems). Trait and mathematical anxiety were measured using self-report questionnaires. EEG activity was recorded simultaneously during task execution. Event-related spectral perturbations (ERSP) were used to analyze spectral-power changes in brain activity. Additionally, sLORETA was applied in order to localize the sources of brain activity. For EEG activity recorded after task onset in the lexical condition, sLORETA revealed increased activation in frontal and left temporal cortical areas, mainly in the alpha/beta frequency ranges. For EEG activity recorded after task onset in the arithmetic and algebraic conditions, additional activation in the delta/theta band in the right parietal cortex was observed. The ERSP plots revealed alpha/beta desynchronization within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three (lexical, arithmetic and algebraic) conditions. The level of trait anxiety was positively correlated with the amplitude of alpha/beta desynchronization. The level of mathematical anxiety was negatively correlated with the amplitude of theta synchronization and of alpha/beta desynchronization. Overall, trait anxiety was related to an increase in brain activation during task execution, whereas mathematical anxiety was associated with increased inhibition-related activity. We gratefully acknowledge the support of grant №11.G34.31.0043 from the Government of the Russian Federation.
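For readers unfamiliar with ERSP, the core computation is a trial-averaged time-frequency decomposition normalized to a pre-stimulus baseline. A minimal NumPy/SciPy sketch on synthetic epochs follows; the sampling rate, window lengths, and baseline interval are illustrative assumptions, not the study's acquisition parameters:

```python
# Minimal ERSP sketch on synthetic epochs (the study used dedicated EEG
# toolchains; rates and windows here are illustrative assumptions).
import numpy as np
from scipy.signal import spectrogram

fs = 250                                   # Hz, assumed sampling rate
epochs = np.random.randn(40, 2 * fs)       # 40 trials x 2 s, synthetic data

powers = []
for trial in epochs:
    f, t, Sxx = spectrogram(trial, fs=fs, nperseg=64, noverlap=48)
    powers.append(Sxx)
S = np.mean(powers, axis=0)                # trial-averaged time-frequency power

baseline = S[:, t < 0.2].mean(axis=1, keepdims=True)  # assumed baseline window
ersp_db = 10 * np.log10(S / baseline)      # dB change relative to baseline

alpha_beta = (f >= 8) & (f <= 30)
print("mean alpha/beta ERSP (dB):", ersp_db[alpha_beta].mean())
```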

Keywords: anxiety, EEG, lexical and numerical error-recognition tasks, alpha/beta desynchronization

Procedia PDF Downloads 521
1248 Paternalistic Leadership and Organizational Citizenship Behavior: Moderating Role of Employee Loyalty to Supervisor

Authors: Obiajulu Anthony Ugochukwu Nnedum, Bernard Chukwukelue Chine, Jerome Ogochukwu Ezisi

Abstract:

A notable challenge to organizational citizenship behavior in Nigerian organizations is the prevalence of individualistic work cultures among employees, as this mindset can result in employees being less willing to go beyond their formal job requirements to contribute to the organization's overall success. The scarcity of research on antecedents of organizational citizenship behavior, such as paternalistic leadership and employee loyalty to supervisors, in sub-Saharan African cultures such as Nigeria motivated the current study to investigate the moderating role of employee loyalty to the supervisor in the relationship between paternalistic leadership and organizational citizenship behavior. The premise of the current study is that when employees are loyal to paternalistic leaders who show care and support, they are more likely to exhibit organizational citizenship behavior. The study employed a sample of four hundred and twenty participants (one hundred and five managers and three hundred and fifteen subordinates) from eleven large organizations randomly selected by lucky dip from twenty-two large organizations in the directory of the Chamber of Commerce and Industry in Anambra State, south-eastern Nigeria. A twelve-item organizational citizenship behavior scale, a thirty-nine-item paternalistic leadership scale, and a six-item loyalty-to-supervisor scale were employed for data collection. Adopting a one-manager-by-triad-of-subordinates cross-sectional survey design, and using the Hayes PROCESS macro and the Statistical Package for the Social Sciences (SPSS) version twenty-five, the analysis of the hypotheses demonstrated that loyalty to the supervisor moderated the relationship between paternalistic leadership and organizational citizenship behavior-conscientiousness. The findings also revealed that loyalty to the supervisor moderated the relationship between authoritative leadership and organizational citizenship behavior-identification, as well as the relationship between moral leadership and organizational citizenship behavior. Accordingly, the results imply that when employees are loyal to their supervisors, they are more likely to exhibit organizational citizenship behavior by going above and beyond their formal job requirements, and this loyalty can be fostered through a paternalistic leadership style that emphasizes a supportive and caring relationship between supervisors and subordinates.
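The moderation test itself is a standard interaction model. Below is a minimal sketch equivalent in spirit to Hayes PROCESS model 1, using OLS with a mean-centered interaction term; the variable names and simulated data are hypothetical stand-ins for the study's scales:

```python
# Sketch of a moderation test: OLS with a mean-centered interaction term.
# Variables and data are hypothetical stand-ins for the study's scales.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 420
df = pd.DataFrame({
    "paternalistic": rng.normal(size=n),     # paternalistic leadership score
    "loyalty": rng.normal(size=n),           # loyalty-to-supervisor score
})
df["ocb"] = (0.4 * df.paternalistic + 0.3 * df.loyalty
             + 0.25 * df.paternalistic * df.loyalty + rng.normal(size=n))

# Mean-center predictors so main effects are interpretable at average levels.
df["pl_c"] = df.paternalistic - df.paternalistic.mean()
df["loy_c"] = df.loyalty - df.loyalty.mean()

model = smf.ols("ocb ~ pl_c * loy_c", data=df).fit()
print(model.summary().tables[1])             # the pl_c:loy_c row tests moderation
```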

Keywords: authoritative leadership, moral leadership, loyalty to supervisor, organizational citizenship behavior

Procedia PDF Downloads 55
1247 Characterisation of Human Attitudes in Software Requirements Elicitation

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana

Abstract:

It is evident that there has been progress in the development and innovation of tools, techniques and methods for software development. Even so, there are few methodologies that include the human factor from the point of view of motivation, emotions and impact on the work environment; aspects that, when mishandled or not taken into consideration, increase the iterations in the requirements elicitation phase. This generates a broad number of changes in the characteristics of the system during its development process and an overinvestment of resources to obtain a final product that often does not live up to the expectations and needs of the client. Human factors such as emotions or personality traits are naturally associated with the process of developing software; however, most existing work is oriented towards the analysis of the final users of the software and does not take into consideration the emotions and motivations of the members of the development team. Given that, in industry, the strategies to select requirements engineers and/or analysts do not take said factors into account, it is important to identify and describe the characteristics or personality traits needed to elicit requirements effectively. This research describes the main personality traits associated with requirements elicitation tasks through an analysis of the existing literature on the topic and a compilation of our experiences as software development project managers in the academic and productive sectors, allowing for the characterisation of a suitable profile for this job. Moreover, a psychometric test is used as an information-gathering technique and applied to the personnel of some local companies in the software development sector. This information has become an important asset for a comparative analysis between the degree of effectiveness in the way software development teams are formed and the proposed profile. The results show that, of the software development companies studied, 53.58% have selected the personnel for the task of requirements elicitation adequately, 35.71% possess some of the characteristics to perform the task, and 10.71% are inadequate. From this information, it is possible to conclude that 46.42% of the requirements engineers selected by the companies could perform other roles more adequately; a change which could improve the performance and competitiveness of the work team and, indirectly, the quality of the product developed. Likewise, the research allowed for the validation of the pertinence and usefulness of the psychometric instrument, as well as the accuracy of the characteristics of the proposed requirements engineer profile.

Keywords: emotions, human attitudes, personality traits, psychometric tests, requirements engineering

Procedia PDF Downloads 261
1246 Multi-Residue Analysis (GC-ECD) of Some Organochlorine Pesticides in Commercial Broiler Meat Marketed in Shivamogga City, Karnataka State, India

Authors: L. V. Lokesha, Jagadeesh S. Sanganal, Yogesh S. Gowda, Shekhar, N. B. Shridhar, N. Prakash, Prashantkumar Waghe, H. D. Narayanaswamy, Girish V. Kumar

Abstract:

Organochlorine (OC) insecticides are among the most important organotoxins and make up a large group of pesticides. The physicochemical properties of these toxins, especially their lipophilicity, facilitate their absorption and storage in meat, thus posing a public health threat to humans. The presence of these toxins in broiler meat can serve as a quantitative and qualitative index of their presence in animal bodies; wastewater used for irrigation after crop spraying, animal feeds contaminated with pesticides, and polluted air are the potential sources of residues in animal products. Fifty broiler meat samples were collected from different retail outlets of Bengaluru city, Karnataka State, in ice-cold conditions and later stored at -20°C until analysis. All the samples were subjected to gas chromatography with electron capture detection (GC-ECD, VARIAN make) for screening and quantification of the OC pesticides viz., Alachlor, Aldrin, Alpha-BHC, Beta-BHC, Dieldrin, Delta-BHC, o,p-DDE, p,p-DDE, o,p-DDD, p,p-DDD, o,p-DDT, p,p-DDT, Endosulfan-I, Endosulfan-II, Endosulfan sulphate and Lindane (all standards were procured from Merck). Extraction was undertaken by blending fifty grams (g) of meat sample with 50 g anhydrous sodium sulphate, 120 ml n-hexane and 120 ml acetone for 15 min; the extract was washed with distilled water and residual moisture was removed with anhydrous sodium sulphate. Partitioning was done with 25 ml petroleum ether, 10 ml acetonitrile and 15 ml n-hexane, shaken vigorously for two minutes, and sample clean-up was done with a florisil column. The reconstituted samples (in n-hexane, Merck) were injected into the GC-ECD. The present study reveals that, among the fifty chicken samples analysed, 60% (15/50), 32% (8/50), 28% (7/50), 20% (5/50) and 16% (4/50) of samples were contaminated with DDTs, Delta-BHC, Dieldrin, Aldrin and Alachlor, respectively. DDT metabolites and Delta-BHC were the most frequently detected OC pesticides. The detected levels of the pesticides were below the MRLs (according to the Export Council of India notification for fresh poultry meat).

Keywords: accuracy, gas chromatography, meat, pesticide, petroleum ether

Procedia PDF Downloads 323
1245 Heuristic Approaches for Injury Reductions by Reduced Car Use in Urban Areas

Authors: Stig H. Jørgensen, Trond Nordfjærn, Øyvind Teige Hedenstrøm, Torbjørn Rundmo

Abstract:

The aim of the paper is to estimate and forecast road traffic injuries over the coming 10-15 years, given new targets in urban transport policy and shifts in mode of transport, including injury cross-effects of mode changes. The paper discusses possibilities and limitations in measuring and quantifying possible injury reductions. Injury data (killed and seriously injured road users) from six urban areas in Norway from 1998-2012 (N = 4709 casualties) form the basis for estimates of changing injury patterns. For the coming period, calculations of the number of injuries and injury rates by type of road user (motorized versus non-motorized categories), sex, age and type of road are made. A projected increase (25%) in total population within 2025 in the six urban areas will curb the continued fall in injury figures. However, policy strategies and measures geared towards a stronger modal shift from private vehicles to safer public transport (bus, train) will modify this effect. On the other side, door-to-door transport (pedestrians on their way to/from public transport nodes) implies higher exposure for pedestrians (and bikers) converting from private vehicle use (including fall accidents not registered as traffic accidents). The overall effect is the sum of these modal shifts in the increasing urban population; in addition, the diminishing returns of the majority of road safety countermeasures have to be taken into account. The paper demonstrates how uncertainties in the various estimates (prediction factors) of increasing as well as decreasing injury figures may partly offset each other. The paper discusses the road safety policy and welfare consequences of the transport mode shift, including reduced use of private vehicles, and further environmental impacts. In this regard, safety and environmental issues will as a rule concur; however, pursuing environmental goals (e.g. improved air quality, reduced CO2 emissions) by encouraging more biking may generate more biking injuries. The study was given financial grants from the Norwegian Research Council’s Transport Safety Program.
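The bookkeeping behind such forecasts is essentially exposure times mode-specific risk under a shift scenario. The toy calculation below illustrates the structure only; all exposure and risk numbers are hypothetical, not the Norwegian estimates:

```python
# Toy scenario calculation: injuries = exposure x mode-specific risk,
# before and after a modal shift. All numbers are hypothetical.

# person-km per year (millions) and killed/seriously injured (KSI) per million person-km
exposure = {"car": 900.0, "public": 300.0, "walk_bike": 150.0}
risk     = {"car": 0.010, "public": 0.002, "walk_bike": 0.035}

def injuries(exp):
    return sum(exp[m] * risk[m] for m in exp)

base = injuries(exposure)

# Scenario: 25% population growth, 15% of car-km shifted to public transport,
# plus 5% of car-km converted to walking/cycling access trips.
shifted = {m: v * 1.25 for m, v in exposure.items()}
moved_pt, moved_walk = 0.15 * shifted["car"], 0.05 * shifted["car"]
shifted["car"] -= moved_pt + moved_walk
shifted["public"] += moved_pt
shifted["walk_bike"] += moved_walk

print(f"baseline: {base:.1f} KSI/year, scenario: {injuries(shifted):.1f} KSI/year")
```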

Keywords: road injuries, forecasting, reduced private care use, urban, Norway

Procedia PDF Downloads 231
1244 Is the Addition of Computed Tomography with Angiography Superior to a Non-Contrast Neuroimaging Only Strategy for Patients with Suspected Stroke or Transient Ischemic Attack Presenting to the Emergency Department?

Authors: Alisha M. Ebrahim, Bijoy K. Menon, Eddy Lang, Shelagh B. Coutts, Katie Lin

Abstract:

Introduction: Frontline emergency physicians require clear and evidence-based approaches to guide neuroimaging investigations for patients presenting with suspected acute stroke or transient ischemic attack (TIA). Various forms of computed tomography (CT) are currently available for initial investigation, including non-contrast CT (NCCT), CT angiography head and neck (CTA), and CT perfusion (CTP). However, there is uncertainty around optimal imaging choice for cost-effectiveness, particularly for minor or resolved neurological symptoms. In addition to the cost of CTA and CTP testing, there is also a concern for increased incidental findings, which may contribute to the burden of overdiagnosis. Methods: In this cross-sectional observational study, analysis was conducted on 586 anonymized triage and diagnostic imaging (DI) reports for neuroimaging orders completed on patients presenting to adult emergency departments (EDs) with a suspected stroke or TIA from January-December 2019. The primary outcome of interest is the diagnostic yield of NCCT+CTA compared to NCCT alone for patients presenting to urban academic EDs with Canadian Emergency Department Information System (CEDIS) complaints of “symptoms of stroke” (specifically acute stroke and TIA indications). DI reports were coded into 4 pre-specified categories (endorsed by a panel of stroke experts): no abnormalities, clinically significant findings (requiring immediate or follow-up clinical action), incidental findings (not meeting prespecified criteria for clinical significance), and both significant and incidental findings. Standard descriptive statistics were performed. A two-sided p-value <0.05 was considered significant. Results: 75% of patients received NCCT+CTA imaging, 21% received NCCT alone, and 4% received NCCT+CTA+CTP. The diagnostic yield of NCCT+CTA imaging for prespecified clinically significant findings was 24%, compared to only 9% in those who received NCCT alone. The proportion of incidental findings was 30% in the NCCT only group and 32% in the NCCT+CTA group. CTP did not significantly increase the yield of significant or incidental findings. Conclusion: In this cohort of patients presenting with suspected stroke or TIA, an NCCT+CTA neuroimaging strategy had a higher diagnostic yield for clinically significant findings than NCCT alone without significantly increasing the number of incidental findings identified.
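As a side note on the headline comparison, the gap in yields can be checked with a standard two-proportion z-test. The sketch below reconstructs approximate counts from the reported percentages (~24% of ~440 NCCT+CTA patients vs ~9% of ~123 NCCT-only patients), so the exact figures are assumptions:

```python
# Two-proportion z-test on clinically significant findings. Counts are
# approximate reconstructions from the reported percentages, not raw data.
from statsmodels.stats.proportion import proportions_ztest

significant = [106, 11]      # ~24% of 440 (NCCT+CTA) and ~9% of 123 (NCCT alone)
totals = [440, 123]

z, p = proportions_ztest(significant, totals)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```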

Keywords: stroke, diagnostic yield, neuroimaging, emergency department, CT

Procedia PDF Downloads 93
1243 QSAR Study on Diverse Compounds for Effects on Thermal Stability of a Monoclonal Antibody

Authors: Olubukayo-Opeyemi Oyetayo, Oscar Mendez-Lucio, Andreas Bender, Hans Kiefer

Abstract:

The thermal melting curve of a protein provides information on its conformational stability and could provide cues on its aggregation behavior. Naturally occurring osmolytes have been shown to improve the thermal stability of most proteins in a concentration-dependent manner; they are therefore commonly employed as additives in therapeutic protein purification and formulation. A number of intertwined and seemingly conflicting mechanisms have been put forward to explain the observed stabilizing effects, the most prominent being the preferential exclusion mechanism. We attempted to probe and summarize molecular mechanisms for the thermal stabilization of a monoclonal antibody (mAb) by developing quantitative structure-activity relationships using a rationally selected library of 120 osmolyte-like compounds from the polyhydric alcohol, amino acid and methylamine classes. Thermal stabilization potencies were experimentally determined by thermal shift assays based on differential scanning fluorimetry. The cross-validated QSAR model was developed by partial least squares regression using descriptors generated with the Molecular Operating Environment software. Careful evaluation of the results using the variable importance in projection (VIP) parameter and the regression coefficients guided the selection of the descriptors most relevant to mAb thermal stability. For the mAb studied, and at pH 7, the thermal stabilization effects of the tested compounds correlated positively with their fractional polar surface area and inversely with their fractional hydrophobic surface area. We cannot claim that the observed trends are universal for osmolyte-protein interactions because of protein-specific effects; however, this approach should guide the quick selection of (de)stabilizing compounds for a protein from a chemical library. Further work with a large variety of proteins and at different pH values would help derive a solid explanation of the nature of favorable osmolyte-protein interactions for improved thermal stability. This approach may be beneficial in the design of novel protein stabilizers with optimal property values, especially when the influence of solution conditions, like pH and buffer species, and the protein properties are factored in.
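The modeling step pairs cross-validated PLS regression with VIP-based descriptor selection. A minimal scikit-learn sketch follows; the random descriptor matrix is a stand-in for the MOE-generated descriptors, and the response stands in for the measured melting-temperature shifts:

```python
# Sketch of cross-validated PLS regression with VIP scores. The descriptor
# matrix here is random; in the study it came from MOE software.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 30))            # 120 compounds x 30 descriptors (stand-in)
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=120)  # stand-in delta-Tm response

pls = PLSRegression(n_components=3)
q2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
pls.fit(X, y)

def vip_scores(model):
    """Variable importance in projection for a fitted PLSRegression."""
    t, w, q = model.x_scores_, model.x_weights_, model.y_loadings_
    p, a = w.shape
    ss = np.sum(t ** 2, axis=0) * q.ravel() ** 2          # explained SS per component
    wnorm2 = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (wnorm2 @ ss) / ss.sum())

vip = vip_scores(pls)
print(f"cross-validated R2 = {q2:.2f}; descriptors with VIP > 1:",
      np.flatnonzero(vip > 1))
```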

Keywords: thermal stability, monoclonal antibodies, quantitative structure-activity relationships, osmolytes

Procedia PDF Downloads 327
1242 Shaping Work Engagement through Intra-Organizational Coopetition: Case Study of the University of Zielona Gora in Poland

Authors: Marta Moczulska

Abstract:

One of the most important aspects of human management in an organization is work engagement. In spite of the different perspectives on engagement, it is possible to see that it is expressed in the activity of the individual involved in the performance of tasks and in the functioning of the organization, and at the same time it is considered not only in its behavioural but also in its cognitive and emotional dimensions. Previous studies have addressed sources, predictors and determinants of engagement, including organizational ones. Attention has been paid to the importance of needs (including belonging, success, development, sense of work), values (such as trust, honesty, respect, justice) and interpersonal relationships, especially with the supervisor. Taking these into account, together with theories of human action, behaviour in organizations and interactions, it was recognized that engagement can be shaped through cooperation and competition. It was assumed that to shape work engagement it is necessary to cooperate and compete simultaneously, in order to reduce the weaknesses of each of these activities and strengthen their strengths. The combination of cooperation and competition is defined as 'coopetition'. However, research conducted in this field is primarily concerned with relations between companies; intra-organizational coopetition is mainly considered as competition between organizational branches or units (cross-functional coopetition), and less attention is paid to competing groups or individuals. It is also worth noting the ambiguity of the concepts of cooperation and rivalry: taking into account the terms used and their meanings, different levels of cooperation and forms of competition can be distinguished, and thus several types of intra-organizational coopetition can be identified. The article aims at defining the potential for shaping work engagement through intra-organizational coopetition. The aim of the research was to establish how levels of cooperation under conditions of competition influence engagement. It is assumed that rivalry (positive competition) between teams (the highest level of cooperation) is a type of coopetition that contributes to work engagement. Qualitative research will be carried out among students of the University of Zielona Gora carrying out various types of projects. The first research group will be students working in groups on one project for three months; the second research group will be composed of students working in groups on several projects over the same period (three months). Work engagement will be determined using the UWES questionnaire, and levels of cooperation will be determined using the author's own research tool. As the research is ongoing, results will be presented in the final paper.

Keywords: competition, cooperation, intra-organizational coopetition, work engagement

Procedia PDF Downloads 142
1241 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated high ability in discriminating various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA and are stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence on the prediction for each genome. Using two public real-life data sets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with the state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
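To make the shape of steps (i), (ii) and (iv) concrete, here is a tiny NumPy sketch: k-mer tokenization, read embeddings as averaged k-mer vectors, and attention-weighted pooling of a patient's reads before a logistic output. All weights are random stand-ins for the learned embeddings and classifier of metagenome2vec, and the genome-identification step (iii) is omitted:

```python
# Toy sketch of the metagenome2vec pipeline shape. All weights are random
# stand-ins for learned parameters; step (iii) is omitted for brevity.
import itertools
import numpy as np

K, DIM = 4, 16
rng = np.random.default_rng(3)
vocab = {"".join(p): i for i, p in enumerate(itertools.product("ACGT", repeat=K))}
E = rng.normal(size=(len(vocab), DIM))     # step (i): k-mer embedding table (untrained)

def read_embedding(read):
    """Step (ii): embed a read as the mean of its k-mer vectors."""
    idx = [vocab[read[i:i + K]] for i in range(len(read) - K + 1)]
    return E[idx].mean(axis=0)

def predict(reads, w_att, w_out):
    """Step (iv): attention-weighted pooling over reads, then a logistic output."""
    R = np.stack([read_embedding(r) for r in reads])   # (n_reads, DIM)
    scores = R @ w_att                                 # attention logits per read
    a = np.exp(scores - scores.max())
    a /= a.sum()                                       # softmax attention weights
    bag = a @ R                                        # bag-level representation
    return 1.0 / (1.0 + np.exp(-(bag @ w_out))), a     # phenotype prob., weights

reads = ["ACGTACGTGGCA", "TTGACCAGTACA", "CCGTAGGATTCA"]   # toy patient reads
prob, attn = predict(reads, rng.normal(size=DIM), rng.normal(size=DIM))
print(f"disease probability {prob:.2f}; most influential read: #{attn.argmax()}")
```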

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 121
1240 Wind Energy Harvester Based on Triboelectricity: Large-Scale Energy Nanogenerator

Authors: Aravind Ravichandran, Marc Ramuz, Sylvain Blayac

Abstract:

With the rapid development of wearable electronics and sensor networks, batteries cannot meet the sustainable energy requirement due to their limited lifetime, size and degradation. Ambient energy sources such as wind have been considered attractive due to their abundance, ubiquity, and feasibility in nature. With miniaturization leading to high power and robustness, triboelectric nanogenerators (TENGs) have been conceived as a promising technology for harvesting mechanical energy to power small electronics. TENG integration in large-scale applications is still unexplored, despite its attractive properties. In this work, a state-of-the-art TENG design based on a wind venturi system is demonstrated for use in any complex environment. When wind is introduced into the air gap of the homemade TENG venturi system, a thin flexible polymer repeatedly contacts with and separates from the electrodes. This device structure makes the TENG suitable for large-scale harvesting without massive volume. Multiple stacking not only amplifies the output power but also enables multi-directional wind utilization. The system converts ambient mechanical energy to electricity with a 400 V peak voltage, charging a 1000 mF supercapacitor very rapidly. Its future implementation in an array of applications would aid environmentally friendly, clean energy production at large scale, and the proposed design is supported by exhaustive material testing. The relation between the interfacial micro- and nanostructures and the enhancement of electrical performance is studied comparatively. Nanostructures are more beneficial for the effective contact area, but they are not suitable for the anti-adhesion property due to their smaller restoring force. Considering these issues, nano-patterning is proposed for further enhancement of the effective contact area. Given the merits of simple fabrication, outstanding performance, robust characteristics and low-cost technology, we believe that TENGs can open up great opportunities not only for powering small electronics but also for contributing to large-scale energy harvesting, with engineering designs complementary to solar energy in remote areas.

Keywords: triboelectric nanogenerator, wind energy, vortex design, large scale energy

Procedia PDF Downloads 210
1239 A Survey and Analysis on Inflammatory Pain Detection and Standard Protocol Selection Using Medical Infrared Thermography from Image Processing View Point

Authors: Mrinal Kanti Bhowmik, Shawli Bardhan Jr., Debotosh Bhattacharjee

Abstract:

Human skin, being at a temperature above absolute zero, emits infrared radiation related to the body temperature. Differences in the infrared radiation from the skin surface reflect abnormalities present in the human body. Accordingly, detecting and forecasting the temperature variation of the skin surface is the main objective of using medical infrared thermography (MIT) as a diagnostic tool for pain detection. MIT is a non-invasive imaging technique that records and monitors the temperature distribution of the body by receiving the infrared radiation emitted from the skin and representing it as a thermogram. The intensity of the thermogram measures the inflammation at the skin surface related to pain in the human body. Analysis of thermograms provides automated anomaly detection associated with suspicious pain regions through several image processing steps. This paper presents a rigorous study-based survey of the processing and analysis of thermograms, based on previous work published in the area of infrared thermal imaging for detecting inflammatory pain diseases like arthritis, spondylosis, shoulder impingement, etc. The survey also explores the performance analysis of thermogram processing together with thermogram acquisition protocols, thermography camera specifications, and the types of pain detected by thermography, in a summarized tabular format that provides a clear structural view of the past works. As its major contribution, the paper introduces a new thermogram acquisition standard for inflammatory pain detection in the human body to enhance the performance rate. The FLIR T650sc infrared camera, with high sensitivity and resolution, is adopted to increase the accuracy of thermogram acquisition and analysis. The survey of previous research highlights that intensity-distribution-based comparison of comparable, symmetric regions of interest, together with their statistical analysis, yields adequate results in identifying and detecting physiological disorders related to inflammatory diseases.
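The symmetric-ROI comparison that the survey highlights reduces to comparing intensity statistics of mirrored patches. A minimal sketch on a synthetic temperature image follows; the ROI coordinates, the asymmetry threshold, and the choice of Welch's t-test are illustrative assumptions:

```python
# Sketch of symmetric-ROI analysis: compare intensity statistics of mirrored
# regions in a thermogram. The array stands in for a calibrated temperature
# image; ROI coordinates and thresholds are hypothetical.
import numpy as np
from scipy import stats

thermogram = np.random.normal(33.0, 0.4, size=(240, 320))  # degC, synthetic
thermogram[100:140, 200:240] += 1.2    # simulate a warmer (inflamed) patch

left_roi  = thermogram[100:140, 80:120]
right_roi = thermogram[100:140, 200:240]   # mirrored region across the midline

dT = right_roi.mean() - left_roi.mean()
t, p = stats.ttest_ind(left_roi.ravel(), right_roi.ravel(), equal_var=False)
print(f"deltaT = {dT:.2f} degC, Welch t = {t:.1f}, p = {p:.3g}")
print("suspicious asymmetry" if abs(dT) > 0.5 and p < 0.05 else "within normal range")
```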

Keywords: acquisition protocol, inflammatory pain detection, medical infrared thermography (MIT), statistical analysis

Procedia PDF Downloads 339
1238 An Exploratory Approach of the Latin American Migrants’ Urban Space Transformation of Antofagasta City, Chile

Authors: Carolina Arriagada, Yasna Contreras

Abstract:

Since the mid-2000s, migratory flows of Latin American migrants to Chile have been increasing constantly. Two reasons would explain why Chile is seen as an attractive country for migrants. On the one hand, traditional centres of migrant attraction such as the United States and Europe have begun to close their borders. On the other hand, Chile exhibits relative economic and political stability, which offers greater job opportunities and a better standard of living compared to the migrants' countries of origin. At the same time, the neoliberal economic model of Chile, developed under an extractive production of natural resources, has privatized urban space. The market regulates the growth of fragmented and segregated cities, so the vulnerable population is most often located in the periphery and in the marginal areas of urban space. In this respect, migrants have begun to occupy those degraded and depressed areas of the city. The problem raised is that the increase in socio-spatial segregation could also be attributed to the migrants' occupation of the city's marginal places. The aim of this investigation is to analyse the migrants' housing strategies, which are transforming the marginal areas of the city. The methodology focused on the urban experience of the migrants, through observation of spatial practices, ways of living, and network configuration in the transformation of marginal territory. The techniques applied in this study are semi-structured and in-depth interviews. The study reveals that the migrants' housing strategies for living in the marginal areas of the city are built in a paradoxical way. On the one hand, migrants choose proximity to people from their place of origin, maintaining their identity and customs. On the other hand, they choose proximity to their social and family networks, generating a sense of belonging. In conclusion, the study evidences that migration as international displacement under a globalized economic model increases socio-spatial segregation in cities, but that the transformation of marginal areas is a fundamental resource in the migrants' process of integration. The importance of this research is that it is everybody's responsibility to ensure not only the right to live in a city without any discrimination but also the integration of citizens within the social urban space of the city.

Keywords: migrations, marginal space, resignification, visibility

Procedia PDF Downloads 138
1237 Engineering Topology of Construction Ecology in Urban Environments: Suez Canal Economic Zone

Authors: Moustafa Osman Mohammed

Abstract:

Integrating sustainability outcomes requires attention to construction ecology in the design review of urban environments, so that they comply with the Earth system, which is composed of integral physical, chemical, and biological components. Naturally, exchange patterns of industrial ecology have consistent, periodic cycles that preserve flows of energy and materials in the Earth system. When engineering topology affects internal and external processes in system networks, it determines the valence of the first-level spatial outcome (i.e., project compatibility success); these instrumentalities in turn depend on the second-level outcome (i.e., participant security satisfaction). The construction ecology approach feeds energy back from resource flows between biotic and abiotic components across the Earth's ecosystems. These spatial outcomes provide an innovation, as they entail a wide range of interactions that state, regulate, and feed back a 'topology' that flows as an 'interdisciplinary equilibrium' of ecosystems. The interrelation dynamics of ecosystems perform a process in a certain location, within an appropriate time, that characterizes their unique structure in 'equilibrium patterns', such as the biosphere, collecting a composite structure of many distributed feedback flows. These interdisciplinary systems regulate their dynamics within complex structures, and these dynamic mechanisms regulate the ecosystem's physical and chemical properties, enabling a gradual, prolonged, incremental pattern that develops into a stable structure. The engineering topology of construction ecology for integrated sustainability outcomes offers an interesting tool for ecologists and engineers in the simulation paradigm, as an initial form of development structure within compatible computer software. This approach is argued for from ecology, resource savings, static load design, and financial and other pragmatic reasons, while from an artistic/architectural perspective these are not decisive. The paper describes an attempt to unify analytic and analogical spatial modeling in developing urban environments as a relational setting, using optimization software, and applies it as an example of integrated industrial ecology where the construction process is based on a topology optimization approach.

Keywords: construction ecology, industrial ecology, urban topology, environmental planning

Procedia PDF Downloads 122
1236 The Yield of Neuroimaging in Patients Presenting to the Emergency Department with Isolated Neuro-Ophthalmological Conditions

Authors: Dalia El Hadi, Alaa Bou Ghannam, Hala Mostafa, Hana Mansour, Ibrahim Hashim, Soubhi Tahhan, Tharwat El Zahran

Abstract:

Introduction: Neuro-ophthalmological emergencies require prompt assessment and management to avoid vision- or life-threatening sequelae, and some require neuroimaging. The most commonly used modalities are CT and MRI of the brain; they can be over-used when not indicated, and their yield depends on multiple factors in the clinical scenario. Methods: A retrospective cross-sectional study was conducted by reviewing the electronic medical records of patients presenting to the Emergency Department (ED) with isolated neuro-ophthalmologic complaints. For each patient, data were collected on the clinical presentation, whether neuroimaging was performed (and which type), and the result of neuroimaging. The performed neuroimaging was analyzed and its yield determined. Results: A total of 211 patients were reviewed. The complaints or symptoms at presentation were: blurry vision, change in the visual field, transient vision loss, floaters, double vision, eye pain, eyelid droop, headache, dizziness, and others such as nausea or vomiting. In the ED, a total of 126 neuroimaging procedures were performed. Ninety-four scans (74.6%) were normal, while 32 (25.4%) had relevant abnormal findings. Only two symptoms were significantly associated with abnormal imaging: blurry vision (p-value = 0.038) and visual field change (p-value = 0.014), while four physical exam findings were significantly associated with abnormal imaging: visual field defect (p-value = 0.016), abnormal pupil reactivity (p-value = 0.028), afferent pupillary defect (p-value = 0.018), and abnormal optic disc exam (p-value = 0.009). Conclusion: Risk indicators for abnormal neuroimaging in the setting of neuro-ophthalmological emergencies are blurry vision or changes in the visual field on history taking, while visual field irregularities, abnormal pupil reactivity with or without afferent pupillary defect, and abnormal optic discs are risk factors on physical examination. These findings, when present, should sway the ED physician towards neuroimaging, but individualizing each case remains of utmost importance to prevent time-consuming, resource-draining, and sometimes unnecessary workup. The study therefore suggests a well-structured, patient-centered algorithm to be followed by ED physicians.
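
As a minimal sketch of the kind of association test behind the quoted p-values (a hedged illustration: the 2x2 counts below are invented for demonstration and are not the study's data), one can cross-tabulate a finding against the imaging outcome and apply Fisher's exact test:

    # Hedged sketch: test whether a clinical finding is associated with
    # abnormal neuroimaging. The counts are illustrative, not study data.
    from scipy.stats import fisher_exact

    #                      abnormal scan   normal scan
    # finding present            12             20
    # finding absent             20             74
    table = [[12, 20],
             [20, 74]]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("finding is a candidate risk indicator for abnormal imaging")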

Keywords: emergency department, neuro-ophthalmology, neuroimaging, risk indicators

Procedia PDF Downloads 174
1235 Development of a Journal over 20 Years: Citation Analysis

Authors: Byung Lee, Charles Perschau

Abstract:

This study analyzes the development of a communication journal, the Journal of Advertising Education (JAE), over the past 20 years by examining the citations in all research articles published there. The purpose of a journal is to offer a stable and transparent forum for the presentation, scrutiny, and discussion of research in a targeted domain; this study asks whether JAE has fulfilled that purpose. The authors and readers involved in a journal need common research topics of interest. In the discipline of communication, scholars have backgrounds well beyond communication itself, since the social scientific study of communication is a relatively recent development, one that emerged after World War II, and the discipline has been heavily indebted to other social sciences, such as psychology, sociology, social psychology, and political science. When authors impart their findings and knowledge to others, their work is not done in isolation: they stand on previous studies, which are listed as sources in the bibliography. Since communication has heavily piggybacked on other disciplines, its cited sources should be as diverse as the resources it taps into. This paper analyzes 4,244 articles cited by JAE articles in its past 36 issues. Since authors reveal their intellectual linkages through bibliographic citations, analyzing the citations in journal articles reveals networks of relationships among authors, journal types, and fields in an objective and quantitative manner. The study found that easier access to information sources through electronic databases, together with growing competition among scholars for publication, seemed to lead authors to cite more articles, even though some variation existed during the examined period. The types of sources cited have also changed: authors now cite journal articles, periodicals (most of them available online), and web sources more often, while their dependence on books, conference papers, and reports has decreased. To provide a forum for discussion, a journal needs a common topic or theme, which is realized when an author writes an article about a topic and that article is cited and discussed in another article. Thus, citation of articles in the same journal is vital for a journal to form a forum for discussion, and JAE has gradually increased its citations of in-house articles, with a few fluctuations, over the years. The study also examines the specific articles, and the specific authors, most often cited. The citation analysis shows how JAE has developed into a full academic journal offering a communal forum, even though the speed of this formation has not been as fast as desired, probably because of the journal's interdisciplinary nature.
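
As a minimal sketch of how such citation linkages can be tallied (a hedged illustration: the record format and example entries below are invented placeholders, not the paper's dataset), one can count in-house citations per issue and build simple co-citation pairs:

    # Hedged sketch: tally in-house citations per issue and co-citation pairs.
    # The citation records below are invented placeholders, not JAE data.
    from collections import Counter
    from itertools import combinations

    # (citing_issue, citing_article, cited_source, cited_is_in_house)
    records = [
        (34, "A1", "JAE-2015-03", True),
        (34, "A1", "J. Advertising 42(1)", False),
        (34, "A2", "JAE-2015-03", True),
        (35, "A3", "JAE-2016-01", True),
        (35, "A3", "Journalism Q. 88(2)", False),
    ]

    # In-house (same-journal) citation share per issue.
    in_house = Counter(issue for issue, _, _, is_jae in records if is_jae)
    totals = Counter(issue for issue, _, _, _ in records)
    for issue in sorted(totals):
        print(f"issue {issue}: {in_house[issue] / totals[issue]:.0%} in-house citations")

    # Co-citation: two sources cited by the same article form a pair.
    by_article = {}
    for _, article, source, _ in records:
        by_article.setdefault(article, set()).add(source)
    cocitations = Counter()
    for sources in by_article.values():
        for pair in combinations(sorted(sources), 2):
            cocitations[pair] += 1
    print(cocitations.most_common(3))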

Keywords: citation, co-citation, the Journal of Advertising Education, development of a journal

Procedia PDF Downloads 150
1234 Mapping Man-Induced Soil Degradation in Armenia's High Mountain Pastures through Remote Sensing Methods: A Case Study

Authors: A. Saghatelyan, Sh. Asmaryan, G. Tepanosyan, V. Muradyan

Abstract:

A major concern for Armenia has been soil degradation that has emerged as a result of unsustainable management and use of grasslands, in turn largely impacting the environment, agriculture, and ultimately human health. Hence, assessment of soil degradation is an essential and urgent objective, set out to measure its possible consequences and to develop a potential management strategy. Recently, remote sensing (RS) technologies have become an essential tool for assessing pasture degradation. This research was done with the intention of measuring how precisely Linear Spectral Unmixing (LSU) and NDVI-SMA methods estimate the soil surface components related to degradation (fractional vegetation cover (FVC), bare soil fractions, and surface rock cover) and of determining the appropriateness of these methods for mapping man-induced soil degradation in high mountain pastures. Taking into consideration the spatially complex and heterogeneous biogeophysical structure of the studied site, we used high-resolution multispectral QuickBird imagery of a pasture site in one of Armenia's rural communities, Nerkin Sasoonashen. The accuracy assessment was done by comparing the land cover abundance data derived through RS methods with ground truth land cover abundance data. A significant regression was established between the ground truth FVC estimate and both the NDVI-LSU and LSU-produced vegetation abundance data (R2 = 0.636 and R2 = 0.625, respectively). For bare soil fractions, linear regression produced a general coefficient of determination of R2 = 0.708. Because of the poor spectral resolution of the QuickBird imagery, LSU failed in the assessment of surface rock abundance (R2 = 0.015). This research documents that a reduction in vegetation cover runs in parallel with an increase in man-induced soil degradation, whereas in the absence of man-induced soil degradation the bare soil fraction does not exceed a certain level. The outcomes show that the proposed method of assessing man-induced soil degradation through FVC, bare soil fractions, and field data adequately reflects the current status of soil degradation throughout the studied pasture site and may be employed as an alternative to more complicated soil degradation assessment models.
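
As a minimal sketch of the linear spectral unmixing step (a hedged illustration: the endmember spectra and four-band layout below are invented stand-ins for the QuickBird data, and non-negative least squares is one common solver choice among several):

    # Hedged sketch: per-pixel linear spectral unmixing with non-negative
    # least squares. Endmember spectra are illustrative, not field spectra.
    import numpy as np
    from scipy.optimize import nnls

    # Endmember matrix E: columns = vegetation, bare soil, rock; rows = 4 bands.
    E = np.array([
        [0.05, 0.20, 0.25],   # blue
        [0.08, 0.25, 0.27],   # green
        [0.04, 0.30, 0.30],   # red
        [0.45, 0.35, 0.32],   # near-infrared
    ])

    pixel = np.array([0.20, 0.18, 0.16, 0.40])  # observed reflectance (assumed)

    fractions, residual = nnls(E, pixel)
    fractions = fractions / fractions.sum()  # normalize to sum-to-one abundances
    for name, f in zip(("vegetation (FVC)", "bare soil", "rock"), fractions):
        print(f"{name:17s}: {f:.2f}")

Applied pixel by pixel, the vegetation column gives the FVC estimate that is regressed against the ground truth in the accuracy assessment.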

Keywords: Armenia, linear spectral unmixing, remote sensing, soil degradation

Procedia PDF Downloads 324
1233 Nutritional Status of Food Insecure Students, UWC

Authors: E. C. Swart, E. Kunneke

Abstract:

Background: Disparities in food security exist between communities and households across the country, reflecting continuing social and economic inequalities. The purpose of this study was to investigate the presence of food insecurity amongst UWC students. Method: A cross-sectional study recruited 200 students via email and cellphone from an ICS-generated list of randomly selected students aged 18-25. Data collection took place during the first two weeks of term 3. Individual appointments were made with consenting participants and conducted in English by trained BSc Dietetics students. Data were analysed using SPSS. The hunger scale used by Stats SA (October 2010) was applied, and dietary intake was assessed using a single 24-hour recall. Results: Sixty-three percent of the students reported experiencing some food insecurity, whilst 14.5% reported going hungry due to inadequate access to food. Coping mechanisms during periods of food insecurity included asking a friend, neighbour, or family member (40%); borrowing (15%); stealing (none); and casual jobs (12%). Anthropometric status did not differ statistically significantly by food security status. A statistically significantly greater proportion of Xhosa-speaking students reported inadequate money for food. Students residing in residences off campus appear to be the least food secure in terms of money available and limiting food intake, whilst those residing at home are less food insecure. Similar proportions of students who receive bursaries or whose parents pay reported going hungry, whilst those who support themselves never go hungry. The mean nutrient intake during the previous 24 hours of students who reported inadequate resources to buy food, who ate less due to inadequate resources, or who went hungry differed statistically significantly only for vitamin B (going hungry) and for fibre (money shortage). In general, nutrient intake was lower for those who reported eating less and going hungry, except for added sugar, vitamin A, and folate (going hungry), and energy, fibre, iron, riboflavin, and folate (eating less). For students who reported inadequate money to buy food, the mean nutrient intake was higher, except for calcium and thiamin. The mean body mass index of this group of students was also higher, even though the difference was not statistically significant. Conclusion: Hunger is present on campus; however, a single 24-hour recall did not confirm statistically significantly lower nutrient intakes for students who reported different levels of food insecurity.

Keywords: anthropometry, dietary intake, nutritional status, students

Procedia PDF Downloads 369
1232 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to fit within a reduced volume (JWST) or an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and on payload volume, while at the same time undergoing thermal gradients that lead to large, evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges of wavefront sensing is the non-linearity between image intensity and phase aberrations; moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known as non-linear, image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. To reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, also serves as the wavefront sensor. In this work, we study a point source, i.e. the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results motivate further study with higher aberrations and noise.
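
As a minimal sketch of the focal-plane regression idea (a hedged illustration in PyTorch: the network below is a small VGG-style stack, not the authors' exact architecture, and the PSF image size and piston/tip/tilt coefficient count are assumptions):

    # Hedged sketch: a small VGG-style CNN that regresses segment phasing
    # coefficients from a focal-plane PSF image. Shapes are illustrative.
    import torch
    import torch.nn as nn

    N_SEGMENTS, N_MODES = 6, 3           # assumed: piston/tip/tilt per segment
    N_COEFFS = N_SEGMENTS * N_MODES

    class PhasingNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(          # VGG-style conv blocks
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(              # regression head
                nn.Flatten(), nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
                nn.Linear(256, N_COEFFS),
            )

        def forward(self, psf):                     # psf: (batch, 1, 64, 64)
            return self.head(self.features(psf))

    model = PhasingNet()
    psf_batch = torch.rand(4, 1, 64, 64)            # stand-in for simulated PSFs
    target = torch.rand(4, N_COEFFS)                # stand-in phasing coefficients
    loss = nn.functional.mse_loss(model(psf_batch), target)
    loss.backward()                                 # one illustrative training step
    print(loss.item())

In practice the training pairs would come from simulated PSFs with known segment aberrations, so the network learns the inverse mapping from image to phase.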

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 96
1231 Functional Performance Needs of Individuals with Intellectual and Developmental Disabilities

Authors: Noor Taleb Ismael, Areej Abd Al Kareem Al Titi, Ala'a Fayez Jaber

Abstract:

Objectives: To investigate self-perceived functional performance among adults with IDD who are residents of Jordanian residential care and rehabilitation centers; to investigate their functional (i.e., motor and cognitive) abilities; and to determine the motor and cognitive predictors of their functional performance. Methods: The study used a cross-sectional descriptive design; the sample included 180 individuals with IDD (90 males and 90 females) aged 18 to 75 years. The inclusion criteria were: 1) adults with an IDD confirmed by their physician, and 2) residence in Jordanian residential care and rehabilitation centers affiliated with the Jordanian Ministry of Social Development. The exclusion criteria were: 1) being bedridden or totally dependent on care providers, and 2) an accident-related or acquired neurological condition. Researchers conducted semi-structured interviews to complete the outcome measures, which included the Canadian Occupational Performance Measure (COPM), the Functional Independence Measure (FIM), the Montreal Cognitive Assessment (MoCA), the Mini-Mental Status Examination (MMSE), and a sociodemographic questionnaire. Data analyses consisted of descriptive statistics, analysis of frequencies, and correlation and regression analyses. Results: Individuals with IDD showed low functional performance in all daily life areas, including self-care, productivity, and leisure, with severe cognitive impairment and poor independence and functional performance (COPM Performance M = 1.433, SD ± 0.57021; COPM Satisfaction M = 1.31, SD ± 0.54; FIM M = 3.673, SD ± 1.7918). Two predictive models were validated, for the COPM performance and FIM total scores. First, significant predictors of high self-perceived functional performance on the COPM were high FIM motor sub-scores, high FIM cognitive sub-scores, young age, and a high school educational level (R2 = 0.603, p = 0.012). Second, significant predictors of high functional capacity on the FIM were a high score on the COPM performance subscale, a high MMSE score, and a cerebral palsy (CP) diagnosis (R2 = 0.671, p < 0.001). Conclusions: Evaluating functional performance and its associated factors is important in rehabilitation for providing better services and improving health and quality of life for individuals with IDD. This study suggests that future studies target community-integrated individuals with IDD who live with their families.
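
As a minimal sketch of how such a predictive model can be fitted (a hedged illustration: the synthetic predictor columns below merely mimic FIM- and age-style variables and are not the study data):

    # Hedged sketch: fit a linear model predicting a COPM-style performance
    # score from motor/cognitive scores and age. Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 180
    fim_motor = rng.uniform(1, 7, n)
    fim_cog = rng.uniform(1, 7, n)
    age = rng.uniform(18, 75, n)
    # Synthetic outcome loosely tied to the predictors plus noise.
    copm = 0.3 * fim_motor + 0.2 * fim_cog - 0.01 * age + rng.normal(0, 0.3, n)

    X = np.column_stack([fim_motor, fim_cog, age])
    model = LinearRegression().fit(X, copm)
    print("R^2 =", round(model.score(X, copm), 3))
    print("coefficients:", model.coef_.round(3))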

Keywords: functional performance, intellectual and developmental disability, cognitive abilities, motor abilities

Procedia PDF Downloads 45
1230 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis based on historical patient data is a major area of interest in healthcare data mining: it enables doctors to intervene early to prevent problems or improve patient outcomes, and it assists in early disease detection and customized treatment planning. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies, making treatments more effective with fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors required for the number of patients they expect. This project uses models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images. Clustering algorithms such as k-means grouped patients, association rule mining identified connections between treatments and patient responses, and time series techniques supported resource management by predicting patient admissions. Together, these methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions and treat patients more effectively: it comes down to using data to improve treatment, make better choices, and streamline hospital operations for all patients.
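
As a minimal sketch of two of the techniques named above (a hedged illustration on synthetic records: the feature names and thresholds are assumptions for demonstration, not clinical guidance):

    # Hedged sketch: disease-risk prediction with logistic regression and
    # patient grouping with k-means, on synthetic (non-clinical) data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 500
    X = np.column_stack([
        rng.normal(50, 15, n),    # age
        rng.normal(120, 20, n),   # systolic blood pressure
        rng.normal(100, 25, n),   # fasting glucose
    ])
    # Synthetic label loosely correlated with the features.
    y = ((0.02 * X[:, 0] + 0.01 * X[:, 2] + rng.normal(0, 0.5, n)) > 2.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", round(clf.score(X_test, y_test), 3))

    # Group patients into cohorts, e.g. for resource planning.
    cohorts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("cohort sizes:", np.bincount(cohorts))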

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 70