Search results for: manual attendance
100 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections
Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette
Abstract:
A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not yield the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance due to the amount of flow circulating in front of that entrance. Modern roundabout capacity models also generally lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single-entry capacity and ultimately the related performance indicators. Put simply, the main objective is to calculate the average delay of each single roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The next section summarizes the old TRRL rotary capacity model and the most recent HCM 7th edition modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specifically set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern that leads to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, a collection of existing rotary intersections operating with the priority-to-circle rule has already been started at this research stage, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, urban or rural setting, and its main geometrical patterns. Finally, concluding remarks are drawn, and a discussion of some further research developments is opened.
Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation
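For illustration, here is a minimal Python sketch of the iteration-based total-capacity idea: scale a demand pattern by a common multiplier until the most loaded entry saturates, then compute an HCM-style control delay per entry. The capacity curve uses generic single-lane HCM-style coefficients, and all demands and circulating-flow shares below are invented placeholders, not the paper's actual TRRL/HCM-7 formulation.

```python
import math

def entry_capacity(v_c):
    """Entry capacity (veh/h) as a function of circulating flow v_c (veh/h)."""
    return 1380.0 * math.exp(-1.02e-3 * v_c)

def control_delay(v, c, T=0.25):
    """HCM-style control delay (s/veh) for demand v, capacity c, period T (h)."""
    x = v / c
    return (3600.0 / c
            + 900.0 * T * ((x - 1.0)
                           + math.sqrt((x - 1.0) ** 2 + (3600.0 / c) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

def capacity_multiplier(entries, circ_share, tol=1e-6):
    """Bisect for the common demand multiplier m at which the most loaded entry
    saturates (v/c = 1); under the 'total capacity' flow pattern of the paper,
    all entries would saturate together at this point."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        total = m * sum(entries)
        feasible = all(m * v <= entry_capacity(s * total)
                       for v, s in zip(entries, circ_share))
        lo, hi = (m, hi) if feasible else (lo, m)
    return lo

entries = [450.0, 380.0, 520.0, 300.0]   # entry demands, veh/h (placeholders)
circ_share = [0.25, 0.30, 0.20, 0.28]    # circulating flow per unit of total demand

m = capacity_multiplier(entries, circ_share)
total = m * sum(entries)
for i, (v, s) in enumerate(zip(entries, circ_share)):
    c = entry_capacity(s * total)
    print(f"entry {i}: v/c = {m * v / c:.2f}, delay = {control_delay(m * v, c):.1f} s/veh")
```

Bisection is valid here because raising the multiplier simultaneously raises every entry's demand and lowers its capacity, so feasibility is monotone in m.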
99 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often held back by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of the crystals by area size and perimeter. This methodological process resulted in a high segmentation capacity for graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee lower measurement error without greater data-handling effort. All in all, the method developed is a time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
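As a rough illustration of the measurement step that follows segmentation, the sketch below extracts position, area, perimeter, and lateral measures from a binary U-net output mask with OpenCV. The mask file name and the SEM pixel calibration are assumptions, not values from the study.

```python
import cv2
import numpy as np

# 'unet_mask.png' stands in for the binary U-net output (255 = crystal, 0 = background).
mask = cv2.imread("unet_mask.png", cv2.IMREAD_GRAYSCALE)
nm_per_px = 2.5  # assumed SEM calibration; would come from the microscope metadata

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
records = []
for c in contours:
    area = cv2.contourArea(c) * nm_per_px ** 2          # crystal area, nm^2
    perim = cv2.arcLength(c, closed=True) * nm_per_px   # crystal perimeter, nm
    x, y, w, h = cv2.boundingRect(c)                    # position and lateral measures
    records.append((x, y, w * nm_per_px, h * nm_per_px, area, perim))

areas = np.array([r[4] for r in records])
hist, edges = np.histogram(areas, bins=20)              # frequency distribution by area
print(f"{len(records)} crystals, median area {np.median(areas):.0f} nm^2")
```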
98 Design, Construction, Validation and Use of a Novel Portable Fire Effluent Sampling Analyser
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity is not considered, despite being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying the individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours, and particulate matter, which interact with each other. This interferes with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed, and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to be easily portable and able to run on battery or mains electricity; be calibratable at the test site; be capable of quantifying CO, CO2, O2, HCN, HBr, HCl, NOx, and SO2 accurately and reliably; be capable of independent data logging; be capable of automated switchover of 7 bubblers; withstand fire effluents; be simple to operate; allow individual bubbler times to be pre-set; and be capable of remote control. To test the analyser's functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests was conducted to assess the validity of the box analyser measurements and the data-logging abilities of the apparatus. PMMA and PA 6.6 were used to assess the validity of the box analyser measurements. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test. The analyser was set up, calibrated, and set to record smoke toxicity measurements in the doorway of the test room. The analyser operated without manual interference and successfully recorded data for 12 of the 12 tests conducted in the ISO room tests. At the end of each test, the analyser created a data file (formatted as .csv) containing the measured gas concentrations throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser's ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. The analyser is a cheaper, more accessible option for assessing smoke toxicity, mitigating the need for expensive equipment and specialist operators.
Keywords: smoke toxicity, large-scale tests, ISO 9705, analyser, novel equipment
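A schematic sketch of the pre-set bubbler switchover and .csv logging logic described above, in Python. The read_concentrations() function is a hypothetical stand-in for the analyser's hardware readout, and the bubbler durations are placeholders, not the study's settings.

```python
import csv
import time

GASES = ["CO", "CO2", "O2", "HCN", "HBr", "HCl", "NOx", "SO2"]
BUBBLER_TIMES_S = [300, 300, 600, 600, 900, 900, 1200]   # one pre-set time per bubbler

def read_concentrations():
    # placeholder: a real implementation would query the gas sensors / ADC here
    return [0.0] * len(GASES)

with open("test_run.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t_s", "bubbler"] + GASES)
    t0 = time.time()
    for bubbler, duration in enumerate(BUBBLER_TIMES_S, start=1):
        end = time.time() + duration                      # automated switchover point
        while time.time() < end:
            writer.writerow([round(time.time() - t0, 1), bubbler]
                            + read_concentrations())
            time.sleep(1.0)                               # 1 Hz logging rate
```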
97 The Implementation of Human Resource Information System in the Public Sector: An Exploratory Study of Perceived Benefits and Challenges
Authors: Aneeqa Suhail, Shabana Naveed
Abstract:
The public sector (in both developed and developing countries) has gone through various waves of radical reforms in recent decades. In Pakistan, under the influence of New Public Management (NPM) reforms, best practices of the private sector have been introduced in the public sector to modernize public organizations. The Human Resource Information System (HRIS) has been popular in the private sector and has proven to be a successful system; therefore, it is being adopted in the public sector too. However, the implementation of private business practices in public organizations is very challenging due to differences in context. This implementation becomes even more critical in Pakistan due to a centralizing tendency and lack of autonomy in public organizations. The adoption of HRIS by public organizations in Pakistan raises several questions: What challenges are faced by public organizations in the implementation of HRIS? Are benefits of HRIS such as efficiency, process integration, and cost reduction achieved? How is the previous system improved by this change, and what are the impacts? Yet it is an under-researched topic, especially in public enterprises. This study contributes to the existing body of knowledge by empirically exploring the benefits and challenges of HRIS implementation in public organizations. The research adopts a case study approach and uses qualitative data based on in-depth interviews conducted at various levels of the hierarchy, including top management, departmental heads, and employees. The unit of analysis is LESCO, the Lahore Electric Supply Company, a state-owned entity that generates, transmits, and distributes electricity to four big cities in Punjab, Pakistan. The findings of the study show that LESCO has not achieved the benefits of HRIS as established in the literature. The implementation process remained quite slow and costly. Various HR functions are still in isolation, and integration is a big challenge for the organization. Although the data is automated, the previous system of manual record maintenance and paperwork is still in use, resulting in parallel practices. The findings also identified resistance to change from top management and the labor workforce, lack of commitment and technical knowledge, and costly vendors as major barriers that affect the effective implementation of HRIS. The paper suggests some potential actions to overcome these barriers and to enhance the effective implementation of HR technology. The findings are explained in light of an institutional logics perspective: HRIS's new logic of an automated and integrated HR system is in sharp contrast with the prevailing logic of process-oriented manual data maintenance, leading to resistance to change and deadlock.
Keywords: human resource information system, technological changes, state-owned enterprise, implementation challenges
96 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are being widely utilized. The fibre reinforcing material is mainly responsible for the strength and stiffness of the composite, whilst the main role of the epoxy polymer matrix is to distribute the load applied to the fibres as well as to protect the fibres from harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite usually begins with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture, which, if it becomes entrapped, leads to voids in the subsequently cured polymer. Therefore, degassing is normally carried out after mixing, often by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is an additional process stage; if a method of mixing could be found that degassed the resin mixture at the same time, this would lead to shorter production times, more effective degassing, and fewer voids in the final polymer. In this study, the effects of four different methods for mixing and degassing the pre-polymer with hardener and accelerator were investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe. The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with ImageJ analysis software, to study morphological changes, void content, and void distribution. Three-point bending tests and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the 20 kHz ultrasonic probe gave the lowest void percentage of all the mixing methods in the study. In addition, the void percentage found when employing the 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than with magnetic stirrer mixing followed by vacuum degassing. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low-frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.
Keywords: degassing, low frequency ultrasound, polymer composites, voids
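A rough Python/OpenCV sketch of the kind of void-content estimate the ImageJ analysis above performs on a micrograph of a polished cross-section. The file name and the intensity threshold are assumptions and would be tuned per image.

```python
import cv2
import numpy as np

# Voids image darker than the surrounding resin matrix in the micrograph.
img = cv2.imread("cast_resin_section.png", cv2.IMREAD_GRAYSCALE)
_, voids = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)  # dark pixels = void

void_fraction = np.count_nonzero(voids) / voids.size * 100     # % of image area
n_labels, _ = cv2.connectedComponents(voids)                   # label distinct voids
print(f"void content: {void_fraction:.2f}% in {n_labels - 1} voids")
```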
95 Correlation between Visual Perception and Social Function in Patients with Schizophrenia
Authors: Candy Chieh Lee
Abstract:
Objective: The purpose of this study is to investigate the relationship between visual perception and social function in patients with schizophrenia. The specific aims are: 1) to explore performance in visual perception and social function in patients with schizophrenia, and 2) to examine the correlation between visual perceptual skills and social function in patients with schizophrenia. The long-term goal is to be able to provide the most adequate intervention program for promoting patients' visual perceptual skills and social function, as well as compensatory techniques. Background: Perceptual deficits in schizophrenia have been well documented in the visual system. Clinically, a considerable portion (up to 60%) of schizophrenia patients report distorted visual experiences such as altered visual perception of motion, color, size, and facial expression. Visual perception is required for the successful performance of most activities of daily living, such as dressing, making a cup of tea, driving a car, and reading. On the other hand, patients with schizophrenia usually exhibit psychotic symptoms such as auditory hallucinations and delusions, which tend to alter their perception of reality, affect the quality of their interpersonal relationships, and limit their participation in various social situations. Social function plays an important role in the prognosis of patients with schizophrenia; lower social functioning skills lead to poorer prognosis. Investigations of the relationship between social functioning and perceptual ability in patients with schizophrenia are relatively new but important, as the results could inform effective interventions on visual perception and social functioning in patients with schizophrenia. Methods: We recruited 50 participants with schizophrenia from the acute ward of a mental health hospital (Taipei City Hospital, Songde branch, Taipei, Taiwan). Participants who signed consent forms, had a diagnosis of schizophrenia, and had no organic vision deficits were included. Participants were administered the Test of Visual-Perceptual Skills (non-motor), third edition (TVPS-3) and the Personal and Social Performance scale (PSP) to assess visual perceptual skill and social function. The assessments took about 70-90 minutes to complete. Data Analysis: IBM SPSS 21.0 was used to perform the statistical analysis. First, descriptive statistics were computed to describe the characteristics and performance of the participants. Then, Pearson correlations were computed to examine the relationship between PSP and TVPS-3 scores. Results: Significant differences were found between the means of participants' TVPS-3 raw scores for each subtest and the age-equivalent raw scores provided by the TVPS-3 manual. Significant correlations were found between all 7 subtests of the TVPS-3 and the PSP total score. Conclusions: The results showed that patients with schizophrenia do exhibit visual perceptual deficits and that these deficits are correlated with social function. Understanding these characteristics of patients with schizophrenia can assist health care professionals in designing and implementing adequate rehabilitative treatment according to patients' needs.
Keywords: occupational therapy, social function, schizophrenia, visual perception
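A minimal sketch of the correlation analysis described above, in Python rather than SPSS. The file and column names are assumptions; each row would hold one participant's PSP total and seven TVPS-3 subtest raw scores.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("schizophrenia_scores.csv")
subtests = [c for c in df.columns if c.startswith("TVPS3_")]

print(df[subtests + ["PSP_total"]].describe())      # descriptive statistics
for s in subtests:
    r, p = stats.pearsonr(df[s], df["PSP_total"])   # correlation with social function
    print(f"{s}: r = {r:.2f}, p = {p:.4f}")
```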
94 Direct Assessment of Cellular Immune Responses to Ovalbumin with a Secreted Luciferase Transgenic Reporter Mouse Strain IFNγ-Lucia
Authors: Martyna Chotomska, Aleksandra Studzinska, Marta Lisowska, Justyna Szubert, Aleksandra Tabis, Jacek Bania, Arkadiusz Miazek
Abstract:
Objectives: Assessing antigen-specific T cell responses is of utmost importance for the pre-clinical testing of prototype vaccines against intracellular pathogens and tumor antigens. Mainly two types of in vitro assays are used for this purpose: 1) enzyme-linked immunospot (ELISpot) and 2) intracellular cytokine staining (ICS). Both are time-consuming, relatively expensive, and require manual dexterity. Here, we assess whether straightforward detection of luciferase activity in blood samples of transgenic reporter mice expressing a secreted Lucia luciferase under the transcriptional control of the IFN-γ promoter parallels the sensitivity of the IFN-γ ELISpot assay. Methods: The IFN-γ-LUCIA mouse strain, carrying multiple copies of the Lucia luciferase transgene under the transcriptional control of the IFN-γ minimal promoter, was generated by pronuclear injection of linear DNA. The specificity of transgene expression and mobilization was assessed in vitro using transgenic splenocytes exposed to various mitogens. The IFN-γ-LUCIA mice were immunized with 50 mg of ovalbumin (OVA) emulsified in incomplete Freund's adjuvant three times, every two weeks, by subcutaneous injection. Blood samples were collected before and five days after each immunization, and luciferase activity was assessed in blood serum. Peripheral blood mononuclear cells were separated and assessed for frequencies of OVA-specific IFN-γ-secreting T cells. Results: We show that in vitro cultured splenocytes of IFN-γ-LUCIA mice respond with 2- and 3-fold increases in secreted luciferase activity to the T cell mitogens concanavalin A and phorbol myristate acetate, respectively, but fail to respond to B cell-stimulating E. coli lipopolysaccharide. Immunization of IFN-γ-LUCIA mice with OVA led to an over 4-fold increase in luciferase activity in blood serum five days post-immunization, with a barely detectable increase in OVA-specific IFN-γ-secreting T cells by ELISpot. Second and third immunizations further increased the luciferase activity and coincidently also increased the frequencies of OVA-specific T cells by ELISpot. Conclusions: We conclude that minimally invasive monitoring of luciferase secretion in blood serum of IFN-γ-LUCIA mice constitutes a sensitive method for evaluating primary and memory Th1 responses to protein antigens. As such, this method may complement existing methods for rapid immunogenicity assessment of prototype vaccines.
Keywords: ELISpot, immunogenicity, interferon-gamma, reporter mice, vaccines
93 Forest Degradation and Implications for Rural Livelihood in Kaimur Reserve Forest of Bihar, India
Authors: Shashi Bhushan, Sucharita Sen
Abstract:
In India, forests and people are inextricably linked, since millions of people live adjacent to or within protected areas and harvest forest products. Indian forests sustain their own legacy through their climatic nature together with several social, economic, and cultural activities. People living around forest areas depend on this resource not only for their livelihoods but also for religious ceremonies, social customs, and herbal medicines, and the forest in turn shapes assets such as agricultural land, groundwater levels, and soil fertility. The assumption that fuelwood and fodder extraction, which is part of local livelihoods, leads to deforestation has so far been the dominant mainstream view in deforestation discourses. Given the occupational division across social groups in the Kaimur reserve forest, it is important to understand the differential nature of dependence on forest resources. This paper attempts to assess the nature of dependence and the impact of forest degradation on rural households across various social groups, with the additional aim of examining how the degradation of forests, leading to scarcity of forest-based resources, changes patterns of dependence across social groups. Change in forest area was calculated through land use/land cover analysis using remote sensing techniques, and data on the different forest-based economic activities carried out by households were collected through a primary survey in the Kaimur reserve forest of the state of Bihar in India. The general findings indicate that the Scheduled Tribe and Scheduled Caste communities, the most socially and economically deprived sections of rural society, are involved in a significant way in the collection of fuelwood, fodder, and fruits, both for self-consumption and for sale in the market, while other groups of society use fuelwood, fruit, and fodder for self-use only. Dependence on local forest resources for fuelwood was the primary need for all social groups due to easy accessibility and the lack of alternative energy sources. Over the last four decades, the degradation of the forest has made a direct impact on the rural community, mediated through the socio-economic structure and resulting in a shift from forest-based occupations to cultivation and manual labour in agricultural and non-agricultural activities. Thus, there is a need to review policies with respect to 'community forest management', since this study clearly shows that engagement with and dependence on forest resources is socially differentiated. Tying the degree of dependence to forest management therefore becomes extremely important from the view of 'sustainable' forest resource management. The statization of forest resources also has to keep in view the intrinsic way in which the forest-dependent population interacts with the forest.
Keywords: forest degradation, livelihood, social groups, tribal community
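An illustrative Python sketch of the land use/land cover change step: difference two classified rasters and report forest loss in hectares. The file names and the forest class code are assumptions, not details from the study.

```python
import numpy as np
import rasterio

FOREST = 1  # assumed class code for 'forest' in the classified rasters
with rasterio.open("lulc_old.tif") as a, rasterio.open("lulc_new.tif") as b:
    old, new = a.read(1), b.read(1)
    px_area_ha = abs(a.transform.a * a.transform.e) / 10_000  # m^2 per pixel -> ha

lost = np.logical_and(old == FOREST, new != FOREST).sum() * px_area_ha
kept = np.logical_and(old == FOREST, new == FOREST).sum() * px_area_ha
print(f"forest lost: {lost:.0f} ha; forest retained: {kept:.0f} ha")
```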
92 Roadway Infrastructure and Bus Safety
Authors: Richard J. Hanowski, Rebecca L. Hammond
Abstract:
Very few studies have investigated safety issues associated with motorcoach/bus operations. The current study investigates the impact that roadway infrastructure, including locality, roadway grade, traffic flow, and traffic density, has on bus safety. A naturalistic driving study involving 43 motorcoaches was conducted in the USA. Two fleets participated in the study, and over 600,000 miles of naturalistic driving data were collected. Sixty-five bus drivers participated in this study: 48 male and 17 female. The average age of the drivers was 49 years. A sophisticated data acquisition system (DAS) was installed on each of the 43 motorcoaches, and a variety of kinematic and video data were continuously recorded. The data were analyzed by identifying safety-critical events (SCEs), which included crashes, near-crashes, crash-relevant conflicts, and unintentional lane deviations. Additionally, baseline (normative driving) segments were identified and analyzed for comparison with the SCEs. This presentation highlights the need for bus safety research and the methods used in this data collection effort. With respect to elements of roadway infrastructure, this study highlights the methods used to assess locality, roadway grade, traffic flow, and traffic density. Locality was determined by manual review of the recorded video for each event and baseline and was characterized as open country, residential, business/industrial, church, playground, school, urban, airport, interstate, or other. Roadway grade was similarly determined through video review and characterized as level, grade up, grade down, hillcrest, or dip. The video was also used to determine the traffic flow and traffic density at the time of the event or baseline segment. For traffic flow, video was used to assess which of the following best characterized the event or baseline: not divided (2-way traffic), not divided (center 2-way left-turn lane), divided (median or barrier), one-way traffic, or no lanes. For traffic density, level-of-service categories were used: A1, A2, B, C, D, E, and F. Only a few of the many roadway elements coded in this study are highlighted in this abstract; other elements included lighting levels, weather conditions, roadway surface conditions, relation to junction, and roadway alignment. Note that a key component of this study was to assess the impact that driver distraction and fatigue have on bus operations. In this regard, once the roadway elements had been coded, the primary research questions addressed were (i) what environmental conditions are associated with drivers' choice to engage in tasks, and (ii) what are the odds of being in an SCE while engaging in tasks under these conditions. The study may be of interest to researchers and traffic engineers interested in the relationship between roadway infrastructure elements and safety events in motorcoach bus operations.
Keywords: bus safety, motorcoach, naturalistic driving, roadway infrastructure
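A minimal sketch of how research question (ii) could be answered with logistic regression, reporting coefficients as odds ratios. The file and column names are assumptions; each row would be one coded SCE or baseline segment.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("coded_segments.csv")   # columns: is_sce (0/1), task_engaged (0/1), locality

# odds of an SCE vs. baseline as a function of task engagement and locality
model = smf.logit("is_sce ~ task_engaged * C(locality)", data=df).fit()
print(model.summary())
print(np.exp(model.params))              # exponentiated coefficients = odds ratios
```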
91 The Structuring of the Economics of Brazilian Innovation and an Institutional Proposal for the Legal Management of Technological Risks in Global Conformity
Authors: Daniela Pellin, Wilson Engelmann
Abstract:
Brazil has sought to accelerate its development through technology and innovation in response to global influences, which it has absorbed into its internal management practices. To this end, it enacted the Brazilian Law of Innovation 13.243/2016. However, the Law overestimates economic aspects, and its application will not consider stakeholders and technological risks, because these receive no legal treatment. Economic exploitation and technological risks must be controlled within the limits of the democratic system in order to achieve better social development and to help economic agents make decisions that conform with global directions. This research understands this as a problem to be faced, given the social particularities of the country, because the North American Triple Helix theory, consolidated in developed countries, has been imported literally, with negative consequences when applied in developing countries. Because of this symptomatic scenario, adjustments are needed in the management of the Law, alongside social democratic interests, in order to increase the country's development. For this, the Government will have to adopt certain conducts, side by side with universities, civil society, and companies: promoting informational transparency, forming partnerships, creating a Comfort Letter document to ensure operation, jointly elaborating a Manual of Good Practices, and providing accountability and data dissemination. The universities, likewise, must promote informational transparency, draw up partnership contracts that generate revenue, and develop information. In addition, civil society must analyze the proposals received and offer opinions on them. Finally, companies have to provide public and transparent information about investments and economic benefits, risks, and the innovations manufactured. As a general objective, the research intends to demonstrate that efficient deployment of the helix will be possible if the innovative decision-making process follows an institutional logic. As specific objectives, the American influence must undergo some modifications to better suit the economic-legal incentives to the development of the social system. The hypothesis points to an institutional model for application to the legal system, elaborated from the emerging characteristics of the country, in such a way that technological risks can be foreseen and global conformity achieved, with attention to the full development of society as proposed by the researchers. The method of approach will be systemic-constructivist, with bibliographical review, data collection, and analysis, leading to the construction of an institutional and democratic model for the management of the Law.
Keywords: development, governance of law, institutionalization, triple helix
90 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of 3 Python scripts that can all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics that accurately reflects changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were comparable to manually analyzed results, but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection in order to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer-vision analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
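A condensed sketch of the three pipeline steps on a calcium-imaging stack, using the same OpenCV primitives named above (grayscale conversion, binary thresholding, contour detection, per-contour mean fluorescence). The input file, the Otsu thresholding choice, the minimum cell area, and the dF/F baseline percentile are assumptions, not the pipeline's actual parameters.

```python
import cv2
import numpy as np

frames = np.load("recording.npy").astype(np.float32)  # stack: T x H x W
mean_img = frames.mean(axis=0)

# step 1: grayscale conversion + binary thresholding, then contour detection
binary = cv2.threshold(mean_img.astype(np.uint8), 0, 255,
                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cells = [c for c in contours if cv2.contourArea(c) > 30.0]  # drop non-cell specks

# step 2: mean fluorescence inside each contour, for every frame
traces = []
for c in cells:
    mask = np.zeros(binary.shape, np.uint8)
    cv2.drawContours(mask, [c], -1, 255, thickness=-1)      # filled cell-body mask
    traces.append([cv2.mean(frame, mask=mask)[0] for frame in frames])
traces = np.asarray(traces)

# step 3: transient activity as dF/F against a low-percentile baseline per cell
f0 = np.percentile(traces, 10, axis=1, keepdims=True)
dff = (traces - f0) / f0
print(f"{len(cells)} cells detected; dF/F matrix shape {dff.shape}")
```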
89 Pattern of Deliberate Self-Harm Repetition in Rural Sri Lanka
Authors: P. H. G. J. Pushpakumara, Andrew Dawson
Abstract:
Introduction: Deliberate self-harm (DSH) is a major public health problem globally. Suicide rates in Sri Lanka have been among the highest national rates in the world since 1950. Previous DSH is the most important independent predictor of repetition. The estimated 1-year non-fatal repeat self-harm rate is 16.3%; Asian countries have a considerably lower rate of 10.0%. Objectives: To calculate the incidence of deliberate self-poisoning (DSP) and suicide and the repetition rate of DSP in Kurunegala District (KD), and to determine the pattern of repeated DSP in KD. Methods: The study had two components. In the first component, demographic and event-related details of DSP admissions in 46 hospitals and suicides in 28 police stations of KD were collected for 3 years from January 2011. Demographic details of the cohort of DSP patients admitted to the above hospitals in 2011 were linked with hospital admissions and police records over the two years following the index admission. Records were first screened for links with high sensitivity by computer, followed by manual matching, which was much more specific. In the second component, randomly selected DSP patients (n=438) admitted to the main referral centre, which receives 60% of DSP cases in the district, were interviewed to assess lifetime repetition. Results: There were 16,993 DSP admissions and 1,078 suicides over the three-year period. Suicide incidences in KD were 21.6, 20.7, and 24.3 per 100,000 population in 2011, 2012, and 2013. The average male-to-female ratio for suicide incidence was 5.5. DSP incidences were 205.4, 248.3, and 202.5 per 100,000 population. Male incidences were slightly greater than female incidences, with a male-to-female ratio of 1.1:1. The highest age-standardized male and female incidences were reported in the 20-24 years age group (769.6/100,000) and the 15-19 years age group (1,304.0/100,000), respectively. The male-to-female ratio of the incidence increased with age. There were 318 patients (179 male and 139 female) who attempted DSH again within two years. Female repetitive patients were younger than males (p < 0.0001; median age: males 28, females 19 years). Of these, 290 (91.2%) had only one repeat attempt, 24 (7.5%) had two, 3 (0.9%) had three, and one (0.3%) had four in that period. The one-year repetition rate was 5.6%, and the two-year repetition rate was 7.9%. The average intervals between the indexed event and the first repeat DSP event were 246.8 (SD: 223.4) days among males and 238.5 (SD: 207.0) days among females. One-fifth of first repeat events occurred within the first two weeks in both males and females. Around 50% of males and females had the second event within 28 weeks, and within the first year of the indexed event, around 70% had the second event. The first repeat event was fatal for 28 (8.8%) individuals. The ages of those who died (mean 49.7 years, SD: 15.3) were significantly higher than those with a non-fatal outcome (p < 0.0001). 9.5% had a lifetime history of DSH attempts. Conclusions: Both DSP and suicide incidences were very high in KD; however, repetition rates were lower than regional values. Prevention of repetition alone may not produce a significant impact on the prevention of DSH.
Keywords: deliberate self-harm, incidence, repetition, Sri Lanka, suicide
88 The Background of Ornamental Design Practice: Theory and Practice Based Research on Ornamental Traditions
Authors: Jenna Pyorala
Abstract:
This research looks at the principles and purposes ornamental design has served in the field of textile design. Ornamental designs are characterized by richness of detail, abundance of elements, vegetative motifs, and organic forms that flow harmoniously in complex compositions. Research on ornamental design is significant because ornaments have been overlooked and considered less meaningful and less aesthetically pleasing than minimalistic, modern designs. This is despite the fact that in many parts of the world ornaments have been an important part of cultural identification and expression for centuries. Ornament has been claimed to be superficial and merely a decorative way to hide the faults of designs. Such generalization is an incorrect interpretation of the real purposes of ornament: many ornamental patterns tell stories, present mythological scenes, or convey symbolic meanings. Historically, ornamental decorations have represented ideas and characteristics such as abundance, wealth, power, and personal magnificence. The production of fine ornaments required refined skill, an eye for intricate detail, and perseverance in compiling complex elements into harmonious compositions. For this reason, ornaments have played an important role in the advancement of craftsmanship. Even though it has been claimed that people in the Western design world have lost their relationship to ornament, the relation to it has merely changed from the practice of the craftsman to the conceptualisation of the designer. With the help of new technological tools, the production of ornaments has become faster and more efficient, demanding less manual labour. Designers who commit to this style of organic forms and vegetative motifs embrace and respect nature by representing its organically growing forms and by following its principles. The complexity of the designs is used to evoke a sense of extraordinary beauty and to stimulate the intellect by freeing the mind from predetermined interpretations. Through the study of these purposes, it can be demonstrated that complex and richer design styles are as valuable a part of the world of design as more modern design approaches. The study highlights the meaning of ornaments by presenting visual examples and literature research findings. The practice-based part of the project is the visual analysis of historical and cultural ornamental traditions such as Indian Chikan embroidery, Persian carpets, Art Nouveau, and Rococo, according to a rubric created for the purpose. The next step is the creation of ornamental designs based on the key elements of the different styles. Theoretical and practical parts are woven together in this study, which respects the long traditions of ornament and highlights the importance of these design approaches to the field, in contrast to the more commonly preferred styles.
Keywords: cultural design traditions, ornamental design, organic forms from nature, textile design
87 Detection of Some Drugs of Abuse from Fingerprints Using Liquid Chromatography-Mass Spectrometry
Authors: Ragaa T. Darwish, Maha A. Demellawy, Haidy M. Megahed, Doreen N. Younan, Wael S. Kholeif
Abstract:
The testing of drugs of abuse is essential in order to confirm their misuse. Several analytical approaches have been developed for the detection of drugs of abuse in pharmaceutical and common biological samples, but few methodologies have been created to identify them from fingerprints; Liquid Chromatography-Mass Spectrometry (LC-MS) plays a major role in this field. The current study aimed to assess the possibility of detecting some drugs of abuse (tramadol, clonazepam, and phenobarbital) from the fingerprints of drug abusers using LC-MS. The aim was extended to assess the possibility of detecting the above-mentioned drugs in the fingerprints of drug handlers for up to three days after handling the drugs. The study was conducted on randomly selected adult individuals who were either drug abusers seeking treatment at centers for drug dependence in Alexandria, Egypt, or normal volunteers who were asked to handle the studied drugs (drug handlers). Informed consent was obtained from all individuals. Participants were classified into 3 groups: a control group of 50 normal individuals (neither abusing nor handling drugs); a drug abuser group of 30 individuals who abused tramadol, clonazepam, or phenobarbital (10 individuals for each drug); and a drug handler group of 50 individuals who touched either the powder of the drugs of abuse (tramadol, clonazepam, or phenobarbital; 10 individuals for each drug) or the powder of control substances of similar appearance (white powder) that might be used in the adulteration of drugs of abuse, namely acetylsalicylic acid and acetaminophen (10 individuals for each drug). Samples were taken from the handler individuals on three consecutive days for the same individual. The diagnosis of drug abusers was based on the current Diagnostic and Statistical Manual of Mental Disorders (DSM-V) and urine screening tests using an immunoassay technique. Preliminary drug screening tests of urine samples were also done for the drug handler and control groups to indicate the presence or absence of the studied drugs of abuse. Fingerprints of all participants were then taken on filter paper previously soaked with methanol, to be analyzed by LC-MS using a SCIEX Triple Quad or QTRAP 5500 system. The concentration of drugs in each sample was calculated using the regression equations between concentration in ng/ml and peak area for each reference standard. All fingerprint samples from drug abusers showed positive LC-MS results for the tested drugs, while all samples from the control individuals showed negative results. A significant difference was noted between the concentration of the drugs and the duration of abuse. Tramadol, clonazepam, and phenobarbital were also successfully detected from the fingerprints of drug handlers up to 3 days after handling the drugs. The mean concentration of the chosen drugs of abuse among the handler group decreased as the number of days since handling increased.
Keywords: drugs of abuse, fingerprints, liquid chromatography–mass spectrometry, tramadol
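A minimal sketch of the calibration step mentioned above: fit a linear regression of peak area against known standard concentrations, then invert it to quantify an unknown. All numbers below are placeholders, not the study's calibration data.

```python
import numpy as np

std_conc = np.array([5, 10, 25, 50, 100.0])                  # ng/ml reference standards
std_area = np.array([1.2e4, 2.3e4, 5.9e4, 1.18e5, 2.40e5])   # LC-MS peak areas

slope, intercept = np.polyfit(std_conc, std_area, deg=1)     # area = slope*conc + intercept
r2 = np.corrcoef(std_conc, std_area)[0, 1] ** 2

sample_area = 7.4e4
sample_conc = (sample_area - intercept) / slope              # invert the regression
print(f"R^2 = {r2:.4f}, sample = {sample_conc:.1f} ng/ml")
```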
86 Biosensor: An Approach towards Sustainable Environment
Authors: Purnima Dhall, Rita Kumar
Abstract:
Introduction: The river Yamuna flows through the national capital territory (NCT) and is also the primary source of drinking water for the city. Delhi discharges about 3,684 MLD of sewage into the Yamuna through its 18 drains. Water quality monitoring is an important aspect of water management with respect to pollution control, and public concern and legislation are nowadays demanding better environmental control. The conventional method for estimating BOD5 has various drawbacks: it is expensive, time-consuming, and requires highly trained personnel. Stringent forthcoming regulations on wastewater have necessitated the development of analytical systems that contribute to greater process efficiency; biosensors offer the possibility of real-time analysis. Methodology: In the present study, a novel rapid method for the determination of biochemical oxygen demand (BOD) has been developed. Using the developed method, the BOD of a sample can be determined within 2 hours, compared to 3-5 days with the standard BOD3-5day assay. Moreover, the test is based on specified consortia instead of undefined seeding material; it therefore minimizes variability among results. The device is coupled to software that automatically calculates the required dilution, so prior dilution of the sample is not needed before BOD estimation. The developed BOD biosensor makes use of immobilized microorganisms to sense the biochemical oxygen demand of industrial wastewaters of low, moderate, or high biodegradability. The method is quick, robust, online, and less time-consuming. Findings: The results of extensive testing of the developed biosensor on drains demonstrate that the BOD values obtained by the device correlate with conventional BOD values, with an observed R2 value of 0.995. The reproducibility of the measurements with the BOD biosensor was within a percentage deviation of ±10%. Advantages of the developed BOD biosensor: • determines water pollution quickly, within 2 hours; • works on all types of wastewater; • has a prolonged shelf life of more than 400 days; • offers enhanced repeatability and reproducibility; • eliminates COD estimation. Distinctiveness of the technology: • bio-component: can determine the BOD load of all types of wastewater; • immobilization: increased shelf life (> 400 days), extended stability and viability; • software: reduces manual errors and estimation time. Conclusion: The BOD biosensor can be used to measure the BOD value of real wastewater samples and showed good reproducibility in its results. This technology is useful in deciding treatment strategies well ahead, thus facilitating the discharge of properly treated water into common water bodies. The developed technology has been transferred to M/s Forbes Marshall Pvt Ltd, Pune.
Keywords: biosensor, biochemical oxygen demand, immobilized, monitoring, Yamuna
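A minimal sketch of the automatic dilution calculation the software is described as performing, assuming the sensor reads reliably only up to some maximum on-scale BOD. The limit and the example values are invented placeholders.

```python
MAX_BOD = 50.0  # assumed upper limit of the sensor's linear range, mg/L

def required_dilution(expected_bod):
    """Smallest power-of-two dilution that brings the reading on-scale."""
    factor = 1
    while expected_bod / factor > MAX_BOD:
        factor *= 2
    return factor

def sample_bod(measured_bod, factor):
    """Back-calculate the undiluted sample BOD from the diluted reading."""
    return measured_bod * factor

f = required_dilution(expected_bod=600.0)            # e.g. raw drain wastewater
print(f, sample_bod(measured_bod=37.5, factor=f))    # -> 16, 600.0
```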
85 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients
Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho
Abstract:
Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information it provides, is the gold-standard exam for the diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, at nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for the analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation are extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. The purpose of this work was therefore to evaluate brain volume in MRI scans of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was brain extraction by skull stripping from the original image. In the skull stripper for brain MRI images, the algorithm registers a grayscale atlas image to the grayscale patient image; the associated brain mask is propagated using the registration transformation, and this mask is then eroded and used for a refined brain extraction based on level sets (edge of the brain-skull border with dedicated expansion, curvature, and advection terms). In the second step, brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the count to cubic centimetres (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can assist health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper
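A minimal sketch of the second step (voxel counting and conversion to cc), assuming the skull-stripping stage has already produced a binary brain mask; the file name is a placeholder.

```python
import numpy as np
import SimpleITK as sitk

mask = sitk.ReadImage("brain_mask.nii.gz")           # output of the skull-stripping stage
voxels = np.count_nonzero(sitk.GetArrayFromImage(mask))

sx, sy, sz = mask.GetSpacing()                       # voxel size in mm, from the header
volume_cc = voxels * sx * sy * sz / 1000.0           # mm^3 -> cubic centimetres
print(f"brain volume: {volume_cc:.1f} cc")
```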
84 Training Manual of Organic Agriculture Farming for the Farmers: A Case Study from Kunjpura and Surrounding Villages
Authors: Rishi Pal Singh
Abstract:
In the Indian scenario, organic agriculture is growing through the conscious efforts of inspired people who are able to create the most promising relationship between the earth and humankind. Nowadays, the major challenges are its entry into the policy-making framework, its entry into the global market, and weak sensitization among farmers. During the last two decades, however, the contamination of environment and food linked with poor agricultural techniques has turned the mindset of farmers towards organic farming. In view of this, a small-scale project has been set up to support 20 farmers from Kunjpura and the surrounding villages in organic farming. The project has been running for the last 3 crop seasons (starting from October 2016), and it has been found that organic farming can meet demand alongside the overall development of rural areas. Farmers in this project work on the principle that nature never demands unreasonable quantities of water or mining, nor the destruction of microbes and other organisms. As per Organic Monitor estimates, global sales have reached into the billions. In this initiative, wheat and rice were first considered for farming, and it was observed that crop production has grown almost 10-15% per year compared to the previous crop. This is linked not only with profit or loss but also with an emphasis on health, ecology, fairness, and the care and enrichment of the soil. Several techniques were used, such as the use of biological fertilizers instead of chemicals, multiple cropping, temperature management, rainwater harvesting, development of the farmers' own seed, vermicompost, and the integration of animals. In the first year, to increase the fertility of the land, legumes (moong, cowpea, and red gram) were grown in strips for 60, 90, and 120 days. Simultaneously, a mixture of compost and vermicompost in the proportion of 2:1 was applied at the rate of 2.0 ton per acre, enriched with 5 kg Azotobacter and 5 kg Rhizobium biofertilizer. To supply phosphorus, 250 kg of rock phosphate was used. After one month, jivamrut can be applied with the irrigation water or during rainy days. In the next season, a compost-vermicompost mixture at 2.5 ton/ha was used for all types of crops. After the completion of this treatment, the soil is ready for high-value ordinary/horticultural crops, and the amounts of the above-stated biofertilizers, compost-vermicompost, and rock phosphate may be increased in place of other fertilizers. The significance of the project is that the farmers now believe in cultural alternatives (use of their own disease-free seed, organic pest management), maintenance of biodiversity, crop rotation practices, and the health benefits of organic farming. Such organic farming projects should be set up at the gram/block/district administration level.
Keywords: organic farming, Kunjpura, compost, bio-fertilizers
83 Self-Supervised Learning for Hate-Speech Identification
Authors: Shrabani Ghosh
Abstract:
Automatic offensive-language detection in social media has become a pressing task in today's NLP. Manual offensive-language detection is tedious and laborious work, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in different ways, such as in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers like BERT and RoBERTa are further pre-trained with masked language modeling (MLM) tasks and fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on cross-domain similarity. Different distance measures such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL have been used to estimate domain similarity; as expected, in-domain distances are small, and between-domain distances are large. Previous findings show that a pretrained masked language model fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain performance on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce. A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs, and self-supervised attention learning shows better performance as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner; in the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and also with optimized outcomes obtained from different optimization techniques.
Keywords: attention learning, language model, offensive language detection, self-supervised learning
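A minimal sketch of one of the domain-similarity measures named above, linear-kernel MMD, in Python. The tensors stand in for sentence embeddings (e.g., BERT pooled vectors) of Twitter (source) and Gab (target) posts; the random data is a placeholder.

```python
import torch

def mmd_linear(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear-kernel MMD: squared distance between the two domains' mean embeddings."""
    delta = x.mean(dim=0) - y.mean(dim=0)
    return delta.dot(delta)

src = torch.randn(512, 768)   # stand-ins for real source-domain embeddings
tgt = torch.randn(512, 768)   # stand-ins for real target-domain embeddings
print(mmd_linear(src, tgt).item())   # small value suggests the domains are close
```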
82 Persistent Ribosomal In-Frame Mis-Translation of Stop Codons as Amino Acids in Multiple Open Reading Frames of a Human Long Non-Coding RNA
Authors: Leonard Lipovich, Pattaraporn Thepsuwan, Anton-Scott Goustin, Juan Cai, Donghong Ju, James B. Brown
Abstract:
Two-thirds of human genes do not encode any known proteins. Aside from long non-coding RNA (lncRNA) genes with recently-discovered functions, the ~40,000 non-protein-coding human genes remain poorly understood, and a role for their transcripts as de-facto unconventional messenger RNAs has not been formally excluded. Ribosome profiling (Riboseq) predicts translational potential, but without independent evidence of proteins from lncRNA open reading frames (ORFs), ribosome binding of lncRNAs does not prove translation. Previously, we mass-spectrometrically documented translation of specific lncRNAs in human K562 and GM12878 cells. We now examined lncRNA translation in human MCF7 cells, integrating strand-specific Illumina RNAseq, Riboseq, and deep mass spectrometry in biological quadruplicates performed at two core facilities (BGI, China; City of Hope, USA). We excluded known-protein matches. UCSC Genome Browser-assisted manual annotation of imperfect (tryptic-digest-peptides)-to-(lncRNA-three-frame-translations) alignments revealed three peptides hypothetically explicable by 'stop-to-nonstop' in-frame replacement of stop codons by amino acids in two ORFs of the lncRNA MMP24-AS1. To search for this phenomenon genomewide, we designed and implemented a novel pipeline, matching tryptic-digest spectra to wildcard-instead-of-stop versions of repeat-masked, six-frame, whole-genome translations. Along with singleton putative stop-to-nonstop events affecting four other lncRNAs, we identified 24 additional peptides with stop-to-nonstop in-frame substitutions from multiple positive-strand MMP24-AS1 ORFs. Only UAG and UGA, never UAA, stop codons were impacted. All MMP24-AS1-matching spectra met the same significance thresholds as high-confidence known-protein signatures. Targeted resequencing of MMP24-AS1 genomic DNA and cDNA from the same samples did not reveal any mutations, polymorphisms, or sequencing-detectable RNA editing. This unprecedented apparent gene-specific violation of the genetic code highlights the importance of matching peptides to whole-genome, not known-genes-only, ORFs in mass-spectrometry workflows, and suggests a new mechanism enhancing the combinatorial complexity of the proteome. Funding: NIH Director's New Innovator Award 1DP2-CA196375 to LL.
Keywords: genetic code, lncRNA, long non-coding RNA, mass spectrometry, proteogenomics, ribo-seq, ribosome, RNAseq
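A toy Python version of the pipeline's key trick: translate a sequence in all six frames, writing 'X' (a wildcard residue) wherever a stop codon occurs, so that tryptic peptides can match across in-frame stops. The example sequence is invented; a real run would operate on repeat-masked whole-genome sequence.

```python
from itertools import product

# Standard genetic code for codons in TCAG x TCAG x TCAG order; '*' marks stops.
CODONS = {"".join(c): aa for c, aa in zip(
    product("TCAG", repeat=3),
    "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")}

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def six_frame_wildcard(seq):
    """Yield the six frame translations with stops replaced by the wildcard 'X'."""
    for strand in (seq, revcomp(seq)):
        for offset in range(3):
            codons = [strand[i:i + 3] for i in range(offset, len(strand) - 2, 3)]
            yield "".join("X" if CODONS[c] == "*" else CODONS[c] for c in codons)

for frame in six_frame_wildcard("ATGGCATAGGGCTGATTT"):
    print(frame)   # first frame: MAXGXF (two in-frame stops become wildcards)
```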
Procedia PDF Downloads 235
81 Construction and Analysis of Tamazight (Berber) Text Corpus
Authors: Zayd Khayi
Abstract:
This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. In order to fill this gap, even if only in a small way, we constructed a diachronic corpus of the Tamazight language and elaborated a program tool for it. This work is devoted to constructing that tool to analyze different aspects of Tamazight, with its different dialects used in the north of Africa, specifically in Morocco. It focuses on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin script was a good choice because of the many sources that use it. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts that cover a long historical period. It is free, and it will be useful for further investigations. The texts were transformed into XML format, with standardization as the goal. The corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, the original user interface and software prototype were developed by combining web-design technologies and Python. The corpus provides users with the ability to distinguish easily between feminine/masculine nouns and verbs. The interface is available in three languages: TMZ, FR, and EN. The selected texts were not initially categorized; this classification was done manually, since within corpus linguistics there is currently no commonly accepted approach to the classification of texts. The texts are distinguished into ten categories. To describe and represent the texts in the corpus, we elaborated an XML structure according to the TEI recommendations. The search function retrieves the types of words searched for, such as feminine/masculine nouns and verbs. Nouns are divided into two genders. The neutral form of a word corresponds to the masculine, while the feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., Tarbat (girl), Tamtut (woman), Taxamt (tent), and Tislit (bride). However, there are some words whose feminine form contains only the prefix t- and the suffix -a, e.g., Tasa (liver), tawja (family), and tarwa (progenitors). Generally, Tamazight masculine words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i': Asklu (tree), udi (cheese), ighef (head). Verbs in the corpus in the first person singular and plural have the suffixes 'agh', 'ex', 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool supports the following operations on this corpus: listing all tokens; listing unique words; computing lexical diversity; and answering different grammatical requests. To conclude, this corpus has so far focused on a small group of parts of speech in the Tamazight language: verbs and nouns. Work is still ongoing on adjectives, pronouns, adverbs, and others. Keywords: Tamazight (Berber) language, corpus linguistics, grammar rules, statistical methods
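The morphological cues above lend themselves to simple rule-based tagging. The following sketch is an illustrative toy rather than the project's tool; it encodes the abstract's rules (feminine t-...-t or t-...-a circumfixes, masculine a-/u-/i- prefixes, first-person verb suffixes -agh/-ex/-egh) and will naturally misfire on words outside those patterns.

```python
# Illustrative sketch (not the project's software): tag Latin-script Tamazight
# tokens using the morphological cues described in the abstract.
def tag_token(token):
    t = token.lower()
    # First-person verbs carry the suffixes -agh, -ex, or -egh
    # (ghrex 'I study', fegh 'I go out', nadagh 'I call').
    if t.endswith(("agh", "ex", "egh")):
        return "verb:1st-person"
    # Feminine nouns: circumfix t-...-t (Tarbat, Tamtut) or t-...-a (Tasa, tawja).
    if len(t) > 2 and t.startswith("t") and t.endswith(("t", "a")):
        return "noun:feminine"
    # Masculine nouns typically begin with a-, u-, or i- (Asklu, udi, ighef).
    if t[0] in "aui":
        return "noun:masculine"
    return "unknown"

for word in ["Tarbat", "Tasa", "Asklu", "udi", "ighef", "ghrex", "nadagh"]:
    print(f"{word:8s} -> {tag_token(word)}")
```

These are heuristics, not a morphological analyzer; a production tool would combine them with a lexicon and disambiguation rules.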
Procedia PDF Downloads 64
80 Furniko Flour: An Emblematic Traditional Food of Greek Pontic Cuisine
Authors: A. Keramaris, T. Sawidis, E. Kasapidou, P. Mitlianga
Abstract:
Although the gastronomy of the Greeks of Pontus is highly prominent, it has not received the same level of scientific analysis as another local cuisine of Greece, that of Crete. We therefore focused our research on Greek Pontic cuisine to shed light on its unique recipes, food products, and, ultimately, its features. The Greeks of Pontus, who lived for a long time in the northern part (Black Sea region) of contemporary Turkey and now widely inhabit northern Greece, have one of Greece's most distinguished local cuisines. Although their gastronomy is simple, it features several inspiring delicacies. It has been a century since they immigrated to Greece, yet their gastronomic culture remains a critical component of their collective identity. As a first step toward comprehending Greek Pontic cuisine, we investigated the production of one of its most renowned traditional products, furniko flour. In this project, we targeted residents of Western Macedonia, a province in northern Greece with a large population of descendants of the Greeks of Pontus who are primarily engaged in agricultural activities. In this quest, we approached a descendant of the Greeks of Pontus who is involved in the production of furniko flour and who consented to show us the entire process of its production as we participated in it. Furniko flour is made from non-hybrid heirloom corn. It is harvested by hand when the moisture content of the seeds is low enough to make them suitable for roasting. Manual harvesting entails removing the cob from the plant and detaching the husks. The harvested cobs are then roasted for 24 hours in a traditional wood oven. The roasted cobs are collected and stored in sacks. The next step is to extract the seeds, which is accomplished by rubbing the cobs. The seeds should ideally be ground in a traditional stone hand mill. The result is an aromatic, dark golden furniko flour, which is used to cook havitz. Alongside the preparation of the furniko flour, we also recorded the cooking process of havitz (a porridge-like cornflour dish), a savory delicacy that is simple to prepare and one of the most delightful dishes in Greek Pontic cuisine. According to the research participant, havitz is a highly nutritious dish due to the ingredients of furniko flour. In addition, he argues that preparing havitz is a great way to bring families together, share stories, and revisit fond memories. In conclusion, this study illustrates the traditional preparation of furniko flour and its use in various traditional recipes as an initial effort to highlight the elements of Pontic Greek cuisine. A continuation of the current study could be the analysis of the chemical components of furniko flour to evaluate its nutritional content. Keywords: furniko flour, Greek Pontic cuisine, havitz, traditional foods
Procedia PDF Downloads 136
79 Monitoring and Evaluation of Web-Services Quality and Medium-Term Impact on E-Government Agencies' Efficiency
Authors: A. F. Huseynov, N. T. Mardanov, J. Y. Nakhchivanski
Abstract:
This practical research is aimed at improving the management quality and efficiency of public administration agencies providing e-services. The monitoring system developed will provide a continuous review of the websites' compliance with selected indicators, their evaluation based on those indicators, and a ranking of services according to the quality criteria. The responsible departments in the government agencies were surveyed; the questionnaire covers issues of management and feedback, the e-services provided, and the application of information systems. By analyzing the main affecting factors and barriers, recommendations will be given that lead to the relevant decisions to strengthen the state agencies' competencies for the management and provision of their services. Component 1: E-services monitoring system. Three separate monitoring activities are proposed to be executed in parallel: (1) Continuous tracking of e-government sites using a built-in web-monitoring program; this program generates several quantitative values, which relate mainly to the technical characteristics and performance of the websites. (2) Expert assessment of e-government sites in accordance with two general criteria. Criterion 1: technical quality of the site. Criterion 2: usability/accessibility (load, see, use). Each high-level criterion is in turn subdivided into several sub-criteria, such as: the fonts and the color of the background (is it readable?), W3C coding standards, availability of robots.txt and the site map, the search engine, the feedback/contact mechanisms, and the security mechanisms. (3) An online survey of the users/citizens: a small group of questions embedded in the e-service websites. The questionnaires comprise information concerning navigation, users' experience with the website (whether it was positive or negative), etc. Automated monitoring of websites on its own cannot capture the whole evaluation process and should therefore be seen as a complement to expert manual web evaluations. All of the separate results were integrated to provide the complete evaluation picture. Component 2: Assessment of the agencies'/departments' efficiency in providing e-government services. The relevant indicators to evaluate the efficiency and effectiveness of e-services were identified; the survey was conducted in all the governmental organizations (ministries, committees, and agencies) that provide electronic services for citizens or businesses; the quantitative and qualitative measures cover the following sections of activity: e-governance, e-services, feedback from the users, and the information systems at the agencies' disposal. Main results: 1. The software program and the set of indicators for website evaluation have been developed, and the results of pilot monitoring have been presented. 2. The (internal) efficiency of the e-government agencies has been evaluated based on the survey results, with practical recommendations related to human potential, the information systems used, and the e-services provided. Keywords: e-government, web-sites monitoring, survey, internal efficiency
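As a rough illustration of the quantitative values such a built-in web-monitoring program can generate, the sketch below (an assumed stand-in, not the system described) measures response time and probes for robots.txt and a site map using only the Python standard library.

```python
# Hedged sketch of automated site checks of the kind described in the abstract:
# response status, load time, and robots.txt / sitemap availability.
import time
import urllib.error
import urllib.request

def check_site(base_url, timeout=10):
    report = {"url": base_url}
    start = time.monotonic()
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            report["status"] = resp.status
    except urllib.error.URLError as exc:
        report["status"] = f"error: {exc.reason}"
    report["load_seconds"] = round(time.monotonic() - start, 3)
    # Presence checks for two of the sub-criteria named in the abstract.
    for name in ("robots.txt", "sitemap.xml"):
        try:
            with urllib.request.urlopen(f"{base_url.rstrip('/')}/{name}",
                                        timeout=timeout) as resp:
                report[name] = resp.status == 200
        except urllib.error.URLError:
            report[name] = False
    return report

print(check_site("https://www.example.org"))
```

Criteria such as readability or W3C conformance would still require the expert assessment described above; the automated pass only covers the mechanically checkable values.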
Procedia PDF Downloads 304
78 Large-Scale Production of High-Performance Fiber-Metal-Laminates by Prepreg-Press-Technology
Authors: Christian Lauter, Corin Reuter, Shuang Wu, Thomas Troester
Abstract:
Lightweight construction has become more and more important over the last decades in several applications, e.g., in the automotive and aircraft sectors. This is the result of economic and ecological constraints on the one hand and increasing safety and comfort requirements on the other. In the field of lightweight design, different approaches are used, owing to the specific requirements of the technical systems. The use of endless carbon-fiber-reinforced plastics (CFRP) offers the largest weight-saving potential, sometimes more than 50% compared to conventional metal constructions. However, industrial applications remain very limited because of the cost-intensive manufacturing of the fibers and the production technologies. Other disadvantages of pure CFRP structures concern quality control and damage resistance. One approach to meeting these challenges is hybrid materials, in which CFRP and sheet metal are combined at the material level. This opens up new opportunities for innovative process routes. Hybrid lightweight design results in lower costs due to optimized material utilization and the possibility of integrating the structures into the existing production processes of automobile manufacturers. Recent and current research has pointed out the advantages of two-layered hybrid materials, i.e., the possibility of realizing structures with tailored mechanical properties or of dividing the curing cycle of the epoxy resin into two steps. Current research work at the Chair for Automotive Lightweight Design (LiA) at Paderborn University focuses on production processes for fiber-metal-laminates. The aim of this work is the development and qualification of a large-scale production process for high-performance fiber-metal-laminates (FML) for industrial applications in the automotive and aircraft sectors. To this end, the prepreg-press-technology is used, in which pre-impregnated carbon fibers and sheet metals are formed and cured in a closed, heated mold. The investigations focus, e.g., on the realization of short process chains and cycle times, the reduction of time-consuming manual process steps, and the reduction of material costs. This paper first gives an overview of the principal steps of the production process. Afterwards, experimental results are discussed, concentrating on the influence of different process parameters on the mechanical properties and laminate quality and on the identification of process limits. Finally, the advantages of this technology compared to conventional FML production processes and other lightweight design approaches are set out. Keywords: composite material, fiber-metal-laminate, lightweight construction, prepreg-press-technology, large-series production
Procedia PDF Downloads 240
77 Effect of Silica Nanoparticles on Three-Point Flexural Properties of Isogrid E-Glass Fiber/Epoxy Composite Structures
Authors: Hamed Khosravi, Reza Eslami-Farsani
Abstract:
Increased interest in lightweight and efficient structural components has created the need to select materials with improved mechanical properties. Composite materials are therefore widely used in many applications, owing to their durability, high strength and modulus, and low weight. Among the various composite structures, grid-stiffened structures are extensively considered in aerospace and aircraft applications because of their higher specific strength and stiffness, higher impact resistance, superior load-bearing capacity, ease of repair, and excellent energy absorption capability. Although there is a good number of publications on the design aspects and fabrication of grid structures, to our knowledge little systematic work has been reported on modifying their material to improve their properties. Therefore, the aim of this research is to study the reinforcing effect of silica nanoparticles on the flexural properties of epoxy/E-glass isogrid panels under a three-point bending test. Samples containing 0, 1, 3, and 5 wt.% of silica nanoparticles, with 44 and 48 vol.% of glass fibers in the ribs and skin components, respectively, were fabricated using a manual filament winding method. Ultrasonic and mechanical routes were employed to disperse the nanoparticles within the epoxy resin. To fabricate the ribs, the unidirectional fiber rovings were impregnated with the matrix mixture (epoxy + nanoparticles) and then laid up into the grooves of a silicone mold layer by layer. Then, four plies of woven fabric, impregnated with the same matrix mixture, were layered on top of the ribs to produce the skin part. To ensure complete curing and maximum strength, the samples were tested after 7 days of holding at room temperature. According to the load-displacement graphs, the following trend was observed for all samples when loaded from the skin side: after an initial linear region and a load peak, the curve dropped abruptly and then showed a typical energy absorption region. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure associated with the load peak. The results showed that the flexural properties of the nanocomposite samples were always higher than those of the nanoparticle-free sample. The maximum enhancement in maximum flexural load and energy absorption was found for the incorporation of 3 wt.% of nanoparticles. Furthermore, the flexural stiffness increased continually with increasing silica loading. In conclusion, this study suggests that the addition of nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures. Keywords: grid-stiffened composite structures, nanocomposite, three-point flexural test, energy absorption
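For readers who want to reproduce this kind of post-processing, the sketch below runs on invented load-displacement data, not the study's measurements. Note that the classical relation sigma = 3FL/(2bd^2) holds for a solid rectangular beam and is only indicative for a grid-stiffened panel; absorbed energy is taken as the area under the load-displacement curve.

```python
import numpy as np

# Assumed specimen geometry and a synthetic load-displacement curve; all
# numbers are placeholders, not the study's data.
span, width, thickness = 100.0, 50.0, 4.0                # mm
displacement = np.linspace(0.0, 8.0, 200)                # mm
load = 900.0 * displacement * np.exp(-displacement / 3)  # N, synthetic

peak = load.argmax()
# Beam-theory stress at peak load: 3FL / (2 b d^2), in MPa (N/mm^2).
flexural_stress = 3 * load[peak] * span / (2 * width * thickness ** 2)
# Initial stiffness: slope of the early linear region, in N/mm.
stiffness = np.polyfit(displacement[:20], load[:20], 1)[0]
# Absorbed energy: trapezoidal area under the curve; N*mm -> J.
energy_j = float(np.sum(np.diff(displacement)
                        * (load[:-1] + load[1:]) / 2)) / 1000.0

print(f"max load: {load[peak]:.0f} N")
print(f"flexural stress at peak: {flexural_stress:.1f} MPa")
print(f"initial stiffness: {stiffness:.0f} N/mm")
print(f"absorbed energy: {energy_j:.2f} J")
```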
Procedia PDF Downloads 341
76 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder
Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi
Abstract:
With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has risen. One such class is neurological disorders, which are rampant among the old-age population and increasing at an alarming rate. Most neurological disorder patients suffer from some movement disorder affecting parts of their body. Tremor is the most common movement disorder, affecting the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson's disease patients, but a tremor can also occur on its own (essential tremor). Patients suffering from tremor face enormous trouble in performing daily activities and always need a caretaker for assistance. In the clinic, tremor is assessed through manual clinical rating tasks such as the Unified Parkinson's Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also reported difficulty in differentiating a Parkinsonian tremor from a pure tremor, a distinction essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that keeps checking their health condition, coordinating with clinicians and caretakers for early diagnosis and assistance in performing daily activities. Our research focuses on developing a system for automatic classification of tremor that can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device. Four tasks were designed in accordance with the Unified Parkinson's Disease motor rating scale, which is used to assess rest, postural, intentional, and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based, and fast-Fourier-transform-based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantification of tremor severity. A minimum-covariance maximum-correlation energy comparison index was also developed and used as the input feature for various classification tools for distinguishing the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved using K-nearest neighbors and Support Vector Machine classifiers. Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, Parkinsonian tremor, essential tremor
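A hedged sketch of the classification step is given below, using synthetic tri-axial windows in place of the clinical recordings: per-axis RMS and dominant-frequency features feed the two classifiers named above. The sampling rate and the tremor frequency bands are assumptions drawn from typical published ranges (essential tremor roughly 6-8 Hz in action, Parkinsonian rest tremor roughly 4-6 Hz).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

FS = 100  # Hz, assumed accelerometer sampling rate

def make_window(freq, rng, n=200):
    """Synthetic 2 s tri-axial window: one tremor sinusoid per axis + noise."""
    t = np.arange(n) / FS
    return np.stack([np.sin(2 * np.pi * freq * t + p)
                     + 0.3 * rng.normal(size=n) for p in (0.0, 1.0, 2.0)])

def features(win):
    """Per-axis RMS plus per-axis dominant frequency from the FFT."""
    rms = np.sqrt((win ** 2).mean(axis=1))
    spec = np.abs(np.fft.rfft(win, axis=1))
    peak_hz = np.fft.rfftfreq(win.shape[1], 1 / FS)[spec.argmax(axis=1)]
    return np.concatenate([rms, peak_hz])

rng = np.random.default_rng(1)
X = np.array([features(make_window(rng.uniform(6, 8), rng)) for _ in range(80)]
             + [features(make_window(rng.uniform(4, 6), rng)) for _ in range(80)])
y = np.array([0] * 80 + [1] * 80)  # 0 = essential, 1 = Parkinsonian

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf")):
    print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))
```

The real system additionally uses wavelet and cross-correlation features and the energy comparison index described above; this toy keeps only the smallest feature set that makes the two-class setup runnable.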
Procedia PDF Downloads 154
75 Applying Biosensors' Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle
Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores
Abstract:
This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone interfaces that go beyond direct manual control. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. The raw voltages from each sensor were read through an Arduino Uno, and a data processing algorithm was developed to interpret the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, resulting in control of the drone's motion with left and right movements. This paper further investigates adding up to three sensors to differentiate between hand gestures controlling the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand-swipe-right motion, and a flexing position. The MATLAB software was utilized to collect, process, and analyze the signals from the sensors, and its machine learning tooling was used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then used to train an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. Once an output probability was greater than or equal to 80% for a specific target class, the drone performed the expected motion. Each movement was then sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone responded successfully in real time to predefined command inputs processed by the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of hand gestures, the details of the ANN architecture, and the resulting confusion matrices. Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino
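The sketch below illustrates the described flow in Python rather than MATLAB, with synthetic windows standing in for the MyoWare recordings: mean, root mean square, and standard deviation per two-second window, an ANN with one hidden layer of 10 neurons, and the 80% probability gate. Window shapes and amplitudes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Gesture classes from the abstract, each mapped to one drone motion.
GESTURES = ["rest (land)", "thumbs up (up)", "swipe right (right)", "flex (left)"]

def window_features(window):
    """window: (n_samples, n_sensors) raw EMG voltages for one 2 s interval.
    Returns the mean, RMS, and standard deviation per sensor, concatenated."""
    mean = window.mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    std = window.std(axis=0)
    return np.concatenate([mean, rms, std])

# Synthetic training data: assumed per-gesture activation amplitudes.
rng = np.random.default_rng(2)
X, y = [], []
for label, amp in enumerate([0.1, 0.6, 1.0, 1.5]):
    for _ in range(60):
        X.append(window_features(amp * rng.random((16, 3))))
        y.append(label)
X, y = np.array(X), np.array(y)

# One hidden layer of 10 neurons, as described in the abstract.
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X, y)

probs = ann.predict_proba(X[:1])[0]
best = probs.argmax()
if probs[best] >= 0.80:  # the 80% gate before a command is sent to the drone
    print("send command for:", GESTURES[best])
else:
    print("no confident match; drone holds position")
```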
Procedia PDF Downloads 174
74 Using Real Truck Tours Feedback for Address Geocoding Correction
Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle
Abstract:
When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent in the tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream data used to optimize a transporter's tours, such as the customers' real constraints, their addresses, and their GPS coordinates, is free from errors. However, in real transporter situations, upstream data is often of bad quality because of address geocoding errors and the irrelevance of addresses received through EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates, and even with a good geocoder, an inaccurate address leads to a bad geocoding. For instance, when a geocoder has trouble geocoding an address, it returns the coordinates of the city center. Another obvious geocoding issue is that the maps used by geocoders are not regularly updated, so new buildings may not exist on the maps until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and will lead to bad, incoherent solution tours, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon. We work with Upsilon's truck route data to carry out our experiments: the trucks are equipped with TomTom GPS units that continuously save their tour data (positions, speeds, tachograph information, etc.), which we retrieve to extract the real truck routes to work with. The aim of this work is to use the experience of the driver and the feedback of the real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct the badly geocoded ones. Thereby, when a vehicle makes its tour, it should have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: we automatically correct an average of 70% of the GPS coordinates of a tour's addresses. The remaining coordinates are corrected manually, with the system giving the user indications to help correct them. This study shows the importance of taking the trucks' feedback into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and GPS coordinates plays a major role in tour optimization, and unfortunately, address writing errors are very frequent. This feedback is naturally and usually taken into account by transporters (by asking drivers, calling customers, etc.) to learn about their tours and bring corrections to upcoming tours; we developed a method to do a large part of that automatically. Keywords: driver experience feedback, geocoding correction, real truck tours
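A minimal sketch of the reconciliation idea follows; it is illustrative, not Tedies or Upsilon code, and both the 150 m tolerance and the coordinates are assumptions. The geocoded position of a customer is compared with the GPS position the truck actually recorded at the stop, and is replaced when the two disagree by more than the tolerance.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

THRESHOLD_M = 150.0  # assumed tolerance between geocode and observed stop

def reconcile(geocoded, observed_stop):
    """Return (validated_position, corrected) for one visited customer."""
    if haversine_m(*geocoded, *observed_stop) <= THRESHOLD_M:
        return geocoded, False      # geocode confirmed by the real tour
    return observed_stop, True      # trust the driver's actual stop instead

geocode = (47.3220, 5.0415)  # hypothetical geocoder output (city centre)
stop = (47.3391, 5.0528)     # where the truck actually parked
print(reconcile(geocode, stop))
```

Run over every completed tour, such a rule embodies the "wrong at most once" property: after the first visit, the observed stop replaces the bad geocode for all subsequent tours.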
Procedia PDF Downloads 674
73 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repeatable test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce. Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
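For reference, the 'Subset Accuracy' criterion can be stated in a few lines. The sketch below uses a toy label matrix (rows are test step specifications, columns are automation components) rather than the evaluated systems.

```python
import numpy as np

def subset_accuracy(y_true, y_pred):
    """Fraction of test steps whose full set of automation components is
    predicted exactly; a single wrong component makes the whole row wrong."""
    return np.all(y_true == y_pred, axis=1).mean()

# Toy label matrix: 1 means the component is used to implement the test step.
y_true = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 1], [0, 0, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 1]])
print(subset_accuracy(y_true, y_pred))  # 0.75: one of four rows mismatches
```

For multi-label input, scikit-learn's accuracy_score computes the same quantity, which is why it is the strictest of the common multi-label criteria.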
Procedia PDF Downloads 132
72 The Effects of Heavy Metal and Aromatic Hydrocarbon Pollution on Bees
Authors: Katarzyna Zięba, Hajnalka Szentgyörgyi, Paweł Miśkowiec, Agnieszka Moos-Matysik
Abstract:
Bees are effective pollinators of plants used by humans. However, there is concern about the fate of different bee species due to their recent decline, and pollution of the environment is described in the literature as one of the causes of this phenomenon. Due to human activities, heavy metals and aromatic hydrocarbons can occur in bee organisms in high concentrations. The presented study aims to provide information on how pollution affects bee quality, taking into account the biological differences between various groups of bees. Understanding the consequences of environmental pollution for bees can help to create and promote bee-friendly habitats and actions. The analyses were carried out along two contamination gradients with 5 sites each. The first, polluted mainly by heavy metals, stretches approximately 30 km north from the Bukowno zinc smelter near Olkusz in the Lesser Poland Voivodeship; it is a well-described pollution gradient contaminated mainly by zinc, lead, and cadmium. The second cuts through the agglomeration of Kraków up to the southern border of the Ojców National Park. At each site on both gradients, two bee species were installed: red mason bees (Osmia bicornis) and honey bees (Apis mellifera). The red mason bee is a polylectic, solitary bee species, widely distributed in Poland. Honey bees are a highly social bee species, with clearly defined castes and roles in the colony. Before installing the bees in the field, samples of red mason bee imagos and samples of pollen and imagos from each honey bee colony were analysed for zinc, lead, and cadmium levels as well as for polycyclic and monocyclic hydrocarbons. After collecting the bees from the field, samples of bees and pollen from each site were prepared for heavy metal, monocyclic hydrocarbon, and polycyclic hydrocarbon analysis. Analyses of aromatic hydrocarbons were performed with gas chromatography coupled with a headspace sampler (HP 7694E) and a mass spectrometer (MS) as detector; monocyclic compounds were injected into the column with the headspace sampler, while polycyclic ones were injected manually (after solid-liquid extraction with hexane). The heavy metal content (zinc, lead, and cadmium) was assessed with flame atomic absorption spectroscopy (FAAS, AAnalyst 300 Perkin Elmer spectrometer) according to the methods for honey and bee products described in the literature. The pollution levels found in bee bodies, the imago body masses of both species, and, in the case of red mason bees, the sex ratio were correlated with the pollution levels found in pollen for each site, colony, or trap nest. An attempt to pinpoint the form of contamination most important for bee health was also undertaken based on the achieved results. Keywords: heavy metals, aromatic hydrocarbons, bees, pollution
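As an illustration of the final correlation step, with made-up numbers rather than the study's measurements, one might relate a metal concentration measured in pollen at each site to the mean imago body mass from the same site:

```python
# Hypothetical per-site values; the real study correlates several metals and
# hydrocarbons against body mass and, for red mason bees, the sex ratio.
from scipy.stats import pearsonr

zinc_in_pollen = [12.1, 18.4, 25.3, 33.0, 47.8]   # mg/kg, one value per site
mean_body_mass = [101.0, 98.5, 95.2, 90.1, 84.3]  # mg, imagos from same sites

r, p = pearsonr(zinc_in_pollen, mean_body_mass)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```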
Procedia PDF Downloads 508
71 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting that used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, where words frequently mistranscribed were recorded and then replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by change of speaker), and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview, and every proper noun was put into a data structure corresponding to its respective interview. A Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was then applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light manual observation: any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours' worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. The curation of this methodology also raised a conceptual finding crucial to working with qualitative data of this magnitude: in the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model is too general, the user risks leaving out important data; if the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to this trade-off. The data is never altered beyond grammatical and spelling checks; instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form. Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
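A condensed sketch of this pipeline is given below; it is a hedged reconstruction, not the grant project's code. The capitalized-word regular expression is a crude stand-in for the keyword extractor, the summarizer uses the public facebook/bart-large-cnn checkpoint as an assumed counterpart of the B.A.R.T. model described, and the file name in the usage lines is hypothetical.

```python
import re
from transformers import pipeline  # downloads the model on first use

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def speaker_turns(transcript):
    """Split on speaker changes (assumed blank-line separated) and drop
    paragraphs under 300 characters, i.e., banter and side comments."""
    paragraphs = [p.strip() for p in transcript.split("\n\n")]
    return [p for p in paragraphs if len(p) >= 300]

def proper_nouns(paragraphs):
    """Crude stand-in for the keyword extractor: mid-sentence capitalised
    words, which mostly catches names of people, places, and systems."""
    found = set()
    for p in paragraphs:
        found.update(re.findall(r"(?<=[a-z,;] )[A-Z][a-zA-Z]+", p))
    return found

def summarize_relevant(paragraphs, nouns):
    """Summarize only the paragraphs that mention an extracted proper noun."""
    hits = [p for p in paragraphs if any(n in p for n in nouns)]
    return [summarizer(p, max_length=60, min_length=15)[0]["summary_text"]
            for p in hits]

# Hypothetical usage on one cleaned transcript file.
transcript = open("interview_01.txt").read()
turns = speaker_turns(transcript)
print(summarize_relevant(turns, proper_nouns(turns)))
```

Note how the design mirrors the preservative principle described above: the original paragraphs are never rewritten, and the summaries only serve as pointers back into the raw text.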
Procedia PDF Downloads 28