Search results for: offline signature verification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 897

177 Non-Conformance Clearance through an Intensified Mentorship towards ISO 15189 Accreditation: The Case of Jimma and Hawassa Hospital Microbiology Laboratories, Ethiopia

Authors: Dawit Assefa, Kassaye Tekie, Gebrie Alebachew, Degefu Beyene, Bikila Alemu, Naji Mohammed, Asnakech Agegnehu, Seble Tsehay, Geremew Tasew

Abstract:

Background: Implementation of a Laboratory Quality Management System (LQMS) is critical to ensure accurate, reliable, and efficient laboratory testing of antimicrobial resistance (AMR). However, LQMS implementation and progress toward accreditation remain limited in Ethiopia's AMR surveillance laboratory testing settings. By addressing non-conformances (NCs) and working towards accreditation, microbiology laboratories can improve the quality of their services, increase staff competence, and contribute to mitigating the spread of AMR. Methods: Using standard ISO 15189 horizontal and vertical assessment checklists, certified assessors identified NCs at the Hawassa and Jimma Hospital microbiology laboratories. The Ethiopian Public Health Institute AMR mentors and IDDS staff prioritized closing the NCs through an intensified mentorship program that included ISO 15189 orientation training, resource allocation, and action plan development. Results: To clear the NCs at the two facilities, an intensified mentorship approach was adopted: providing ISO 15189 orientation training; supplying buffer reagents, controls, standards, and ancillary equipment; and facilitating equipment maintenance and calibration. Method verification and competency assessment were also conducted, along with the implementation of standard operating procedures and recommended corrective actions. This approach enhanced the laboratories' readiness for accreditation. After addressing their NCs, the two laboratories applied to the Ethiopian Accreditation Services for ISO 15189 accreditation. Conclusions: Clearing NCs through intensified mentorship was crucial in preparing the two laboratories for accreditation and improving the quality of laboratory test results. This approach can guide other microbiology laboratories' accreditation attainment efforts.

Keywords: non-conformance clearance, intensified mentorship, accreditation, ISO 15189

Procedia PDF Downloads 38
176 A Study of the Use of Arguments in Nominalizations as Instantiations of Grammatical Metaphors Ending in -TION in Academic Texts of Native Speakers

Authors: Giovana Perini-Loureiro

Abstract:

The purpose of this research was to identify whether nominalizations ending in -TION in the academic discourse of native English speakers contain the arguments required by their input verbs. From the perspective of functional linguistics, ideational metaphors, with nominalization as their most pervasive realization, are lexically dense and therefore frequent in formal texts. Ideational metaphors allow the academic genre to instantiate objectification and de-personalization and to construct a chain of arguments. The valence of the nouns present in nominalizations tends to maintain the same elements as the valence of their original verbs, but these arguments are not always expressed. The initial hypothesis was that these arguments would also be present alongside the nominalizations, through anaphora or cataphora. In this study, a qualitative analysis of the occurrences of the five most frequent -TION nominalizations in academic texts was carried out, followed by a verification of the occurrences of the arguments required by the original verbs. The concordance lines were assembled through COCA (Corpus of Contemporary American English). After identifying the five most frequent nominalizations (attention, action, participation, instruction, intervention), the concordance lines were selected at random for analysis, ensuring the representativeness and reliability of the sample. It was possible to verify the presence of arguments in all the analyzed instances. In most instances, the arguments were not expressed but were recoverable, either from the context or from the knowledge shared among the interactants. It was concluded that the realizations of arguments not expressed alongside the nominalizations form a continuum, starting from the immediate context, with anaphora and cataphora, up to knowledge shared outside the text, such as specific domain knowledge. The study also has implications for the teaching of academic writing, especially with regard to the impact of nominalizations on the thematic and informational flow of the text. Grammatical metaphors are essential to academic writing; hence, acknowledging the occurrence of their arguments is paramount to achieving the linguistic awareness and writing prestige required by the academy.

Keywords: corpus, functional linguistics, grammatical metaphors, nominalizations, academic English

Procedia PDF Downloads 124
175 Construction and Validation of Allied Bank-Teller Aptitude Test

Authors: Muhammad Kashif Fida

Abstract:

In banks, the teller's job (cash officer) is highly important and critical: at one end it requires soft and brisk customer service, and at the other, handling cash with integrity. It is always challenging for recruiters to hire competent and trustworthy tellers. To the author's knowledge, there is no comprehensive test available that may assist recruitment in Pakistan, so there is a dire need for a psychometric battery that could support the recruitment of potential candidates for the teller's position. The aim of the present study was therefore to construct the ABL-Teller Aptitude Test (ABL-TApT). Three major phases were designed following the American Psychological Association's guidelines. The first phase was qualitative: indicators of the test were explored by content analysis of a) tellers' job descriptions (n=3), b) interviews with senior tellers (n=6), and c) interviews with HR personnel (n=4). This content analysis yielded three broad constructs: i) personality, ii) integrity/honesty, and iii) professional work aptitude. The identified indicators were operationalized, and statements (k=170) were generated verbatim. These were then forwarded to five experts for a review of content validity, who finalized 156 items. In the second phase, the ABL-TApT (k=156) was administered to 323 participants through a computer application. The overall reliability of the test shows a significant alpha coefficient (α=.81), and the subscales also show significant alpha coefficients. Confirmatory factor analysis (CFA), performed to estimate the construct validity, confirms four main factors comprising eight personality traits (confidence, organized, compliance, goal-oriented, persistent, forecasting, patience, caution), one integrity/honesty factor, four factors of professional work aptitude (basic numerical ability and perceptual accuracy of letters, numbers, and signatures), and two factors for customer services (customer services, emotional maturity). Values of GFI, AGFI, NNFI, CFI, RFI, and RMSEA are in the recommended range, depicting significant model fit. In the third phase, concurrent validity evidence was pursued. The personality and integrity parts of this scale have significant correlations with the 'conscientiousness' factor of the NEO-PI-R, reflecting strong concurrent validity. Customer services and emotional maturity have significant correlations with the Bar-On EQ-i, providing further evidence of strong concurrent validity. It is concluded that the ABL-TApT is a reliable and valid battery of tests that will assist in the objective recruitment of tellers and help recruiters find more suitable human resources.
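
The abstract reports reliability as a Cronbach's alpha coefficient (α=.81). As a rough illustration of how that statistic is computed from an item-response matrix, here is a minimal Python sketch; the simulated 323 × 12 response matrix is a placeholder, not the study's data.

```python
# Minimal sketch of Cronbach's alpha; the response matrix is simulated,
# not the ABL-TApT data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(323, 1))                       # shared trait
scores = latent + rng.normal(scale=0.8, size=(323, 12))  # 12 correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```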

Keywords: concurrent validity, construct validity, content validity, reliability, teller aptitude test, objective recruitment

Procedia PDF Downloads 202
174 Parking Service Effectiveness at Commercial Malls

Authors: Ahmad AlAbdullah, Ali AlQallaf, Mahdi Hussain, Mohammed AlAttar, Salman Ashknani, Magdy Helal

Abstract:

We study the effectiveness of the parking service provided at Kuwaiti commercial malls and explore potential problems and feasible improvements. Commercial malls are important to Kuwaitis as entertainment and shopping centers due to the lack of other alternatives. The difficulty and the relatively long time wasted in finding a parking spot at the mall are real annoyances. We applied queuing analysis to one of the major malls that offers paid parking (1,040 parking spots) in addition to free parking. Patrons of the mall usually complained of the traffic jams and delays at the entrance to the paid parking (the average delay to park exceeds 15 min for about 62% of the patrons, while the average time spent in the mall is about 2.6 hours). However, the analysis showed acceptable service levels at the check-in gates of the parking garage. A detailed review of vehicle movement at the gateways indicated that arriving and departing cars had to share parts of the gateway to the garage, which caused the traffic jams and delays. A simple comparison indicated that the largest commercial mall in Kuwait does not suffer such parking issues, while other smaller yet important malls do, including the one we studied. This suggests that the well-designed inlets and outlets of that gigantic mall permit smooth parking despite parking being totally free and the mall being the first choice of most people for entertainment and shopping. A simulation model is being developed for further analysis and verification, as simulation can overcome the mathematical difficulty of using non-Poisson queuing models. The simulation model is used to explore potential changes to the parking garage entrance layout. With the inclusion of driver behavior inside the parking garage, effectiveness indicators can be derived to address the economic feasibility of extending the parking capacity and increasing service levels. Outcomes of the study are planned to be generalized, as appropriate, to other commercial malls in Kuwait.
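
As a concrete illustration of the kind of queuing analysis applied to the check-in gates, a minimal M/M/c sketch follows. The arrival rate, service rate, and gate count used here are hypothetical, not the study's measured values.

```python
# M/M/c gate model: probability of waiting (Erlang-C) and mean queue wait.
# All numeric parameters below are hypothetical illustrations.
from math import factorial

def erlang_c(lam, mu, c):
    """P(wait) for arrival rate lam, per-gate service rate mu, c gates (lam < c*mu)."""
    a = lam / mu                                   # offered load (Erlangs)
    rho = a / c                                    # per-gate utilization
    top = a**c / (factorial(c) * (1.0 - rho))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_queue_wait(lam, mu, c):
    """Mean time spent queuing, Wq = P(wait) / (c*mu - lam)."""
    return erlang_c(lam, mu, c) / (c * mu - lam)

# Hypothetical: 4 cars/min arriving, 3 cars/min served per gate, 2 gates
print(f"Wq = {mean_queue_wait(4.0, 3.0, 2):.2f} min")   # ~0.27 min here
```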

Keywords: commercial malls, parking service, queuing analysis, simulation modeling

Procedia PDF Downloads 318
173 Alternative Seed System for Enhanced Availability of Quality Seeds and Seed/Varietal Replacement Rate - An Experience

Authors: Basave Gowda, Lokesh K., Prasanth S. M., Bellad S. B., Radha J., Lokesh G. Y., Patil S. B., Vijayakumar D. K., Ganigar B. S., Rakesh C. Mathad

Abstract:

Quality seed plays an important role in enhancing crop productivity. It has been reported and confirmed by large-scale verification research trials that the use of quality seed alone can enhance crop yield by 15 to 20 per cent. At present, quality seed production and distribution through the organised sector, comprising both public and private seed sectors, covers only 20-25% of the requirement, and the remaining quantity is met through the unorganised sector, which includes farmer-to-farmer saved seed. With the objective of developing an alternative seed system, the University of Agricultural Sciences, Raichur, in Karnataka state, has implemented a Seed Village Programme in more than 100 villages covering around 5,000 farmers every year since 2009-10. In the selected seed villages, groups of 50-150 farmers were supplied foundation seed of new varieties for 0.4 ha each at a 50% subsidy. Two to three training programmes on quality seed production were conducted in the targeted villages, and the seed produced by the target group was processed locally in the university seed processing units and distributed in the local villages by the seed growers themselves. Through this new and innovative seed system, the university was able to replace old varieties of pigeon pea and green gram on a large scale, producing 1,482, 2,978, 2,729, 2,560, and 4,581 tonnes of seed of new varieties during 2009-10, 2010-11, 2011-12, 2012-13, and 2013-14, respectively, under farmer- and scientist-participatory seed village programmes. Based on this alternative model, regional seed systems involving farmers, NGOs, and voluntary organisations should be promoted on a large scale for the quick and effective replacement of old, low-yielding, disease-susceptible varieties with new high-yielding, disease-resistant ones for enhanced food production and food security.

Keywords: seed system, seed village, seed replacement, varietal replacement

Procedia PDF Downloads 403
172 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes

Authors: Sertac Arslan, Sezer Kefeli

Abstract:

In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained, and then the drag force calculation methods of high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order to reach speeds as high as 200-300 knots for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of speed, should be reduced. Conventional heavyweight torpedoes can reach up to ~50 knots using classic underwater hydrodynamic techniques. However, to exceed 50 knots and approach 200 knots, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which can be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move at very high speed through the water inside a fully developed cavitation bubble. A torpedo that moves within a cavitation envelope, generated by a cavitator in the nose section and propelled by a solid-fuel rocket engine in the rear section, is termed a supercavitating torpedo. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and the supercavitation phenomenon parameters are studied. Moreover, the drag force calculation is performed for the disk-shaped cavitator with numerical techniques and compared against empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is refined by comparison with the empirical results. The verification study considers the cavitation number (σ), drag coefficient (CD), drag force (D), and cavity wall velocity (U
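
For reference, the quantities named above are commonly related through the following textbook expressions from the supercavitation literature (a sketch, including the usual disk-cavitator fit with CD0 ≈ 0.82; not necessarily the exact correlations used by the authors). Here p∞ is the ambient pressure, pc the cavity pressure, V the vehicle speed, and dn the cavitator diameter:

```latex
\sigma = \frac{p_\infty - p_c}{\tfrac{1}{2}\rho V^{2}}, \qquad
C_D(\sigma) \approx C_{D0}\,(1+\sigma), \quad C_{D0} \approx 0.82 \ \text{(disk)}, \qquad
D = \tfrac{1}{2}\rho V^{2}\,\frac{\pi d_n^{2}}{4}\,C_D(\sigma)
```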

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows

Procedia PDF Downloads 151
171 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements

Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga

Abstract:

Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is common practice to approximate the Earth's subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations, namely: (i) the analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions; for arbitrary resistivity distributions, the solution of the system of ODEs is unknown to date; (ii) in geo-steering, we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus we need to compute the corresponding derivatives; however, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published, to the best of our knowledge. The main contribution of this work is to overcome these limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions, precomputed only once and reused for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at a negligible additional cost by using an adjoint-state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones where the latter are available.
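
For readers unfamiliar with the dimensionality-reduction step, the standard Hankel transform pair that converts the cylindrically symmetric problem into a family of 1D problems in depth z (one per Hankel mode k, with Bessel order ν set by the source geometry) can be sketched as:

```latex
\tilde{u}_\nu(k, z) = \int_0^\infty u(r, z)\, J_\nu(k r)\, r \, dr, \qquad
u(r, z) = \int_0^\infty \tilde{u}_\nu(k, z)\, J_\nu(k r)\, k \, dk
```

Each mode then satisfies an ODE in z, which the paper solves with the multi-scale finite element method instead of analytically.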

Keywords: logging-While-Drilling, resistivity measurements, multi-scale finite elements, Hankel transform

Procedia PDF Downloads 360
170 Five Years Analysis and Mitigation Plans on Adjustment Orders Impacts on Projects in Kuwait's Oil and Gas Sector

Authors: Rawan K. Al-Duaij, Salem A. Al-Salem

Abstract:

Projects, the unique and temporary endeavors of achieving a set of requirements, have always been challenging; planning the schedule and budget and managing the resources and risks are mostly driven by similar past experience or the technical consultation of experts in the matter. Given that complexity of projects in scope, time, and execution environment, Adjustment Orders are the tools used to reflect changes to the original project parameters after Contract signature. Adjustment Orders are the official/legal amendments to the terms and conditions of a live Contract. Reasons for issuing Adjustment Orders arise from changes in Contract scope, technical requirements, and specifications, resulting in scope addition, deletion, or alteration; they can also be a combination of these parameters, resulting in an increase or decrease in time and/or cost. Most business leaders (handling projects in the interest of the owner) refrain from using Adjustment Orders, considering their main objectives of staying within budget and on schedule. Success in managing the changes results in uninterrupted execution and agreed project costs as well as schedule; nevertheless, this is not always practically achievable. In this paper, a detailed study utilizing industrial engineering and systems management tools such as Six Sigma, data analysis, and quality control was implemented on the organization's five-year records of issued Adjustment Orders in order to investigate their prevalence and their time and cost impacts. The analysis helped identify and categorize the predominant causes with the highest impacts, which were weighted most heavily in recommending the corrective measures needed to minimize the impacts of Adjustment Orders. The data analysis demonstrated no specific trend in Adjustment Order frequency over the past five years; however, the time impact is greater than the cost impact. Although Adjustment Orders might never be avoidable, this analysis offers some insight into the procedural gaps and where they most heavily impact the organization. Possible solutions are concluded, such as improving the project handling team's coordination and communication, utilizing a blanket service contract, and modifying the project gate-system procedures to minimize the possibility of similar struggles in the future. Projects in the oil and gas sector are always evolving and demand a certain amount of flexibility to sustain the goals of the field. As will be demonstrated, the uncertainty of project parameters, inadequate project definition, operational constraints, and stringent procedures are the main factors resulting in the need for Adjustment Orders, and the recommendation is accordingly to address that challenge.
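
A standard Six Sigma step consistent with the categorization described above is a Pareto analysis of Adjustment Order causes by frequency or impact. The sketch below is illustrative only; the cause categories and counts are hypothetical, not the organization's records.

```python
# Illustrative Pareto ranking of Adjustment Order causes (hypothetical data).
causes = {
    "Scope addition": 42,
    "Specification change": 31,
    "Operational constraints": 17,
    "Inadequate project definition": 12,
    "Other": 6,
}
total = sum(causes.values())
cumulative = 0.0
for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:32s} {count:3d}   cumulative {cumulative:5.1f}%")
# The 'vital few' causes up to ~80% cumulative share are prioritized for action.
```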

Keywords: adjustment orders, data analysis, oil and gas sector, systems management

Procedia PDF Downloads 132
169 Experimental Verification of Similarity Criteria for Sound Absorption of Perforated Panels

Authors: Aleksandra Majchrzak, Katarzyna Baruch, Monika Sobolewska, Bartlomiej Chojnacki, Adam Pilch

Abstract:

Scaled modeling is very common in areas of science such as aerodynamics or fluid mechanics, since defining characteristic numbers enables relations between objects under test and their models to be determined. In acoustics, scaled modeling is aimed mainly at the investigation of room acoustics, sound insulation, and sound absorption phenomena. Despite this range of application, no method has been developed that would enable acoustical perforated panels to be scaled freely while maintaining their sound absorption coefficient in a desired frequency range. Moreover, the theoretical and numerical analyses conducted have proven that it is not physically possible to obtain a given sound absorption coefficient in a desired frequency range by directly scaling all of the physical dimensions of a perforated panel according to a defined characteristic number. This paper is a continuation of the research mentioned above and presents a practical evaluation of the theoretical and numerical analyses. Measurements of the sound absorption coefficient of perforated panels were performed in order to verify the previous analyses and, as a result, find the relations between full-scale perforated panels and their models that will enable them to be scaled properly. The measurements were conducted in a one-to-eight scale model of a reverberation chamber at the Technical Acoustics Laboratory, AGH. The results obtained verify the theses proposed after the theoretical and numerical analyses. Finding the relations between full-scale and modeled perforated panels will allow measurement samples equivalent to the original ones to be produced. As a consequence, it will make the process of designing acoustical perforated panels easier and will also lower the cost of prototype production. With this knowledge, it will be possible to emulate panels used, or to be used, in a full-scale room more precisely in a constructed model and, as a result, to imitate or predict the acoustics of a modeled space more accurately.
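
One way to see why pure geometric scaling must fail is to compare the textbook resonance frequency of a perforated-panel absorber with the viscous boundary-layer thickness inside its holes (standard relations quoted here for illustration; this is not the paper's own derivation). Scaling every dimension by 1/n raises f0 by a factor of n, but the boundary-layer thickness is set by the air's properties and the frequency, not by the geometry, so the ratio of hole radius to boundary layer, and hence the absorption, is not preserved:

```latex
f_0 = \frac{c}{2\pi}\sqrt{\frac{\varepsilon}{t_{\mathrm{eff}}\,D}}, \qquad
t_{\mathrm{eff}} = t + 1.6\,r, \qquad
\delta_{\mathrm{visc}} = \sqrt{\frac{2\mu}{\rho\,\omega}}
```

Here ε is the perforation ratio, t the panel thickness, r the hole radius, D the cavity depth, μ the dynamic viscosity of air, and ω the angular frequency.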

Keywords: characteristic numbers, dimensional analysis, model study, scaled modeling, sound absorption coefficient

Procedia PDF Downloads 170
168 Controlling the Release of Cyt C and L-Dopa from pNIPAm-AAc Nanogel Based Systems

Authors: Sulalit Bandyopadhyay, Muhammad Awais Ashfaq Alvi, Anuvansh Sharma, Wilhelm R. Glomm

Abstract:

The release of drugs from nanogels and nanogel-based systems can occur under the influence of external stimuli such as temperature, pH, and magnetic fields. pNIPAm-AAc nanogels respond to the combined action of both temperature and pH, the former being mostly determined by hydrophilic-to-hydrophobic transitions above the volume phase transition temperature (VPTT), while the latter is controlled by the degree of protonation of the carboxylic acid groups. These nanogel-based systems are promising candidates in the field of drug delivery. Combining nanogels with magneto-plasmonic nanoparticles (NPs) introduces imaging and targeting modalities along with stimuli-response in one hybrid system, thereby incorporating multifunctionality. Fe@Au core-shell NPs possess an optical signature in the visible spectrum owing to the localized surface plasmon resonance (LSPR) of the Au shell, and superparamagnetic properties stemming from the Fe core. Although several synthesis methods exist to control the size and physico-chemical properties of pNIPAm-AAc nanogels, there is no comprehensive study of the effects of incorporating one or more layers of NPs into these nanogels. In addition, effective determination of the VPTT of the nanogels is a challenge, which complicates their use in biological applications. Here, we have modified the swelling-collapse properties of pNIPAm-AAc nanogels by combining them with Fe@Au NPs using different solution-based methods. The hydrophilic-hydrophobic transition of the nanogels above the VPTT has been confirmed to be reversible. Further, an analytical method has been developed to deduce the average VPTT, which is found to be 37.3°C for the nanogels and 39.3°C for the nanogel-coated Fe@Au NPs. An opposite swelling-collapse behaviour is observed for the latter, where the Fe@Au NPs act as bridge molecules pulling together the gelling units. Thereafter, Cyt C, a model protein drug, and L-Dopa, a drug used in the clinical treatment of Parkinson's disease, were loaded separately into the nanogels and the nanogel-coated Fe@Au NPs using a modified breathing-in mechanism. This gave high loading and encapsulation efficiencies (L-Dopa: ~9% and 70 µg/mg of nanogels; Cyt C: ~30% and 10 µg/mg of nanogels, respectively). The release kinetics of L-Dopa, monitored using UV-vis spectrophotometry, was observed to be rather slow (over several hours), with the highest release occurring under a combination of high temperature (above the VPTT) and acidic conditions. However, the release of L-Dopa from the nanogel-coated Fe@Au NPs was the fastest, accounting for the release of almost 87% of the initially loaded drug in ~30 hours. The chemical structure of the drug, the drug incorporation method, the location of the drug, and the presence of Fe@Au NPs largely determine the drug release mechanism and kinetics of these nanogels and nanogel-coated Fe@Au NPs.
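
Release curves such as the ~87% in ~30 hours reported above are often characterized with the empirical Korsmeyer-Peppas model Mt/M∞ = k·tⁿ (a common choice in the release-kinetics literature, not necessarily the model used by the authors). A minimal fitting sketch follows; the data points are hypothetical.

```python
# Fit a Korsmeyer-Peppas release model to (hypothetical) UV-vis release data.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * t**n          # Mt/Minf = k * t^n

t_hours = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
released = np.array([0.18, 0.32, 0.50, 0.63, 0.74, 0.82, 0.87])  # fraction

(k, n), _ = curve_fit(korsmeyer_peppas, t_hours, released, p0=(0.1, 0.5))
print(f"k = {k:.3f}, n = {n:.3f}")  # n distinguishes Fickian vs. anomalous transport
```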

Keywords: controlled release, nanogels, volume phase transition temperature, l-dopa

Procedia PDF Downloads 305
167 Challenges to Safe and Effective Prescription Writing in an Environment Where Digital Prescribing Is Absent

Authors: Prashant Neupane, Asmi Pandey, Mumna Ehsan, Katie Davies, Richard Lowsby

Abstract:

Introduction/Background & aims: Safe and effective prescribing in hospitals directly and indirectly impacts the health of patients. Even though digital prescribing in the National Health Service (NHS), UK, is used in many tertiary centers and district general hospitals, a significant number of NHS trusts are still using paper prescribing. We came across many irregularities in our daily clinical practice with paper prescribing. The main aim of the study was to assess how safely and effectively we are prescribing at our hospital, where there is no access to digital prescribing. Method/Summary of work: We conducted a prospective audit in the critical care department at Mid Cheshire Hospitals NHS Foundation Trust, in which 20 prescription charts from different patients were randomly selected over a period of 1 month. We assessed 16 categories in each prescription chart and compared them to the standard trust guidelines on prescription. Results/Discussion: We collected data from 20 different prescription charts, evaluating 16 categories within each. The results showed an urgent need for improvement in 8 different sections. In 85% of the prescription charts, not all the prescribers who prescribed the medications were identified: name, GMC number, and signature were absent from the required prescriber identification section. In 70% of charts, either the indication or the review date of the antimicrobials was absent. Units of medication were not documented correctly in 65% of charts, and the allergy status of the patient was absent in 30%. The start date of medications was missing and alterations of the medications were not done properly in 35% of charts. The patient's name was not recorded in all desired sections of the chart in 50% of cases, and cancellations of medication were not done properly in 45% of the prescription charts. Conclusion(s): From the audit and data analysis, we identified the areas in which prescription writing in the critical care department needs improvement. During meetings and conversations with experts from the pharmacy department, we realized that this audit represents only a specialized department of the hospital, where prescribing is limited to a certain number of prescribers; in larger departments of the hospital, where patient turnover is much higher, the results could be much worse. The findings were discussed in the critical care MDT meeting, where suggestions regarding digital/electronic prescribing were made. A presentation on safe and effective prescribing was given, and an awareness poster was prepared and attached at every bedside in critical care, where it is visible to prescribers. We consider this a temporary measure to improve the quality of prescribing; however, we strongly believe digital prescribing will help to a greater extent to address the weak areas seen in paper prescribing.

Keywords: safe prescribing, NHS, digital prescribing, prescription chart

Procedia PDF Downloads 95
166 Student Participation in Higher Education Quality Assurance Processes

Authors: Tomasz Zarebski

Abstract:

A very important element of any education system is its evaluation procedure: each education system should be systematically evaluated and improved. Among the criteria subject to evaluation, attention should be paid to the following: the structure of the study programme; the implementation of the study programme; admission to studies; verification of students' achievement of learning outcomes, crediting of individual semesters and years, and awarding of diplomas; the competence, experience, qualifications, and number of staff providing education; staff development and in-service training; education infrastructure; cooperation with social and economic stakeholders on development; conditions for and methods of improving the internationalisation of education provided as part of the degree programme; support for the learning, social, academic, or professional development of students and their entry into the labour market; and public access to information about the study programme and the quality assurance policy. Concerning the assessment process and the individual assessment indicators, the participation of students in these processes is essential. The purpose of this paper is to analyse the rules of student participation in accreditation processes, using the example of individual countries in Europe. The rules of students' participation in the work of accreditation committees and their influence on the final grade given by the committee were analysed. Most higher education institutions follow similar rules for accreditation. The general model gives the individual institution freedom to organise its own quality assurance, as long as the system lives up to the criteria for quality and relevance laid down in the particular provisions; this point also applies to students. The regulations of the following countries were examined from a comparative legal perspective: Poland (Polish Accreditation Committee), Denmark (The Danish Accreditation Institution), France (High Council for the Evaluation of Research and Higher Education), Germany (Agency for Quality Assurance through Accreditation of Study Programmes), and Italy (National Agency for the Evaluation of Universities and Research Institutes).

Keywords: accreditation, student, study programme, quality assurance in higher education

Procedia PDF Downloads 134
165 Threat Modeling Methodology for Supporting Industrial Control Systems Device Manufacturers and System Integrators

Authors: Raluca Ana Maria Viziteu, Anna Prudnikova

Abstract:

Industrial control systems (ICS) have received much attention in recent years due to the convergence of information technology (IT) and operational technology (OT), which has increased the interdependence of the safety and security issues to be considered. These issues require ICS-tailored solutions, which led to the need to create a methodology for supporting ICS device manufacturers and system integrators in carrying out threat modeling of embedded ICS devices in a way that guarantees the quality of the identified threats and minimizes subjectivity in the threat identification process. To research the possibility of creating such a methodology, a set of existing standards, regulations, papers, and publications related to threat modeling in the ICS sector and other sectors was reviewed to identify the various existing methodologies and methods used in threat modeling. Furthermore, the most popular ones were tested in an exploratory phase on a specific PLC device. The outcome of this exploratory phase was used as a basis for defining the specific characteristics of ICS embedded devices and their deployment scenarios, identifying the factors that introduce subjectivity into the threat modeling process for such devices, and defining metrics for evaluating the minimum quality requirements of identified threats associated with the deployment of the devices in existing infrastructures. The threat modeling methodology was then created based on the results of the previous steps. The usability of the methodology was evaluated through a set of standardized threat modeling requirements and a standardized comparison method for threat modeling methodologies; the outcomes of these verification methods confirm that the methodology is effective. The full paper includes the outcome of research on the different threat modeling methodologies that can be used in OT, their comparison, and the results of implementing each of them in practice on a PLC device. This research is then used to build a threat modeling methodology tailored to OT environments; a detailed description is included. Moreover, the paper includes the results of an evaluation of the created methodology based on a set of parameters specifically created to rate threat modeling methodologies.

Keywords: device manufacturers, embedded devices, industrial control systems, threat modeling

Procedia PDF Downloads 54
164 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN) based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises the Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 (anechoic), 1, 2, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
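
To make the proposed filter bank concrete, the sketch below builds quarter-tone-spaced (24 bands per octave) triangular filters of the kind the LNQT filters are based on; the local normalization stage itself is omitted, and the frequency range and FFT parameters are illustrative assumptions.

```python
# Quarter-tone triangular filter bank sketch (local normalization omitted).
import numpy as np

fs, n_fft = 44100, 8192
f_min = 65.4                      # ~C2; illustrative lower bound
n_bands = 24 * 6                  # six octaves at quarter-tone spacing
edges = f_min * 2.0 ** (np.arange(n_bands + 2) / 24.0)   # band edges/centers
freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)

filters = np.zeros((n_bands, freqs.size))
for i in range(n_bands):
    lo, center, hi = edges[i], edges[i + 1], edges[i + 2]
    rising = (freqs - lo) / (center - lo)
    falling = (hi - freqs) / (hi - center)
    filters[i] = np.maximum(0.0, np.minimum(rising, falling))

# A quarter-tone spectrogram is then filters @ power_spectrum for each frame.
```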

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 196
163 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder

Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada

Abstract:

From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented ('tokibu') from new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase and might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and chronological-age-matched typical language development (TLD) controls, who were exposed to an auditory stream containing eight three-syllable nonsense words, four presenting high TPs and four presenting low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by word predictability. Moreover, to ascertain whether prior knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children in the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
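
The transitional probability statistic at the heart of the design is simple to state: TP(X→Y) = count(XY) / count(X). A minimal sketch, reusing the abstract's example syllables on a toy stream:

```python
# Transitional probabilities over a syllable stream (toy example).
from collections import Counter

stream = ["to", "ki", "bu", "ti", "po", "lu", "go", "pi", "la", "to", "ki", "bu"]
bigrams = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def tp(x, y):
    """TP(x -> y) = count(xy) / count(x)."""
    return bigrams[(x, y)] / firsts[x] if firsts[x] else 0.0

print(tp("to", "ki"))  # within-word TP: high (1.0 in this toy stream)
print(tp("bu", "ti"))  # across-word TP: low in a long stream, where 'bu'
                       # is followed by many different syllables
```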

Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation

Procedia PDF Downloads 162
162 Computation of Radiotherapy Treatment Plans Based on CT to ED Conversion Curves

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Radiotherapy treatment planning computers use CT data of the patient. For the computation of a treatment plan, the treatment planning system must have information on the electron densities of the tissues scanned by CT. This information is given by the conversion curve from CT number to electron density (ED), or simply the calibration curve. Every treatment planning system (TPS) has built-in default CT-to-ED conversion curves for the CT scanners of different manufacturers. However, it is always recommended to verify the CT-to-ED conversion curve before actual clinical use. The objective of this study was to check how well the default curve provided matches the curve actually measured on a specific CT scanner, and how much this influences the calculation of the treatment planning computer. The examined CT scanners were from the same manufacturer, but comprised four different scanners from three generations. The measurements of all calibration curves were done with the dedicated CIRS 062M Electron Density Phantom. The phantom was scanned, and according to the actual HU values read at the CT console computer, CT-to-ED conversion curves were generated for different materials at the same tube voltage of 140 kV. Another phantom, the CIRS Thorax 002 LFC, which represents an average human torso in proportion, density, and two-dimensional structure, was used for verification. The treatment planning was done on CT slices of the scanned CIRS Thorax 002 LFC phantom for selected cases. Interest points were set in the lungs and in the spinal cord, and doses were recorded in the TPS. The overall calculated treatment times for the four scanners and the default scanner did not differ by more than 0.8%. The overall interest point dose in bone differed by at most 0.6%, while for single fields the maximum was 2.7% (lateral field). The overall interest point dose in the lungs differed by at most 1.1%, while for single fields the maximum was 2.6% (lateral field). It is known that the user should verify the CT-to-ED conversion curve, but developing countries often face a lack of QA equipment and frequently use the default data provided. We have concluded that the CT-to-ED curves obtained differ at certain points of the curve, generally in the region of higher densities. The influence on the treatment planning result is not significant, but it definitely does make a difference in the calculated dose.
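
In practice, applying a measured calibration curve amounts to piecewise-linear interpolation from CT number (HU) to relative electron density. The sketch below illustrates this; the sample points are typical of such curves but are not this study's measured values.

```python
# Apply a CT-to-ED calibration curve by piecewise-linear interpolation.
# The (HU, relative ED) pairs are illustrative, not the measured curve.
import numpy as np

hu_points  = np.array([-1000, -800, -500, -100,    0,   50,  300,  900, 1400])
red_points = np.array([ 0.00, 0.19, 0.50, 0.93, 1.00, 1.05, 1.16, 1.51, 1.82])

def hu_to_red(hu):
    """Relative electron density for CT number(s) hu."""
    return np.interp(hu, hu_points, red_points)

print(hu_to_red([-750, 40, 1200]))  # lung-, soft-tissue-, and bone-like voxels
```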

Keywords: computation of treatment plan, conversion curve, radiotherapy, electron density

Procedia PDF Downloads 450
161 Effects of Machining Parameters on the Surface Roughness and Vibration of the Milling Tool

Authors: Yung C. Lin, Kung D. Wu, Wei C. Shih, Jui P. Hung

Abstract:

High-speed and high-precision machining have become the most important technologies in the manufacturing industry. The surface roughness of high-precision components is regarded as an important characteristic of product quality. However, machining chatter can damage the machined surface and restrict process efficiency; therefore, selection of appropriate cutting conditions is important to prevent the occurrence of chatter. In addition, the vibration of the spindle tool also affects surface quality, which implies that surface precision can be controlled by monitoring the vibration of the spindle tool. Based on this concept, this study aimed to investigate the influence of the machining conditions on the surface roughness and the vibration of the spindle tool. To this end, a series of machining tests was conducted on aluminum alloy. In the tests, the vibration of the spindle tool was measured using acceleration sensors, and the surface roughness of the machined parts was examined using a white light interferometer. The response surface methodology (RSM) was employed to establish mathematical models for predicting the surface finish and the tool vibration, respectively. The correlation between the surface roughness and the spindle tool vibration was also analyzed by ANOVA. Based on the machining tests, machined surfaces with or without chatter were marked on the stability lobes diagram as verification of the machining conditions. Using multivariable regression analysis, the mathematical models for predicting the surface roughness and the tool vibrations were developed from the machining parameters: cutting depth (a), feed rate (f), and spindle speed (s). The predicted roughness is shown to agree well with the measured roughness, with an average percentage error of 10%; the average percentage error between the measured tool vibrations and the predictions of the mathematical model is about 7.39%. In addition, the tool vibration under various machining conditions was found to correlate positively with the surface roughness (r=0.78). In conclusion, mathematical models were successfully developed for predicting the surface roughness and the vibration level of the spindle tool under different cutting conditions, which can help select appropriate cutting parameters and monitor machining conditions to achieve high surface quality in milling operations.
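
The RSM-type regression model described above can be illustrated as an ordinary least-squares fit of roughness against the three parameters and their two-way interactions. The eight data rows below are hypothetical; the study's measured values and coefficients are not reproduced here.

```python
# Least-squares fit of Ra ~ a, f, s plus two-way interactions (hypothetical data).
import numpy as np

# columns: cutting depth a (mm), feed f (mm/tooth), spindle speed s (rpm)
runs = np.array([[0.5, 0.05,  6000], [0.5, 0.05, 10000],
                 [0.5, 0.10,  6000], [0.5, 0.10, 10000],
                 [1.5, 0.05,  6000], [1.5, 0.05, 10000],
                 [1.5, 0.10,  6000], [1.5, 0.10, 10000]])
ra = np.array([0.21, 0.18, 0.35, 0.30, 0.28, 0.24, 0.52, 0.43])  # um, hypothetical

a, f, s = runs.T
X = np.column_stack([np.ones_like(a), a, f, s, a * f, a * s, f * s])
beta, *_ = np.linalg.lstsq(X, ra, rcond=None)
print("coefficients [1, a, f, s, af, as, fs]:", np.round(beta, 6))
```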

Keywords: machining parameters, machining stability, regression analysis, surface roughness

Procedia PDF Downloads 204
160 Developing the Principal Change Leadership Non-Technical Competencies Scale: An Exploratory Factor Analysis

Authors: Tai Mei Kin, Omar Abdull Kareem

Abstract:

In light of globalization, educational reform has become a top priority for many countries. However, the task of leading change effectively requires a multidimensional set of competencies. Over the past two decades, the technical competencies of principal change leadership have been extensively analysed and discussed. Comparatively little research has been conducted in the Malaysian education context on non-technical competencies, popularly known as emotional intelligence, which are equally crucial for the success of change. This article provides a validation of the Principal Change Leadership Non-Technical Competencies (PCLnTC) Scale, a tool that practitioners can easily use to assess school principals' level of the change leadership non-technical competencies that facilitate change and maximize change effectiveness. The overall coherence of the PCLnTC model was constructed by incorporating three theories: a) change leadership theory, whereby leading change is the fundamental role of a leader; b) competency theory, in which leadership can be taught and learned; and c) the concept of emotional intelligence, whereby it can be developed, fostered, and taught. An exploratory factor analysis (EFA) was used to determine the underlying factor structure of the PCLnTC model. Before conducting the EFA, five important pilot-test steps were carried out to ensure the validity and reliability of the instrument: a) review by academic colleagues; b) verification and comments from a panel; c) evaluation of questionnaire format, syntax, design, and completion time; d) evaluation of item clarity; and e) assessment of internal consistency reliability. A total of 335 teachers from 12 High Performing Secondary Schools in Malaysia completed the survey. The PCLnTC Scale, using a six-point Likert-type scale, was subjected to principal components analysis. The analysis yielded a three-factor solution, namely a) interpersonal sensitivity, b) flexibility, and c) motivation, explaining a total of 74.326 per cent of the variance. Based on the results, implications for instrument revisions are discussed, and specifications for future confirmatory factor analysis are delineated.
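
Procedurally, the extraction step reduces to fitting principal components and retaining enough of them to explain the observed share of variance. The sketch below shows the mechanics only; the simulated Likert responses are placeholders, not the survey data.

```python
# Principal components extraction step of an EFA (simulated placeholder data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
responses = rng.integers(1, 7, size=(335, 30)).astype(float)  # 6-point Likert items

pca = PCA().fit(responses)
cum_explained = np.cumsum(pca.explained_variance_ratio_) * 100
n_keep = int(np.searchsorted(cum_explained, 74.0)) + 1   # components to ~74%
print(f"{n_keep} components explain {cum_explained[n_keep - 1]:.1f}% of variance")
```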

Keywords: exploratory factor analysis, principal change leadership non-technical competencies (PCLnTC), interpersonal sensitivity, flexibility, motivation

Procedia PDF Downloads 400
159 Seismic Retrofit of Reinforced Concrete Structures by Highly Dissipative Technologies

Authors: Stefano Sorace, Gloria Terenzi, Giulia Mazzieri, Iacopo Costoli

Abstract:

The prolonged earthquake sequence that struck several urban agglomerations and villages in Central Italy from 24 August 2016 through January 2017 highlighted once again the seismic vulnerability of pre-normative reinforced concrete (R/C) structures. At the same time, considerable damage was surveyed in recently retrofitted R/C buildings too, one of which had been retrofitted with a dissipative bracing system. The solution adopted for the latter did not expressly take into account the performance of non-structural elements, namely infills and partitions, confirming the importance of their dynamic interaction with the structural skeleton. Based on this consideration, an alternative supplemental damping-based retrofit solution for this representative building, a school with an R/C structure situated in the municipality of Norcia, is examined in this paper. It consists of the incorporation of dissipative braces equipped with pressurized silicone fluid viscous (FV) dampers, instead of the BRAD system installed in the building, whose delayed activation, caused by the high stiffness of its metallic dampers, determined the observed non-structural damage. The alternative solution proposed herein, characterized by dissipaters with mainly damping mechanical properties, guarantees an earlier activation of the protective system. A careful assessment analysis, preliminarily carried out to simulate and check the performance of the case study building in its original BRAD-retrofitted condition, confirms that the interstorey drift demand related to the Norcia earthquake's mainshock and aftershocks is beyond the response capacity of the infills. The verification analyses developed on the R/C structure including the FV-damped braces highlight their higher performance, giving rise to a completely undamaged response of both structural and non-structural elements up to the basic design earthquake normative level of seismic action.
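
For context, pressurized silicone FV dampers of this type are commonly idealized with the fractional power law below (a standard model from the supplemental damping literature, not a formula quoted from this paper), where C is the damping coefficient, v the relative piston velocity, and α the velocity exponent; the low α typical of pressurized silicone devices (on the order of 0.1-0.2) produces large forces already at small velocities, which is what drives the earlier activation discussed above:

```latex
F_d(t) = C\,\operatorname{sgn}\!\bigl(v(t)\bigr)\,\lvert v(t)\rvert^{\alpha}, \qquad 0 < \alpha < 1
```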

Keywords: dissipative technologies, performance assessment analysis, concrete structures, seismic retrofit

Procedia PDF Downloads 95
158 An Introduction to the Concept of Environmental Audit: Indian Context

Authors: Pradip Kumar Das

Abstract:

The phenomenal growth of population and industry exploits the environment in varied ways. Consequently, the greenhouse effect and other allied problems are threatening mankind the world over. Protection and upgradation of the environment have therefore become a prime necessity for all of mankind and for the sustainable development of the environment. People in all walks of life, including corporate citizens, have become aware of the impacts of environmental pollution. Governments of various nations have entered the picture with laws and regulations to correct and cure the effects of present and past violations of environmental practices and to obstruct future violations of good environmental discipline. In this perspective, environmental audit directs verification and validation to ensure that the various environmental laws are complied with and that adequate care has been taken towards environmental protection and preservation. The discipline of environmental audit has experienced impressive development throughout the world. It examines the positive and negative effects of the activities of an enterprise on the environment and provides an in-depth study of the company's processes and of its growth in realizing long-term strategic goals. Environmental audit helps corporations assess their achievements, correct deficiencies, and reduce risks to health and improve safety. Environmental audit, being a strong management tool, should be administered by industry for its own self-assessment. Developed countries all over the globe have gone ahead in environmental quantification; but unfortunately, there is a lack of awareness about pollution and environmental hazards among the common people in India. In light of this situation, the conceptual analysis of this study is concerned with the rationale of environmental audit for industry and society as a whole and highlights the emerging dimensions in auditing theory and practice. A modest attempt has been made to throw light on the recent developments in environmental audit in developing nations like India and the problems associated with the implementation of environmental audit. The conceptual study also reflects that, despite various obstacles, environmental audit is becoming an increasingly important aspect of the corporate sector in India. Finally, conclusions along with suggestions are offered to improve the current scenario.

Keywords: environmental audit, environmental hazards, environmental laws, environmental protection, environmental preservation

Procedia PDF Downloads 241
157 Investigation of p53 Codon 72 Polymorphism and miR-146a rs2910164 Polymorphism in Breast Cancer

Authors: Marjan Moradi Fard, Hossein Rassi, Masoud Houshmand

Abstract:

Aim: Breast cancer is one of the most common cancers affecting the morbidity and mortality of Iranian women. This disease results from collective alterations of oncogenes and tumor suppressor genes. Studies have produced conflicting results concerning the role of the p53 codon 72 polymorphism (G>C) and the miR-146a rs2910164 polymorphism (G>C) in the risk of several cancers; therefore, a study was performed to estimate the association of the p53 codon 72 polymorphism and the miR-146a rs2910164 polymorphism with breast cancer. Methods and Materials: A total of 45 archival breast cancer samples from Khatam Hospital and 40 healthy samples were collected. Verification of each cancer reported in a relative was sought through the pathology reports in the hospital records. DNA was then extracted from all samples by standard methods, and the p53 codon 72 and miR-146a rs2910164 genotypes were analyzed using multiplex PCR. Tubule formation, mitotic activity, necrosis, pleomorphism, and the grade of breast cancer were staged by Nottingham histological grading, and immunohistochemical staining of sections from the paraffin-wax-embedded tissues for the expression of ER, PR, and p53 was carried out using a standard method. Finally, data analysis was performed using version 7 of the Epi Info™ 2012 software and the chi-square (χ²) test for trend. Results: Successful DNA extraction was confirmed by PCR amplification of the β-actin gene (99 bp). According to the results, the p53 GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of breast cancer in the study population. We established that tumors of the p53 GG genotype and the miR-146a rs2910164 CC genotype exhibited higher mitotic activity, higher pleomorphism, lower necrosis, lower tubule formation, more ER- and PR-negatives, and fewer TP53-positives than the other genotypes. Conclusion: The present study provides preliminary evidence that the p53 GG genotype may affect breast cancer risk in the study population, interacting synergistically with the miR-146a rs2910164 CC genotype. Our results demonstrate that testing the p53 codon 72 and miR-146a rs2910164 genotypes in combination with clinical parameters can serve to identify major risk factors in the early identification of breast cancers.
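
A genotype-association test of the kind reported can be illustrated with a chi-square test on a case/control-by-genotype contingency table (shown here with scipy's general independence test rather than the trend test named in the abstract; the counts are hypothetical, not the study's data).

```python
# Chi-square association test on a (hypothetical) genotype contingency table.
import numpy as np
from scipy.stats import chi2_contingency

#                    GG  GC  CC   (p53 codon 72 genotypes)
table = np.array([[24, 15,  6],   # breast cancer cases (n=45)
                  [12, 19,  9]])  # healthy controls (n=40)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```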

Keywords: breast cancer, p53 codon 72 polymorphism, miR-146a rs2910164 polymorphism, genotypes

Procedia PDF Downloads 313
156 Effectiveness of Qanun Number 14 of 2003 on Khalwat (Mesum) in the Enforcement of Islamic Shari'a in Banda Aceh, Aceh Province

Authors: Muhadam Labolo, Mughny Ibtisyam Mukhlis, Zulkarnaen, Safira Maulida Rahman Soulisa

Abstract:

This research is motivated by the fact that one of the functions of government is the regulatory function. Aceh Province, and especially Banda Aceh City, has special autonomy, including in the application of Islamic law; however, when the law is implemented among the citizens, many problems arise. One of the problems faced by the Government and people of Banda Aceh is seclusion (khalwat): the act of two or more mukallaf persons of the opposite sex who are not mahram being together in seclusion without marriage. This study aims to determine and analyze the effectiveness of the policy, as well as the enabling and inhibiting factors, of Qanun Number 14 of 2003 on Khalwat (Mesum) in the enforcement of Islamic law in the city of Banda Aceh. This research is qualitative, with a descriptive method and an inductive approach. The sources of data used are people, problems, phenomena, and programs, while data collection was carried out through field and literature studies, including interviews, observation, and documentation. The results of this study were analyzed using data reduction, data display, and conclusion drawing and verification. The results show that Qanun Number 14 of 2003 on Khalwat (Mesum) in the enforcement of Islamic law in Banda Aceh is still not effective. This is seen in the high number of seclusion violations committed by Banda Aceh citizens, especially among teenagers, the lack of socialization, as well as the lack of budgetary support for the implementation of Islamic law in Banda Aceh. The supporting factors are: 1) coordination and communication among agencies have been running steadily; 2) the facilities and infrastructure of the Syar'iah Court of Banda Aceh and the Office of Islamic Sharia of Banda Aceh are very good; and 3) the culture of the majority of the people of Banda Aceh is supportive. The inhibiting factors are: 1) the absence of written duties for each institution in the prosecution of seclusion cases; 2) the lack of socialization programs; 3) the inadequate facilities and infrastructure of the Municipal Police Unit and the Wilayatul Hisbah; 4) the lack of control by families; and 5) the absence of training for officials of the Municipal Police Unit and the Wilayatul Hisbah of Banda Aceh.

Keywords: effectiveness, Islamic Sharia, Khalwat, Qanun

Procedia PDF Downloads 202
155 Predicting and Optimizing the Mechanical Behavior of a Flax Reinforced Composite

Authors: Georgios Koronis, Arlindo Silva

Abstract:

This study seeks to understand the mechanical behavior of a natural fiber reinforced composite (epoxy/flax) in more depth, utilizing both experimental and numerical methods. It attempts to identify relationships between the design parameters and product performance, understand the effect of noise factors, and reduce process variations. Optimization of the mechanical performance of manufactured goods has recently been pursued in numerous studies of green composites; however, these studies are limited and have principally explored mass-production processes. This work is expected to yield knowledge about composite manufacturing that can be used to design artifacts produced in low batches and tailored to niche markets. The goal is to reach greater consistency in performance and to further understand which factors play significant roles in obtaining the best mechanical performance. A predictive response function of the process (under various operating conditions) is modeled by design of experiments (DoE). Normally, a full factorial designed experiment is required, consisting of all possible combinations of levels for all factors; an analytical assessment is possible, though, with just a fraction of the full factorial experiment. The research approach comprises evaluating the influence these variables have and how they affect the composite's mechanical behavior. The coupons will be fabricated by the vacuum infusion process, defined by three process parameters: flow rate, injection point position, and fiber treatment. Each process parameter is studied at two levels, along with their interactions, and the tensile and flexural properties will be obtained through mechanical testing to discover the key process parameters. In this setting, an experimental phase will follow in which a number of fabricated coupons are tested to validate the design of the experiment's setup. Finally, the results are validated by performing the optimum parameter set in a final series of experiments, as indicated by the DoE. It is expected that, after good agreement between the predicted and verification experimental values, the optimal processing parameters of the biocomposite lamina will be effectively determined.
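A minimal sketch of the design space described above: a 2-level, 3-factor experiment, plus the half-fraction that still estimates main effects. Factor names and level labels are illustrative assumptions, not the study's settings.

```python
# Enumerate a 2^3 full factorial design for the three process parameters.
from itertools import product

factors = {
    "flow_rate":       ["low", "high"],
    "injection_point": ["edge", "center"],
    "fiber_treatment": ["untreated", "treated"],
}

# Full factorial: all 2^3 = 8 runs
full_factorial = list(product(*factors.values()))
for run, levels in enumerate(full_factorial, start=1):
    print(run, dict(zip(factors, levels)))

# A half-fraction (2^(3-1) = 4 runs) can still estimate main effects:
# keep the runs whose +/-1 coded levels multiply to +1 (generator I = ABC).
coded = list(product([-1, 1], repeat=3))
half_fraction = [r for r in coded if r[0] * r[1] * r[2] == 1]
print(half_fraction)   # [(-1, -1, 1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
```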

Keywords: design of experiments, flax fabrics, mechanical performance, natural fiber reinforced composites

Procedia PDF Downloads 182
154 Designing of Induction Motor Efficiency Monitoring System

Authors: Ali Mamizadeh, Ires Iskender, Saeid Aghaei

Abstract:

Energy is one of the world's high-priority issues. Energy demand is increasing rapidly with the growth of population and industry, and the world's usable energy sources will be insufficient to meet the need; therefore, the efficient and economical use of energy sources is gaining importance. Surveys of electricity-consuming machines show that electrical machines consume about 40% of the total electrical energy used by electrical devices, and 96% of this consumption belongs to induction motors. Induction motors are the workhorses of industry, with very large application areas in industrial and urban systems such as water pumping and distribution, and the steel and paper industries. Monitoring and control of the motors have an important effect on operating performance, driver selection, and the replacement strategy management of electrical machines. A sensorless system for monitoring and calculating the efficiency of induction motors is studied here, designed around the IEEE equivalent circuit. The terminal current and voltage of the induction motor are used to measure its efficiency: the motor nameplate information and the measured current and voltage are used to calculate the motor's losses accurately and, from them, its input and output power. The efficiency of the induction motor is monitored online in the proposed method without disconnecting the motor from the driver and without adding any connection at the motor terminal box. The proposed system measures efficiency accurately, including all losses, without using a torque meter or speed sensor. It uses an embedded architecture and does not need to be connected to a computer to measure and log data. Conclusions regarding the efficiency, the accuracy, and the technical and economical benefits of the proposed method are presented. Experimental verification was obtained on a three-phase, 1.1 kW, 2-pole induction motor. The proposed method can be used for optimal control of induction motors, efficiency monitoring, and motor replacement strategy.
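A minimal sketch of loss-segregation efficiency estimation from terminal voltage and current, in the spirit of the equivalent-circuit approach the abstract describes; all parameter values are illustrative assumptions, and core plus friction-and-windage losses are treated as known constants for simplicity.

```python
# Efficiency of a three-phase induction motor from terminal measurements.
import math

def motor_efficiency(v_line, i_line, power_factor,
                     r_stator, i_rotor_eq, r_rotor,
                     p_core, p_friction_windage):
    """Estimate efficiency as output power over electrical input power."""
    p_in = math.sqrt(3) * v_line * i_line * power_factor   # input power, W
    p_scl = 3 * i_line**2 * r_stator                       # stator copper loss
    p_rcl = 3 * i_rotor_eq**2 * r_rotor                    # rotor copper loss
    p_out = p_in - p_scl - p_rcl - p_core - p_friction_windage
    return p_out / p_in

# Hypothetical values for a small machine like the 1.1 kW test motor:
eta = motor_efficiency(v_line=400, i_line=2.4, power_factor=0.82,
                       r_stator=7.5, i_rotor_eq=1.9, r_rotor=5.8,
                       p_core=45, p_friction_windage=15)
print(f"estimated efficiency: {eta:.1%}")   # roughly 81% for these inputs
```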

Keywords: induction motor, efficiency, power losses, monitoring, embedded design

Procedia PDF Downloads 322
153 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. The approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor in thick-seam mining technology, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling, and validation in laboratory conditions with calculation of relative errors, were carried out. A method for calculating the capacity of an apron feeder based on a machine vision system, together with a simplified three-dimensional model of the examined measuring area, is offered. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision, solving the problem of controlling the volume of coal produced by a feeder during extraction of thick seams by longwall complexes with release to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions (addition, subtraction, multiplication, and division), which simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection problem is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on the obstacle detection machine vision system. A sample fragment of obstacle detection at the moment of laser-grid distortion is demonstrated.
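A minimal sketch of the bulk-volume arithmetic the abstract describes, assuming the light-marker grid has already been converted to a height map over the feeder; cell size, density, and heights are illustrative assumptions.

```python
# Volume of moved material from a grid of measured heights, then mass flow.

def feeder_capacity_kg_per_s(heights_m, cell_area_m2, density_kg_m3, interval_s):
    """Sum per-cell volumes, convert to mass, and divide by the interval."""
    volume = 0.0
    for row in heights_m:
        for h in row:
            volume += h * cell_area_m2      # only multiplication and addition
    mass = volume * density_kg_m3
    return mass / interval_s                # kg moved per second

# 3 x 4 grid of heights (m) recovered from laser-grid distortion:
heights = [
    [0.10, 0.12, 0.11, 0.09],
    [0.14, 0.16, 0.15, 0.12],
    [0.08, 0.10, 0.09, 0.07],
]
print(feeder_capacity_kg_per_s(heights, cell_area_m2=0.05,
                               density_kg_m3=900, interval_s=1.0))
```

Restricting the arithmetic to addition and multiplication, as above, matches the abstract's point that the method runs on modest microcontrollers.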

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 83
152 Open Fields' Dosimetric Verification for a Commercially-Used 3D Treatment Planning System

Authors: Nashaat A. Deiab, Aida Radwan, Mohamed Elnagdy, Mohamed S. Yahiya, Rasha Moustafa

Abstract:

This study evaluates and investigates the dosimetric performance of our institution's 3D treatment planning system, Elekta PrecisePLAN, for open 6 MV fields, including square, rectangular, varied-SSD, centrally blocked, missing-tissue, square MLC, and MLC-shaped fields, guided by the recommended QA tests prescribed in AAPM TG-53, the NCS Report 15 test packages, IAEA TRS-430, and ESTRO Booklet No. 7. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6, and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Seven different tests were applied to a solid water-equivalent phantom together with a 2D array dose detection system, and the doses calculated by PrecisePLAN were compared with measured doses to verify that the dose calculations are accurate for the open-field configurations listed above. The QA results showed dosimetric accuracy of the TPS for open fields within the specified tolerance limits. However, for the large square (25 cm x 25 cm) and rectangular (20 cm x 5 cm) fields, some points in the penumbra region were out of tolerance (11.38% and 10.9%, respectively). In the SSD-variation test, the enlarged field produced at 125 cm SSD for a 10 cm x 10 cm field showed an error of 0.2% on the central axis and 1.01% in the penumbra. Overall, the results yielded differences within the accepted tolerance levels as recommended, with large fields showing variations in the penumbra. These differences between the dose values predicted by the TPS and the measured values at the same points may result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
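A minimal sketch of the point-dose comparison underlying such verification: the percentage deviation of calculated from measured dose, checked against a per-region tolerance. The tolerance values and dose points are illustrative assumptions, not the cited protocols' figures.

```python
# Compare TPS-calculated point doses with measurements against tolerances.

def dose_deviation(d_calc, d_meas):
    """Percent deviation of TPS-calculated dose from the measured dose."""
    return 100.0 * (d_calc - d_meas) / d_meas

tolerances = {"central_axis": 2.0, "penumbra": 10.0}  # assumed limits in %

points = [
    ("central_axis", 1.006, 1.004),   # (region, calculated Gy, measured Gy)
    ("penumbra",     0.520, 0.468),
]
for region, d_calc, d_meas in points:
    dev = dose_deviation(d_calc, d_meas)
    status = "PASS" if abs(dev) <= tolerances[region] else "FAIL"
    print(f"{region}: {dev:+.2f}% -> {status}")
# The second point fails at ~11%, mirroring the out-of-tolerance penumbra
# points reported for the large fields.
```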

Keywords: quality assurance, dose calculation, 3D treatment planning system, photon beam

Procedia PDF Downloads 478
151 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults

Authors: Michal Krecichwost, Zuzanna Miodonska, Pawel Badura

Abstract:

The diagnosis of sigmatism is mostly based on observation of the articulatory organs. It is, however, not always possible to observe the vocal apparatus precisely, particularly inside the patient's oral cavity. Speech processing can help objectify the therapy and simplify verification of its progress. In the described study, a methodology for classifying the incorrectly pronounced phoneme [s] is proposed. The recordings come from adults and were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bits. A database of pathological and normative speech was collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under speech therapy expert supervision. In the recordings, the analyzed phone [s] was surrounded by vowels, viz.: ASA, ESE, ISI, OSO, USU, YSY. Thirteen MFCCs (mel-frequency cepstral coefficients) and an RMS (root mean square) value are calculated within each frame belonging to the analyzed phoneme. Additionally, three fricative formants, along with their corresponding amplitudes, are determined for the entire segment. To aggregate the information within the segment, the average value of each MFCC coefficient is calculated, while all features of other types are aggregated by means of their 75th percentile. The proposed feature aggregation reduces the size of the feature vector used in classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage: one class consists of pathological phones, the other of normative ones. The proposed feature vector yields classification sensitivity and specificity above the 90% level for individual logatomes. Employing the fricative formant-based information improves the MFCC-only classification results by an average of 5 percentage points. The study shows that employing parameters specific to the selected phones improves the efficiency of pathology detection compared with traditional methods of speech signal parameterization.
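A minimal sketch of the described feature pipeline, assuming librosa and scikit-learn: 13 MFCCs averaged over the segment, the remaining per-frame feature aggregated by its 75th percentile, and a binary SVM at the recognition stage. The file names are placeholders, and the fricative-formant features are omitted for brevity.

```python
# Segment-level feature aggregation and binary SVM classification.
import numpy as np
import librosa
from sklearn.svm import SVC

def segment_features(wav_path):
    y, sr = librosa.load(wav_path, sr=44100)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
    rms = librosa.feature.rms(y=y)[0]                    # per-frame RMS
    mfcc_mean = mfcc.mean(axis=1)                        # average each MFCC
    rms_p75 = np.percentile(rms, 75)                     # 75th percentile
    return np.concatenate([mfcc_mean, [rms_p75]])        # compact vector

# X: feature vectors for phone [s] segments; y: 1 = pathological, 0 = normative
# (the paths below are placeholders for a labeled recording set)
X = np.array([segment_features(p) for p in ["asa_01.wav", "ese_01.wav"]])
y = np.array([1, 0])

clf = SVC(kernel="rbf")   # binary SVM at the recognition stage
clf.fit(X, y)
```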

Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing

Procedia PDF Downloads 256
150 Analysis of Impact of Airplane Wheels Pre-Rotating on Landing Gears of Large Airplane

Authors: Huang Bingling, Jia Yuhong, Liu Yanhui

Abstract:

As an important part of an aircraft, the landing gear is responsible for the take-off and landing functions. In recent years, the structural mass of large airplanes has increased considerably; as a result, landing gears face stricter technical requirements than ever before, such as structural strength. If the structural strength of the landing gear is enhanced through traditional methods, such as adding structural mass, the negative impacts on the landing gear's function are serious and can even counteract the positive effects. To solve this problem, the impact of pre-rotating the landing gear wheels on landing gear performance is studied here through theory and experimental verification. Increasing the pre-rotation speed of the wheels can improve landing gear performance and reduce the structural mass, the forces on joint parts, and other quantities. Pre-rotating the wheels also has other advantages, such as reducing the friction between the wheels and the ground and extending the life of the wheel. In this paper, the impact of pre-rotation speed on landing gears, and the connection between landing gear performance and pre-rotation speed, are researched in detail. The paper is divided into three parts. In the first part, a large airplane landing gear model is built in CATIA and LMS; the four-wheel landing gear, the most common type on large airplanes, is chosen as the model. The second part simulates the landing process in LMS Motion and studies the impact of wheel pre-rotation on the aircraft's properties, including the buffer stroke, efficiency, and power; the friction, displacement, and relative speed between piston and sleeve; and the force and load distribution of the tires. The simulation results characterize the behavior at the different pre-rotation speeds. The third part is the conclusion: based on the simulation data and the relationship between wheel pre-rotation speed and aircraft performance, a recommended speed interval is proposed. This paper is of theoretical value for improving the performance of large airplanes: setting a wheel pre-rotation speed is an effective way to improve aircraft performance without greatly increasing structural mass, eliminating the negative effects of the traditional methods.
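A minimal sketch of the kinematics behind pre-rotation: slip at touchdown is the gap between the airplane's ground speed and the tire's circumferential speed, and it vanishes when the pre-rotation speed matches the landing speed. The numbers are illustrative assumptions, not values from the simulation.

```python
# Relative speed between tire tread and runway at touchdown.
import math

def touchdown_slip(v_ground_ms, wheel_radius_m, pre_rotation_rpm):
    """Slip speed (m/s): ground speed minus tire circumferential speed."""
    omega = pre_rotation_rpm * 2.0 * math.pi / 60.0   # rad/s
    v_tire = omega * wheel_radius_m                   # circumferential speed
    return v_ground_ms - v_tire

v_landing = 70.0     # m/s, assumed large-airplane touchdown speed
radius = 0.62        # m, assumed tire radius

for rpm in (0, 500, 1000, 1078):
    print(f"{rpm:5d} rpm -> slip {touchdown_slip(v_landing, radius, rpm):6.1f} m/s")
# Slip approaches zero near 1078 rpm, where omega * r matches ground speed,
# minimizing tire friction at the moment of touchdown.
```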

Keywords: large airplane, landing gear, pre-rotating, simulation

Procedia PDF Downloads 305
149 The Effect of the Contributory Pension Scheme on Employees’ Performance

Authors: Oladipo Jimoh Ayanda, Fashagba Mathew Olasehinde

Abstract:

Pension is a post-retirement benefit paid to employees after retirement to cushion the effects of severance from monthly emoluments. It serves the dual purpose of providing financial succour to retired employees and motivating employees currently in service to greater performance on duty. However, the scheme as operated in Nigeria is prone to pitfalls such as delayed and irregular payments, inadequate budgetary provisions, and employee suffering and deaths arising from the rigors of verification exercises, among others. This necessitated the replacement of the old scheme with the contributory pension scheme through an enabling law in 2004. The implementation of the new scheme has its own challenges, especially in administration. These challenges pose the fundamental problem of establishing a nexus between pension benefits and work performance, which represents the focus of the study. The study objective was to determine the effect of the contributory pension scheme on employees' performance. The study population consisted of National Universities Commission-recognized public and private universities in South West Nigeria. A multi-stage sampling method involving stratified and systematic sampling was used to select 359 respondents, while data were collected through questionnaire administration. The procedures for analyzing the data included descriptive statistics, a normal distribution test, and cross-tabulation (gamma coefficient). The findings showed that the existence of the scheme positively enhances employees' performance, as indicated by the normal distribution test: the Z-score (10.169) is greater than the table value (1.96) at the 0.05 level. The study concluded that the scope for enhancing employees' current job performance can be quite elastic if future retirement benefits are guaranteed through proper and efficient administration and management of the contributory pension scheme. The study recommended that certain factors, such as employers' commitment, which account for different levels of confidence between public and private universities, be looked into in order to improve confidence across the board, and that the provisions of the scheme as they affect the PFAs be properly monitored to ensure compliance.
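A minimal sketch of the normal-distribution test reported above, assuming a one-sample Z-test against a neutral benchmark; the summary statistics are invented to reproduce a Z-score near the reported 10.169 and are not the study's data.

```python
# One-sample Z-test against a two-tailed critical value at the 0.05 level.
from math import sqrt
from scipy.stats import norm

def z_test(sample_mean, hypothesized_mean, sample_std, n, alpha=0.05):
    z = (sample_mean - hypothesized_mean) / (sample_std / sqrt(n))
    z_crit = norm.ppf(1 - alpha / 2)          # 1.96 for alpha = 0.05
    return z, z_crit, abs(z) > z_crit

# e.g. hypothetical performance scores on a 5-point scale, 359 respondents:
z, z_crit, significant = z_test(sample_mean=3.9, hypothesized_mean=3.0,
                                sample_std=1.677, n=359)
print(f"Z = {z:.3f}, critical = {z_crit:.2f}, significant = {significant}")
```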

Keywords: pension, retirement, performance, employees, benefit

Procedia PDF Downloads 301
148 Golden Dawn's Rhetoric on Social Networks: Populism, Xenophobia and Antisemitism

Authors: Georgios Samaras

Abstract:

New media such as Facebook, YouTube, and Twitter introduced the world to a new era of instant communication, an era in which online interactions could replace many offline actions. Technology can create a mediated environment in which participants communicate (one-to-one, one-to-many, and many-to-many), both synchronously and asynchronously, and participate in reciprocal message exchanges. Currently, social networks attract academic attention similar to that given to the internet after its mainstream entry into public life, and websites and platforms are seen as the forefront of new political change. There is a significant backdrop of previous methodologies for researching the effects of social networks, and new approaches are being developed to keep pace with the growth of social networks and the invention of new platforms. Golden Dawn was the first openly neo-Nazi party since World War II to win seats in the parliament of a European country. Its racist rhetoric and violent tactics on social networks were rewarded by its supporters, who saw in Golden Dawn's leaders a 'new dawn' in Greek politics. Mainstream media banned the party's leaders and members indefinitely after Ilias Kasidiaris attacked Liana Kanelli, a member of the Greek Communist Party, on live television. This media ban was seen as treasonous by a significant percentage of voters, who believed that the system was desperately trying to censor Golden Dawn to favor the mainstream parties. The shocking on-air attack received international coverage, and while European countries were condemning this newly emerged neo-Nazi rhetoric, almost 7 percent of the Greek population rewarded Golden Dawn with 18 seats in the Greek parliament. Many think that Golden Dawn mobilised its voters online and that this approach played a significant role in spreading its message and appealing to wider audiences. No strict online censorship existed back in 2012, and although Golden Dawn openly used neo-Nazi symbolism, it was allowed to use social networks without serious restrictions until 2017. This paper used qualitative methods to investigate Golden Dawn's rise on social networks from 2012 to 2019. The content analysis focused on three social networking platforms, Facebook, Twitter, and YouTube, while the existence of Golden Dawn's website, which was used as a news-sharing hub, was also taken into account. The content analysis included text and visual analyses that sampled content from the party's social networking pages to interpret its political messaging through an ideological lens focused on extreme-right populism. The absence of hate speech regulations on social network platforms in 2012 allowed the free expression of these heavily ultranationalist and populist views as Golden Dawn deployed them in the Greek political scene. On YouTube, Facebook, and Twitter, the influence of this rhetoric was particularly strong; official channels and MPs' profiles were investigated to explore the messaging in depth and understand its ideological elements.

Keywords: populism, far-right, social media, Greece, golden dawn

Procedia PDF Downloads 118