Search results for: time series data mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38136

27216 Secure Network Coding-Based Named Data Network Mutual Anonymity Transfer Protocol

Authors: Tao Feng, Fei Xing, Ye Lu, Jun Li Fang

Abstract:

NDN is a future Internet architecture. Because the NDN design introduces four privacy challenges, many research institutions have begun to address the privacy issues of Named Data Networking (NDN). In this paper, we examine privacy protection with respect to NDN's major privacy issues and then put forward a more effective anonymous transfer policy for NDN. Firstly, based on mutual anonymity communication in MP2P networks, we propose an NDN mutual anonymity protocol. Secondly, we add an interest packet authentication mechanism to the protocol and encrypt the coding coefficients, which improves the security of the protocol. Finally, we prove the security and anonymity of the proposed anonymous transfer protocol.

Keywords: NDN, mutual anonymity, anonymous routing, network coding, authentication mechanism

Procedia PDF Downloads 434
27215 Dielectric Spectroscopy Investigation of Hydrophobic Silica Aerogel

Authors: Deniz Bozoglu, Deniz Deger, Kemal Ulutas, Sahin Yakut

Abstract:

In recent years, silica aerogels have attracted great attention due to their outstanding properties and their wide variety of potential applications, such as microelectronics, nuclear and high-energy physics, optics and acoustics, superconductivity, and space physics. Hydrophobic silica aerogels were successfully synthesized in one step by surface modification at ambient pressure. FT-IR results confirmed that Si-OH groups were successfully converted into hydrophobic, non-polar Si-CH3 groups by surface modification using trimethylchlorosilane (TMCS) as a co-precursor. Using an Alpha-A High-Resolution Dielectric, Conductivity and Impedance Analyzer, the AC conductivity of the samples was examined in the temperature range 293-423 K and measured over the frequency range 1 Hz to 10^6 Hz. The characteristic relaxation time decreases with increasing temperature. The AC conductivity follows the relation σ_AC(ω) = σ_t − σ_DC = Aω^s at frequencies higher than 10 Hz, and the dominant conduction mechanism is found to obey the Correlated Barrier Hopping (CBH) mechanism. At frequencies lower than 10 Hz, the electrical conduction is found to be in accordance with a DC conduction mechanism. The activation energies were obtained from the AC conductivity results, and two relaxation regions were observed.
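
To make the quoted relation concrete, the sketch below fits the power law σ_AC(ω) = Aω^s to synthetic conductivity data; the frequency range and numerical values are hypothetical and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Power law quoted in the abstract: sigma_AC(omega) = sigma_t - sigma_DC = A * omega**s
def sigma_ac(omega, A, s):
    return A * omega ** s

# Hypothetical data: frequency points from 10 Hz to 10^6 Hz and the AC part of the conductivity
omega = np.logspace(1, 6, 30)
rng = np.random.default_rng(0)
sigma_meas = 1e-12 * omega ** 0.78 * (1.0 + 0.05 * rng.standard_normal(omega.size))

(A_fit, s_fit), _ = curve_fit(sigma_ac, omega, sigma_meas, p0=(1e-12, 0.8))
print(f"A = {A_fit:.3e}, s = {s_fit:.3f}")   # s < 1, decreasing with temperature, is consistent with CBH
```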

Keywords: aerogel, synthesis, dielectric constant, dielectric loss, relaxation time

Procedia PDF Downloads 179
27214 Machine Learning for Rational Decision-Making: Introducing Creativity to Teachers within a School System

Authors: Larry Audet

Abstract:

Creativity is suddenly and fortunately a new educational focus in the United Arab Emirates and around the world. Yet even today, many leaders of creativity are not sure how to introduce it to their teachers. It is impossible to simultaneously introduce every aspect of creativity into a work climate and reach any degree of organizational coherence. The number of alternatives to explore is so great, and the information teachers need to learn is so vast, that even an approximation to including every concept and theory of creativity in the school organization is hard to conceive. Effective leaders of creativity need evidence-based and practical guidance for introducing and stimulating creativity in others. Machine learning models reveal new findings from KEYS Survey© data about teacher perceptions of stimulants and barriers to their individual and collective creativity. Findings from predictive and causal models provide leaders with a rationale for decision-making when introducing creativity into their organization. Leaders should focus on management practices first. Analyses reveal that creative outcomes are more likely to occur when teachers perceive supportive management practices: providing teachers with challenging work that calls for their best efforts; allowing freedom and autonomy in their practice of work; allowing teachers to form creative work-groups; and recognizing them for their efforts. Once management practices are in place, leaders should focus their efforts on modeling risk-taking, providing optimal amounts of preparation time, and evaluating teachers fairly.

Keywords: creativity, leadership, KEYS survey, teaching, work climate

Procedia PDF Downloads 150
27213 Assessment of Genotoxic Effects of a Fungicide (Propiconazole) in Freshwater Fish Gambusia Affinis Using Alkaline Single-Cell Gel Electrophoresis (Comet Assay)

Authors: Bourenane Bouhafs Naziha

Abstract:

ARTEA330EC is a fungicide used to inhibit the growth of many types of fungi on cereals and rice; it is the single largest-selling agrochemical and has been widely detected in surface waters in our area (northeastern Algeria). Studies on the long-term genotoxic effects of fungicides in different tissues of fish using genotoxic biomarkers are limited. Therefore, in the present study, DNA damage caused by propiconazole in the freshwater fish Gambusia affinis was investigated by comet assays. The 96-h LC50 of the fungicide was estimated for the fish in a semi-static system. On the basis of the LC50 value, sublethal and nonlethal concentrations were determined (25, 50, 75, and 100 ppm). DNA damage was measured in erythrocytes as the percentage of DNA in comet tails of fish exposed to the above concentrations of the fungicide. In general, non-significant effects of both concentration and time of exposure were observed in treated fish compared with the controls. However, the highest DNA damage was observed at the highest concentration and the longest time of exposure (day 12). The study indicated the comet assay to be a sensitive and rapid method to detect the genotoxicity of propiconazole and other pesticides in fish.

Keywords: genotoxicity, fungicide, propiconazole, freshwater, Gambusia affinis, alkaline single-cell gel electrophoresis

Procedia PDF Downloads 286
27212 Predicting Consolidation Coefficient of Busan Clay by Time-Displacement-Velocity Methods

Authors: Thang Minh Le, Hadi Khabbaz

Abstract:

The coefficient of consolidation is a parameter governing the rate at which saturated soil, particularly clay, undergoes consolidation when subjected to an increase in pressure. The rate and amount of compression in soil vary with the rate at which pore water is lost and hence depend on soil permeability. Over many years, various methods have been proposed to determine the coefficient of consolidation, cv, which is an indication of the rate of foundation settlement on soft ground. However, defining this parameter is often problematic and relies heavily on graphical techniques, which are subject to some uncertainties. This paper initially presents an overview of many well-established methods to determine the vertical coefficient of consolidation from incremental loading consolidation tests. An array of consolidation tests was conducted on undisturbed clay samples collected at various depths from a site in the Nakdong River delta, Busan, South Korea. The consolidation test results on these soft, sensitive clay samples were employed to evaluate the targeted methods for predicting the settlement rate of Busan clay. In terms of the time-displacement-velocity relationship, a total of 3 method groups comprising 10 common procedures were classified and compared. A discussion of the study results is also provided.
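
As background to the time-displacement procedures being compared, the following minimal sketch computes cv with one classical graphical method, Taylor's square-root-of-time method (cv = 0.848·H_dr²/t90); the specimen geometry and t90 value are hypothetical, and this method is not necessarily one of the ten examined by the authors.

```python
def cv_taylor(drainage_path_m: float, t90_s: float) -> float:
    """Coefficient of consolidation from Taylor's sqrt-time method.

    Uses the theoretical time factor T_v = 0.848 at 90% average consolidation:
        c_v = 0.848 * H_dr**2 / t90
    """
    T_V_90 = 0.848
    return T_V_90 * drainage_path_m ** 2 / t90_s

# Hypothetical oedometer increment: 20 mm specimen with double drainage -> H_dr = 10 mm,
# t90 read from the sqrt(time)-settlement plot as 1200 s
cv = cv_taylor(drainage_path_m=0.010, t90_s=1200.0)
print(f"c_v = {cv:.2e} m^2/s")
```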

Keywords: Busan clay, coefficient of consolidation, constant rate of strain, incremental loading

Procedia PDF Downloads 173
27211 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database

Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang

Abstract:

For the past decades, immunohistochemistry (IHC) has been playing an important role in the diagnosis of human neoplasms by helping pathologists to make a clearer decision on differential diagnosis, subtyping, personalized treatment planning, and finally prognosis prediction. However, the IHC performed on various tumors in daily practice often shows conflicting and very challenging results to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic, and immunohistochemical findings can be helpless in some twisted cases. Another important issue is that IHC data are increasing exponentially and more and more information has to be taken into account. For this reason, we reached the idea of developing an expert supporting system to help pathologists make a better decision in diagnosing human neoplasms with IHC results. We devised a probabilistic decision tree algorithm and tested it with real case data of lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA), and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set with the epidemiologic data of lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 out of 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for discordant cases was the similarity of the IHC profile between two or three different neoplasms. The expert supporting system algorithm presented in this study is in its elementary stage and needs more optimization using more advanced technology, such as deep learning with real case data, especially in differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision making in the future. A further application to determine IHC antibodies for a certain subset of differential diagnoses might also be possible in the near future.
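
A minimal sketch of the kind of Bayesian update such a probabilistic decision tree could perform is shown below; the disease categories, priors, and marker likelihoods are hypothetical illustrations, not values from the authors' 104-category, 88-antibody database.

```python
# Hypothetical priors over a few lymphoma categories (epidemiologic frequencies would go here)
priors = {"DLBCL": 0.45, "Follicular": 0.20, "Mantle cell": 0.10, "Peripheral T-cell": 0.25}

# Hypothetical P(marker positive | disease) for a handful of IHC antibodies
likelihood_pos = {
    "CD20":      {"DLBCL": 0.95, "Follicular": 0.95, "Mantle cell": 0.95, "Peripheral T-cell": 0.05},
    "CD5":       {"DLBCL": 0.10, "Follicular": 0.02, "Mantle cell": 0.95, "Peripheral T-cell": 0.80},
    "Cyclin D1": {"DLBCL": 0.02, "Follicular": 0.01, "Mantle cell": 0.95, "Peripheral T-cell": 0.01},
}

def update(priors, ihc_results):
    """Sequential naive-Bayes update over disease categories given +/- IHC results."""
    post = dict(priors)
    for marker, positive in ihc_results.items():
        for d in post:
            p = likelihood_pos[marker][d]
            post[d] *= p if positive else (1.0 - p)
        total = sum(post.values())
        post = {d: v / total for d, v in post.items()}
    return post

case = {"CD20": True, "CD5": True, "Cyclin D1": True}
top3 = sorted(update(priors, case).items(), key=lambda kv: kv[1], reverse=True)[:3]
print(top3)   # presumptive diagnoses ranked by posterior probability (Mantle cell dominates here)
```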

Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree

Procedia PDF Downloads 212
27210 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariable approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered to be a more holistic approach. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, it can be applied to compare the similarity of different DEM data sets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM data sets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
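
A minimal sketch of the underlying two-sample Kolmogorov-Smirnov comparison (here on a single variable, using SciPy) is given below; the elevation samples are synthetic, and the joint convolution-based test described by the authors is not reproduced.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical 1-D samples drawn from the reference DEM02 and the evaluated DEM05 rasters;
# in practice these would be the elevation (or slope/aspect) values of the two grids.
rng = np.random.default_rng(0)
elev_dem02 = rng.normal(350.0, 40.0, 5000)
elev_dem05 = rng.normal(351.5, 42.0, 5000)

result = ks_2samp(elev_dem02, elev_dem05)
print(f"KS statistic = {result.statistic:.4f}, p = {result.pvalue:.4f}")
# Reject similarity at the 5% level if p < 0.05; the paper extends this idea to a single
# joint test on the convolution of the elevation, slope, and aspect distributions.
```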

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 101
27209 The Language of Landscape Architecture

Authors: Hosna Pourhashemi

Abstract:

Chahar Bagh, the symbol of the world displayed around the pool of life in the centre, attempts to emulate Eden. It represents a duality concept based on the division of the universe into two perceptional insights, a celestial and an earthly one. The Chahar Bagh garden pattern refers to the Garden of Eden, which was watered and framed by four main rivers. This microcosm is combined with a mystical love of flowers, sweet-scented trees, the variety of colors, and the sense of eternal life. This symbol of the integration of the spontaneous expressivity of the natural elements and the reasoning awareness of man strives for the ideal of divine perfection. Through collecting and analyzing the data, the prevalence and continuous influence of the Chahar Bagh concept on selected historical gardens was elaborated and evaluated. After the conquest of Persia by the Arabs in the 7th century, Chahar Bagh was adopted and spread throughout the Islamic expansion, from the Middle East, westward across northern Africa to Morocco and the Iberian Peninsula, and eastward through Iran to Central Asia and India. Furthermore, its continuity into the mid-16th century Renaissance period is shown. By adapting the semiotic theory of Peirce and Saussure to the Persian garden, Chahar Bagh was defined as the basic pattern language for garden culture. The hypothesis of the continuous influence of the Chahar Bagh pattern language on today's landscape architecture was examined in selected works of Dieter Kienast, as an important and relevant protagonist of his time (end of the twentieth century) and up to our time. The Chahar Bagh pattern language offers collective, culturally sensitive healing wisdom transmitted down through the millennia. Through my reflections on Dieter Kienast's works, I transformed my personal experience into a transpersonal understanding regarding Sufi philosophy and Jungian psychology, which brings me to define new design theories and methods to form a spiritual model, which I call the "Quaternary Perception Model", for landscape architecture. Based on a cognition process through self-journeying in this holistic model, human consciousness can be developed to access "higher" levels of being and embrace unity. The self-purification and mindfulness achieved through transpersonal confrontation in the "Quaternary Perception Model" generate a form of heart-based treatment. I adapted the seven spiritual levels of Sufi self-development to the perception of landscape, representing the stages of the self, ranging from absolutely self-centered to pure spiritual humanity. I redefine and reread the elements and features of the Chahar Bagh pattern language for today's landscape architecture. The "lost paradise" lies in our heart and can be perceived by all humans in landscapes and cities designed in the spirit of the "Quaternary Model".

Keywords: Persian garden, pattern language of Chahar Bagh, holistic perception, Dieter Kienast, quaternary model

Procedia PDF Downloads 68
27208 Causes of Construction Delays in Qatar Construction Projects

Authors: Murat Gunduz, Mohanad H. A. AbuHassan

Abstract:

The construction industry mainly focuses on superstructure, infrastructure, and the oil and gas industry. The development of infrastructure projects in developing countries has attracted many foreign construction contractors, consultants, suppliers, and a diversified workforce to participate and become involved in such huge investments. Reducing worksite delays in such projects requires knowledge and attention. Therefore, it is important to identify the influential delay attributes affecting construction projects. The significant project factors affecting construction delays were investigated. Data collection was carried out through an online web survey system to capture significant factors. Significant factors were determined with an importance index, and relevant recommendations are made. The output of the data analysis should help industry experts better assess the impact of construction delays on construction projects.
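
The abstract does not give the exact index formulation; assuming the relative importance index commonly used in construction-delay surveys, RII = Σw / (A·N), a minimal sketch for ranking delay factors could look like this (ratings are hypothetical):

```python
def relative_importance_index(ratings, max_scale=5):
    """Common RII formulation in construction-delay surveys: RII = sum(w) / (A * N)."""
    return sum(ratings) / (max_scale * len(ratings))

# Hypothetical Likert ratings (1-5) from survey respondents for two delay factors
factors = {
    "late payments by owner": [5, 4, 5, 4, 5, 3, 4],
    "design changes":         [3, 4, 2, 3, 4, 3, 3],
}
ranked = sorted(((f, relative_importance_index(r)) for f, r in factors.items()),
                key=lambda kv: kv[1], reverse=True)
for factor, rii in ranked:
    print(f"{factor}: RII = {rii:.3f}")
```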

Keywords: construction industry, delays, importance index, frequency index

Procedia PDF Downloads 334
27207 A Study of Soft Soil Improvement by Using Lime Grit

Authors: Ashim Kanti Dey, Briti Sundar Bhowmik

Abstract:

This paper presents an idea to improve the soft soil by using lime grits which are normally produced as waste product in the paper manufacturing industries. This waste material cannot be used as a construction material because of its light weight, uniform size and poor compaction control. With scarcity in land, effective disposal of lime grit is a major concern of all paper manufacturing industries. Considering its non-plasticity and high permeability characteristics the lime grit may suitably be used as a drainage material for speedy consolidation of cohesive soil. It can also be used to improve the bearing capacity of soft clay. An attempt has been made in this paper to show the usefulness of lime grit in improving the bearing capacity of shallow foundation resting on soft clayey soil. A series of undrained unconsolidated cyclic triaxial tests performed at different area ratios and at three different water contents shows that dynamic shear modulus and damping ratio can be substantially improved with lime grit. Improvement is observed to be more in case of higher area ratio and higher water content. Static triaxial tests were also conducted on lime grit reinforced clayey soil after application of 50 load cycles to determine the effect of lime grit columns on cyclically loaded clayey soils. It is observed that the degradation is less for lime grit stabilized soil. A study of model test with different area ratio of lime column installation is also included to see the field behaviour of lime grit reinforced soil.

Keywords: lime grit column, area ratio, shear modulus, damping ratio, strength ratio, improvement factor, degradation factor

Procedia PDF Downloads 490
27206 The Anti-Globalization Movement, Brexit, Outsourcing and the Current State of Globalization

Authors: Alexis Naranjo

Abstract:

On the current global stage, a new sense of mixed feelings against globalization has started to take shape thanks to events such as Brexit and the 2016 US election. Perceptions of globalization have started to coalesce in a resistance movement called the 'anti-globalization movement'. This paper examines the current global stage versus leadership decisions at a time when market integration is no longer seen as an opportunity for an economic growth booster. The biggest economy in the world, the United States of America, has started to face the beginning of something called 'anti-globalization'; on the current global stage, starting with the United Kingdom and moving to the United States, a new strategy to help local economies has started to emerge. A new nationalist movement has started to focus on local economies, which now represents a direct threat to globalization, trade agreements, wages, and free markets. Business leaders of multinationals nowadays face a new dilemma: how to address the feeling that globalization and outsourcing destroy and take away jobs from local economies. An initial reading of the literature and data reveals that companies in Western countries like the US see many risks associated with outsourcing; however, the cost savings associated with outsourcing weigh more heavily than the firm's local reputation. Starting with India as a good example of a supplier of IT developers, analysts, and call centers, we can say that India is an industrialized nation which has not yet secured its spot and title. India has emerged as a powerhouse in the outsourcing industry, which gives India the number one spot in the world for outsourced IT services. Thanks to the globalization of economies and markets around the globe, new ideas to increase productivity at a lower cost have existed for years and have started to offer new options to businesses in different industries. The economic growth of the information technology (IT) industry in India is an example of the power of globalization, which in the case of India has been tremendous and significant, especially in the economic arena. This research paper concentrates on understanding the behavior of business leaders: first, how multinational leaders will face the new challenges and what actions help them to lead in turbulent times; second, if outsourcing or withdrawal from a market is an option, what the consequences are and how to communicate and negotiate from the business leader's perspective; and finally, whether the perception of leaders focuses on financial results or they have a different goal. To answer these questions, this study focuses on the most recent data available to outline and present the findings on why outsourcing is an option and, second, how and why those decisions are made. This research also explores the perception of the phenomenon of outsourcing in many ways and explores how globalization has contributed to its own questioning.

Keywords: anti-globalization, globalization, leadership, outsourcing

Procedia PDF Downloads 177
27205 Reliability of Swine Estrous Detector Probe in Dairy Cattle Breeding

Authors: O. O. Leigh, L. C. Agbugba, A. O. Oyewunmi, A. E. Ibiam, A. Hassan

Abstract:

Accuracy of insemination timing is a key determinant of high pregnancy rates in livestock breeding stations. The estrous detector probes are a recent introduction into the Nigerian livestock farming sector. Many of these probes are species-labeled and they measure changes in the vaginal mucus resistivity (VMR) during the stages of the estrous cycle. With respect to size and shaft conformation, the Draminski® swine estrous detector probe (sEDP) is quite similar to the bovine estrous detector probe. We investigated the reliability of the sEDP at insemination time on two farms designated as FM A and FM B. Cows (Bunaji, n=20 per farm) were evaluated for VMR at 16th h post standard OvSynch protocol, with concurrent insemination on FM B only. The difference in the mean VMR between FM A (221 ± 24.36) Ohms and FM B (254 ± 35.59) Ohms was not significant (p > 0.05). Sixteen cows (80%) at FM B were later (day 70) confirmed pregnant via rectal palpation and calved at term. These findings suggest consistency in VMR evaluated with sEDP at insemination as well as a high predictability for VMR associated with good pregnancy rates in dairy cattle. We conclude that Draminski® swine estrous detector probe is reliable in determining time of insemination in cattle breeding stations.

Keywords: dairy cattle, insemination, swine estrous probe, vaginal mucus resistivity

Procedia PDF Downloads 113
27204 Removal of Nickel Ions from Industrial Effluents by Batch and Column Experiments: A Comparison of Activated Carbon with Pinus Roxburgii Saw Dust

Authors: Sardar Khana, Zar Ali Khana

Abstract:

Rapid industrial development and urbanization contribute greatly to wastewater discharge. Wastewater from industrial activities enters natural aquatic ecosystems and is considered one of the main sources of water pollution. The discharge of effluents loaded with heavy metals into the surrounding environment has become a key issue regarding human health risk, the environment, and food chain contamination. Nickel causes fatigue, cancer, headache, heart problems, skin diseases (nickel itch), and respiratory disorders. Nickel compounds such as nickel sulfide and nickel oxides in the industrial environment, if inhaled, are associated with an increased risk of lung cancer. Therefore, the removal of nickel from effluents before discharge is necessary. Removal of nickel by low-cost biosorbents is an efficient method. This study aimed to investigate the efficiency of activated carbon and Pinus roxburgii saw dust for the removal of nickel from industrial effluents using commercial activated carbon and raw P. roxburgii saw dust. Batch and column adsorption experiments were conducted for the removal of nickel. The study indicates that the removal of nickel is greatly dependent on pH, contact time, nickel concentration, and adsorbent dose. Maximum removal occurred at pH 9, a contact time of 600 min, and an adsorbent dose of 1 g/100 mL. The highest removal was 99.62% and 92.39% (pH based), 99.76% and 99.9% (dose based), 99.80% and 100% (agitation time based), and 92% and 72.40% (Ni concentration based) for P. roxburgii saw dust and activated carbon, respectively. Similarly, nickel removal in column adsorption was 99.77% and 99.99% (bed height based), 99.80% and 99.99% (concentration based), and 99.98% and 99.81% (flow rate based) during the column studies using P. roxburgii saw dust and activated carbon, respectively. The results were compared with the Freundlich isotherm model, which showed r² values of 0.9424 (activated carbon) and 0.979 (P. roxburgii saw dust), while the Langmuir isotherm model values were 0.9285 (activated carbon) and 0.9999 (P. roxburgii saw dust). The experimental results were fitted to both models, but the results were in closer agreement with the Langmuir isotherm model.
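
As an illustration of how the Freundlich and Langmuir models are fitted and compared via r², the sketch below uses SciPy curve fitting on hypothetical equilibrium data; it is not the authors' dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):        # q = q_max * K * C / (1 + K * C)
    return q_max * K * C / (1.0 + K * C)

def freundlich(C, K_f, n):        # q = K_f * C**(1/n)
    return K_f * C ** (1.0 / n)

# Hypothetical equilibrium data: residual Ni concentration C_e (mg/L) and uptake q_e (mg/g)
C_e = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])
q_e = np.array([3.1, 6.0, 9.2, 12.5, 15.1, 16.0])

popt_L, _ = curve_fit(langmuir, C_e, q_e, p0=(20.0, 0.1))
popt_F, _ = curve_fit(freundlich, C_e, q_e, p0=(2.0, 2.0))

for name, model, p in (("Langmuir", langmuir, popt_L), ("Freundlich", freundlich, popt_F)):
    ss_res = np.sum((q_e - model(C_e, *p)) ** 2)
    r2 = 1.0 - ss_res / np.sum((q_e - q_e.mean()) ** 2)
    print(f"{name}: parameters = {np.round(p, 3)}, r^2 = {r2:.4f}")
```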

Keywords: nickel removal, batch, and column, activated carbon, saw dust, plant uptake

Procedia PDF Downloads 118
27203 An Evaluation of the Implementation of Training and Development in a South African Municipality

Authors: Granny K. Lobega, Ntsako Idrs Makamu

Abstract:

This paper evaluates the implementation of training and development in a South African municipality. The paper adopted a qualitative research approach. Primary data were collected from 20 participants who were sampled from the municipality, and data were collected using semi-structured interviews. The main objective of the study was to assess the reasons for the implementation of the training and development program by the municipality. The study revealed that workers are helped to focus, priority is placed on empowering employees, productivity is increased, and the program contributes to better team morale. The study recommended that the municipality establish proper procedures to be followed when selecting qualifying employees to attend training and further use a training audit to establish the necessary training to be offered to qualifying employees.

Keywords: training, development, municipality, evaluation, human resource management

Procedia PDF Downloads 132
27202 Specification and Unification of All Fundamental Forces Exist in Universe in the Theoretical Perspective – The Universal Mechanics

Authors: Surendra Mund

Abstract:

In the beginning, the physical entity force was defined mathematically by Sir Isaac Newton in his Principia Mathematica as F = dp/dt, in the form of his second law of motion. Newton also defined his universal law of gravitational force in the same outstanding book, but at the end of the 20th century and the beginning of the 21st century, we have tried a lot to specify and unify the four or five fundamental forces or interactions that exist in the universe, and we failed every time. Usually, gravity creates problems in this unification every single time, but in my previous papers and presentations, I defined and derived field and force equations for gravitation-like interactions for each and every kind of central system. This force is named Variational Force by me, and it is generated by variation in the scalar field density around the body. In this particular paper, I first specify which types of interactions are fundamental in the universal sense (or in all types of central systems or bodies predicted by my N-time Inflationary Model of the Universe) and then unify them in the universal framework (defined and derived by me as Universal Mechanics in a separate paper) as well. This will also be valid in the universal dynamical sense, which includes inflations and deflations of the universe, central system relativity, universal relativity, ϕ-ψ transformation and transformation of spin, the physical perception principle, the Generalized Fundamental Dynamical Law, and many other important generalized principles of Generalized Quantum Mechanics (GQM) and Central System Theory (CST). So, in this article, I first generalize some fundamental principles, then unify Variational Forces (the general form of gravitation-like interactions) and Flow Generated Forces (the general form of EM-like interactions), and then unify all fundamental forces by specifying weak and strong interactions in terms of more basic interactions: variational, flow generated, and transformational.

Keywords: Central System Force, Disturbance Force, Flow Generated Forces, Generalized Nuclear Force, Generalized Weak Interactions, Generalized EM-Like Interactions, Imbalance Force, Spin Generated Forces, Transformation Generated Force, Unified Force, Universal Mechanics, Uniform And Non-Uniform Variational Interactions, Variational Interactions

Procedia PDF Downloads 35
27201 Prevalence and Risk Factors of Musculoskeletal Disorders among School Teachers in Mangalore: A Cross Sectional Study

Authors: Junaid Hamid Bhat

Abstract:

Background: Musculoskeletal disorders are one of the main causes of occupational illness. Mechanisms and the factors like repetitive work, physical effort and posture, endangering the risk of musculoskeletal disorders would now appear to have been properly identified. Teacher’s exposure to work-related musculoskeletal disorders appears to be insufficiently described in the literature. Little research has investigated the prevalence and risk factors of musculoskeletal disorders in teaching profession. Very few studies are available in this regard and there are no studies evident in India. Purpose: To determine the prevalence of musculoskeletal disorders and to identify and measure the association of such risk factors responsible for developing musculoskeletal disorders among school teachers. Methodology: An observational cross sectional study was carried out. 500 school teachers from primary, middle, high and secondary schools were selected, based on eligibility criteria. A signed consent was obtained and a self-administered, validated questionnaire was used. Descriptive statistics was used to compute the statistical mean and standard deviation, frequency and percentage to estimate the prevalence of musculoskeletal disorders among school teachers. The data analysis was done by using SPSS version 16.0. Results: Results indicated higher pain prevalence (99.6%) among school teachers during the past 12 months. Neck pain (66.1%), low back pain (61.8%) and knee pain (32.0%) were the most prevalent musculoskeletal complaints of the subjects. Prevalence of shoulder pain was also found to be high among school teachers (25.9%). 52.0% subjects reported pain as disabling in nature, causing sleep disturbance (44.8%) and pain was found to be associated with work (87.5%). A significant association was found between musculoskeletal disorders and sick leaves/absenteeism. Conclusion: Work-related musculoskeletal disorders particularly neck pain, low back pain, and knee pain, is highly prevalent and risk factors are responsible for the development of same in school teachers. There is little awareness of musculoskeletal disorders among school teachers, due to work load and prolonged/static postures. Further research should concentrate on specific risk factors like repetitive movements, psychological stress, and ergonomic factors and should be carried out all over the country and the school teachers should be studied carefully over a period of time. Also, an ergonomic investigation is needed to decrease the work-related musculoskeletal disorder problems. Implication: Recall bias and self-reporting can be considered as limitations. Also, cause and effect inferences cannot be ascertained. Based on these results, it is important to disseminate general recommendations for prevention of work-related musculoskeletal disorders with regards to the suitability of furniture, equipment and work tools, environmental conditions, work organization and rest time to school teachers. School teachers in the early stage of their careers should try to adapt the ergonomically favorable position whilst performing their work for a safe and healthy life later. Employers should be educated on practical aspects of prevention to reduce musculoskeletal disorders, since changes in workplace and work organization and physical/recreational activities are required.

Keywords: work related musculoskeletal disorders, school teachers, risk factors funding, medical and health sciences

Procedia PDF Downloads 262
27200 A Concept for Flexible Battery Cell Manufacturing from Low to Medium Volumes

Authors: Tim Giesen, Raphael Adamietz, Pablo Mayer, Philipp Stiefel, Patrick Alle, Dirk Schlenker

Abstract:

The competitiveness and success of new electrical energy storages such as battery cells are significantly dependent on a short time-to-market. Producers who decide to supply new battery cells to the market need to be easily adaptable in manufacturing with respect to the early customers’ needs in terms of cell size, materials, delivery time and quantity. In the initial state, the required output rates do not yet allow the producers to have a fully automated manufacturing line nor to supply handmade battery cells. Yet there was no solution for manufacturing battery cells in low to medium volumes in a reproducible way. Thus, in terms of cell format and output quantity, a concept for the flexible assembly of battery cells was developed by the Fraunhofer-Institute for Manufacturing Engineering and Automation. Based on clustered processes, the modular system platform can be modified, enlarged or retrofitted in a short time frame according to the ordered product. The paper shows the analysis of the production steps from a conventional battery cell assembly line. Process solutions were found by using I/O-analysis, functional structures, and morphological boxes. The identified elementary functions were subsequently clustered by functional coherences for automation solutions and thus the single process cluster was generated. The result presented in this paper enables to manufacture different cell products on the same production system using seven process clusters. The paper shows the solution for a batch-wise flexible battery cell production using advanced process control. Further, the performed tests and benefits by using the process clusters as cyber-physical systems for an integrated production and value chain are discussed. The solution lowers the hurdles for SMEs to launch innovative cell products on the global market.

Keywords: automation, battery production, carrier, advanced process control, cyber-physical system

Procedia PDF Downloads 318
27199 Arithmetic Operations Based on Double Base Number Systems

Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan

Abstract:

The Double Base Number System (DBNS) is an emerging system for representing a number using two bases, namely 2 and 3, and has applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The previous binary method of representation used only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. By using this algorithm, the number of digits required to represent a large number is smaller than with the standard binary method that uses base 2. Hence, the computational speed is increased and the time required is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the binary digit 1 alone to represent any number (canonical form). The greedy algorithm provides two ways to represent a number: one using only positive summands and the other using both positive and negative summands. In this paper, these arithmetic operations are used for elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation for most day-to-day elliptic curve cryptography and appears to be considerably harder than the ordinary discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the help of the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and to extract the key during decryption.
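
A minimal sketch of the greedy double-base decomposition described above (positive summands only) is given below; it simply subtracts the largest term 2^a·3^b at each step.

```python
def greedy_dbns(n):
    """Greedy double-base (2,3) decomposition: repeatedly subtract the largest 2**a * 3**b <= n.

    Returns a list of (a, b) exponent pairs such that n == sum(2**a * 3**b over the pairs).
    (Positive summands only; the signed variant also allows negative terms.)
    """
    terms = []
    while n > 0:
        best, best_ab = 0, (0, 0)
        b, p3 = 0, 1
        while p3 <= n:
            a = (n // p3).bit_length() - 1      # largest a with 2**a * 3**b <= n
            val = (1 << a) * p3
            if val > best:
                best, best_ab = val, (a, b)
            b, p3 = b + 1, p3 * 3
        terms.append(best_ab)
        n -= best
    return terms

print(greedy_dbns(160))   # e.g. [(4, 2), (4, 0)] since 160 = 2**4 * 3**2 + 2**4
```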

Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm

Procedia PDF Downloads 386
27198 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emissions is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx. This limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The various advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for engine calibration and development. Moreover, the focus of this work is towards establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), the NO2/NOx ratio, etc.
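
A minimal sketch of the ensemble-learning step, assuming burned-zone temperature, O2 concentration, and trapped fuel mass as inputs, is shown below; the features, data, and regressor choice are illustrative assumptions rather than the authors' actual setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical steady-state dataset: in-cylinder features from a predictive combustion model
rng = np.random.default_rng(1)
n = 400
X = np.column_stack([
    rng.uniform(1800, 2600, n),    # burned-zone temperature [K]
    rng.uniform(0.05, 0.20, n),    # O2 concentration in the burned zone [-]
    rng.uniform(10, 80, n),        # trapped fuel mass [mg]
])
# Synthetic NOx target with an Arrhenius-like thermal dependence, for illustration only
y = 1e6 * np.exp(-38000.0 / X[:, 0]) * X[:, 1] ** 0.5 * X[:, 2] * (1 + 0.05 * rng.normal(size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
print("R^2 on held-out points:", round(r2_score(y_te, model.predict(X_te)), 3))
```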

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 102
27197 A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics

Authors: Nadir A. Carreon, Christa Sonderer, Aakarsh Rao, Roman Lysecky

Abstract:

With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly with their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. Therefore, it is important to assess the potential impact on the security and safety of exploiting a vulnerability in such critical medical systems. The common vulnerability scoring system (CVSS) calculates the severity of exploitable vulnerabilities. However, for medical devices it does not consider the unique challenges of impacts to human health and privacy. Thus, the scoring of a medical device on which human life depends (e.g., pacemakers, insulin pumps) can score very low, while a system on which human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we propose a medical vulnerability scoring system (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact, and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life-threatening). We evaluate fifteen different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical device-oriented vulnerability scoring systems and the foundational CVSS.
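
As a purely hypothetical illustration of how health and sensitivity impacts might adjust a base vulnerability score, consider the sketch below; it is not the MVSS formula proposed by the authors, and the weighting factors are invented for demonstration.

```python
# Hypothetical illustration only: scale a given CVSS base score by health- and
# data-sensitivity factors, then cap at 10. The actual MVSS weighting is not
# published in this abstract and may differ substantially.
HEALTH_IMPACT = {"none": 1.0, "harmful": 1.2, "life_threatening": 1.5}
SENSITIVITY   = {"none": 1.0, "personal": 1.1, "clinical": 1.25}

def mvss_like_score(cvss_base: float, health: str, sensitivity: str) -> float:
    score = cvss_base * HEALTH_IMPACT[health] * SENSITIVITY[sensitivity]
    return round(min(score, 10.0), 1)

# A pacemaker firmware flaw vs. a hospital archive flaw with the same CVSS base score
print(mvss_like_score(6.5, "life_threatening", "clinical"))  # pushed toward 10
print(mvss_like_score(6.5, "none", "personal"))              # stays close to the base score
```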

Keywords: common vulnerability system, medical devices, medical device security, vulnerabilities

Procedia PDF Downloads 149
27196 Problems of ICT Adoption in Nigerian Small and Medium Scale Enterprises

Authors: Ajayi Adeola

Abstract:

The study examined the sources of revenue in Osun State. It determined the impact of revenue consultants on the internally generated revenue of Osun State Government, all with a view to surveying the expenditure pattern of the state. In the course of carrying out the study, data were collected primarily through interview method. Four principal officers in the financial sector were interviewed. However, secondary sources of data were collected from Osun State of Nigeria audited reports and financial statements for the year ended 31st December, 1997 to 2006. The data generated were analyzed using percentages and pie-chart for illustrations. The findings of the study revealed that the sources of revenue for Osun State Government included internally generated revenue (IGR), statutory allocation, value added tax (VAT) and capital projects. It also discovered that Statutory Allocation was the dominant sources of government revenue during the period of study. It accounted for 63.69% while IGR was 19.7%, value added tax (VAT) 8.07% and capital Receipts 8.48%. The study also discovered that the recurrent expenditure overshot the capital expenditure during the period of study on ratio 7:3 respectively while the state recorded surplus budget in seven times and deficit budgets in 2003 and 2004. The study concluded that the Osun State government was over dependent on external sources to finance recurrent and capital expenditure during the period of study.

Keywords: information communication technology, ICT adoption, ICT solution, small and medium scale enterprises

Procedia PDF Downloads 380
27195 Family of Density Curves of Queensland Soils from Compaction Tests, on a 3D Z-Plane Function of Moisture Content, Saturation, and Air-Void Ratio

Authors: Habib Alehossein, M. S. K. Fernando

Abstract:

Soil density depends on the volume of the voids and the proportion of the water and air in the voids. However, there is a limit to the contraction of the voids at any given compaction energy, whereby additional water is used to reduce the void volume further by lubricating the particles' frictional contacts. Hence, at an optimum moisture content and specific compaction energy, the density of unsaturated soil can be maximized where the void volume is minimum. However, when considering a full compaction curve and permutations and variations of all these components (soil, air, water, and energy), laboratory soil compaction tests can become expensive, time-consuming, and exhausting. Therefore, analytical methods constructed on a few test data can be developed and used to reduce such unnecessary efforts significantly. Concentrating on the compaction testing results, this study discusses the analytical modelling method developed for some fine-grained and coarse-grained soils of Queensland. Soil properties and characteristics, such as full functional compaction curves under various compaction energy conditions, were studied and developed for a few soil types. Using MATLAB, several generic analytical codes were created for this study, covering all possible compaction parameters and results as they occur in a soil mechanics lab. These MATLAB codes produce a family of curves to determine the relationships between the density, moisture content, void ratio, saturation, and compaction energy.
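
A minimal sketch (in Python rather than the authors' MATLAB) of the standard phase relationships that generate such families of dry-density curves is given below; the specific gravity and moisture values are illustrative.

```python
import numpy as np

RHO_W = 1000.0   # water density [kg/m^3]

def dry_density_air_voids(w, Gs=2.65, Av=0.0):
    """Dry density on a constant air-void line: rho_d = Gs*rho_w*(1 - Av) / (1 + w*Gs)."""
    return Gs * RHO_W * (1.0 - Av) / (1.0 + w * Gs)

def dry_density_saturation(w, Gs=2.65, S=1.0):
    """Dry density on a constant saturation line: rho_d = Gs*rho_w / (1 + w*Gs/S)."""
    return Gs * RHO_W / (1.0 + w * Gs / S)

w = np.linspace(0.05, 0.30, 6)                     # gravimetric moisture content
print(np.round(dry_density_air_voids(w, Av=0.0)))  # zero-air-voids (fully saturated) line
print(np.round(dry_density_air_voids(w, Av=0.05))) # 5% air-void line
print(np.round(dry_density_saturation(w, S=0.8)))  # 80% saturation line
```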

Keywords: analytical, MATLAB, modelling, compaction curve, void ratio, saturation, moisture content

Procedia PDF Downloads 70
27194 Microstracture of Iranian Processed Cheese

Authors: R. Ezzati, M. Dezyani, H. Mirzaei

Abstract:

The effects of the concentration of trisodium citrate (TSC) emulsifying salt (0.25 to 2.75%) and holding time (0 to 20 min) on the textural, rheological, and microstructural properties of Iranian Processed Cheese Cheddar cheese were studied using a central composite rotatable design. The loss tangent parameter (from small amplitude oscillatory rheology), extent of flow, and melt area (from the Schreiber test) all indicated that the meltability of process cheese decreased with increased concentration of TSC and that holding time led to a slight reduction in meltability. Hardness increased as the concentration of TSC increased. Fluorescence micrographs indicated that the size of fat droplets decreased with an increase in the concentration of TSC and with longer holding times. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is due to residual colloidal calcium phosphate, decreased as the concentration of TSC increased. The soluble phosphate content increased as concentration of TSC increased. However, the insoluble Ca decreased with increasing concentration of TSC. The results of this study suggest that TSC chelated Ca from colloidal calcium phosphate and dispersed casein; the citrate-Ca complex remained trapped within the process cheese matrix. Increasing the concentration of TSC helped to improve fat emulsification and casein dispersion during cooking, both of which probably helped to reinforce the structure of process cheese.

Keywords: Iranian processed cheese, cheddar cheese, emulsifying salt, rheology

Procedia PDF Downloads 431
27193 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland

Authors: Alireza Ansariyar, Safieh Laaly

Abstract:

Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electrical technologies in vehicle design, with the primary hope of improving mobility and road safety while reducing transportation's environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates CAVs' fuel consumption and air pollutants (CO, PM, and NOx) and utilizes meaningful linear regression models to predict CAVs' environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. The amounts of pollutants and fuel consumption were obtained for the time interval 2010 to 2021 from the macro simulation. Eventually, four linear regression models were proposed to predict the amounts of CO, NOx, and PM pollutants and fuel consumption in the future. The results highlighted that CAVs' pollutants and fuel consumption have a significant correlation with the income, age, and race of the CAV customers. Furthermore, the reliability of the four statistical models was compared with the reliability of the macro simulation model outputs for the year 2030. The error for the three pollutants and fuel consumption was less than 9% for the statistical models in SPSS. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in MD.
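
A minimal sketch of one of the four linear models (here NOx regressed on the demographic predictors named in the abstract) is shown below, using ordinary least squares; the data and coefficients are synthetic and only illustrate the modeling step.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical zone-level data for the predictors mentioned in the abstract
rng = np.random.default_rng(2)
n = 200
income = rng.uniform(30, 150, n)          # median income [k$]
age = rng.uniform(20, 70, n)              # mean CAV customer age [years]
group_share = rng.uniform(0, 1, n)        # share of a demographic group [-]
nox = 12 - 0.03 * income + 0.05 * age + 2.0 * group_share + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([income, age, group_share]))
model = sm.OLS(nox, X).fit()
print(model.params)       # fitted intercept and coefficients
print(model.rsquared)     # goodness of fit of the linear model
```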

Keywords: connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models

Procedia PDF Downloads 428
27192 Advanced Phosphorus-Containing Polymer Materials towards Eco-Friendly Flame Retardant Epoxy Thermosets

Authors: Ionela-Daniela Carja, Diana Serbezeanu, Tachita Vlad-Bubulac, Corneliu Hamciuc

Abstract:

Nowadays, epoxy materials are extensively used in ever more areas and under ever more demanding environmental conditions due to their remarkable combination of properties, light weight and ease of processing. However, these materials greatly increase the fire risk due to their flammability and possible release of toxic by-products as a result of their chemical composition which consists mainly from carbon and hydrogen atoms. Therefore, improving the fire retardant behaviour to prevent the loss of life and property is of particular concern among government regulatory bodies, consumers and manufacturers alike. Modification of epoxy resins with organophosphorus compounds, as reactive flame retardants or additives, is the key to achieving non-flammable advanced epoxy materials. Herein, a detailed characterization of fire behaviour for a series of phosphorus-containing epoxy thermosets is reported. A carefully designed phosphorus flame retardant additive was simply blended with a bifunctional bisphenol-A based epoxy resin. Further thermal cross-linking in the presence of various aminic hardeners led to eco-friendly flame retardant epoxy resins. The type of hardener, concentration of flame retardant additive, compatibility between the components of the mixture, char formation and morphology, thermal stability, flame retardant mechanisms were investigated. It was found that even a very low content of phosphorus introduced into the epoxy matrix increased the limiting oxygen index value to about 30%. In addition, the peak of the heat release rate value decreased up to 45% as compared to the one of the neat epoxy system. The main flame retardant mechanism was the condensed-phase one as revealed by SEM and XPS measurements.

Keywords: condensed-phase mechanism, eco-friendly phosphorus flame retardant, epoxy resin, thermal stability

Procedia PDF Downloads 298
27191 The Relationship between Spiritual Well-Being and the Quality of Life among Older Adults Who Live in Aged Institutions

Authors: Li-Fen Wu

Abstract:

Spiritual well-being is one aspect of quality of life that can significantly improve the quality of life of individuals. However, reports on the spiritual well-being of older adults who live in aged institutions are few. This study aims to identify the relationship between spiritual well-being and quality of life among older adults residing in aged institutions in Taiwan. A correlational study design was used. Data were collected using basic personal information, the Spiritual Index of Well-Being Scale, and the EuroQol-5D-3L. Case managers helped participants complete the questionnaires. This study used descriptive statistics and correlation tests to analyze the data. The study found a positive correlation between spiritual well-being and quality of life. Based on the correlation between spiritual well-being and quality-of-life scores, awareness of the importance of spiritual well-being in caring for these people is recommended.

Keywords: older adult, spiritual well-being, quality of life, aged institution

Procedia PDF Downloads 244
27190 Identifying the Structural Components of Old Buildings from Floor Plans

Authors: Shi-Yu Xu

Abstract:

The top three risk factors that have contributed to building collapses during past earthquake events in Taiwan are: "irregular floor plans or elevations," "insufficient columns in single-bay buildings," and the "weak-story problem." Fortunately, these unsound structural characteristics can be directly identified from the floor plans. However, due to the vast number of old buildings, conducting manual inspections to identify these compromised structural features in all existing structures would be time-consuming and prone to human errors. This study aims to develop an algorithm that utilizes artificial intelligence techniques to automatically pinpoint the structural components within a building's floor plans. The obtained spatial information will be utilized to construct a digital structural model of the building. This information, particularly regarding the distribution of columns in the floor plan, can then be used to conduct preliminary seismic assessments of the building. The study employs various image processing and pattern recognition techniques to enhance detection efficiency and accuracy. The study enables a large-scale evaluation of structural vulnerability for numerous old buildings, providing ample time to arrange for structural retrofitting in those buildings that are at risk of significant damage or collapse during earthquakes.

Keywords: structural vulnerability detection, object recognition, seismic capacity assessment, old buildings, artificial intelligence

Procedia PDF Downloads 72
27189 Improved Multi–Objective Firefly Algorithms to Find Optimal Golomb Ruler Sequences for Optimal Golomb Ruler Channel Allocation

Authors: Shonak Bansal, Prince Jain, Arun Kumar Singh, Neena Gupta

Abstract:

Recently, nature-inspired algorithms have found widespread use in tough and time-consuming multi-objective scientific and engineering design optimization problems. In this paper, we present extended forms of the firefly algorithm to find optimal Golomb ruler (OGR) sequences. One of the major applications of OGRs is as an unequally spaced channel-allocation algorithm in optical wavelength division multiplexing (WDM) systems, in order to minimize the adverse four-wave mixing (FWM) crosstalk effect. The simulation results conclude that the proposed optimization algorithm has superior performance compared to existing conventional computing and nature-inspired optimization algorithms for finding OGRs in terms of ruler length, total optical channel bandwidth, and computation time.
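
For readers unfamiliar with the structure being optimized, the short sketch below checks the defining Golomb property (all pairwise mark differences distinct) and reports the ruler length; the example ruler is a known optimal 4-mark ruler, not an output of the proposed algorithm.

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """A Golomb ruler has all pairwise mark differences distinct; its length is the largest mark."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# A known optimal ruler with 4 marks (length 6); the optimizer searches for rulers like this
# that minimize length (and hence total optical channel bandwidth) for a given mark count.
ruler = [0, 1, 4, 6]
print(is_golomb_ruler(ruler), max(ruler))   # True 6
```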

Keywords: channel allocation, conventional computing, four–wave mixing, nature–inspired algorithm, optimal Golomb ruler, lévy flight distribution, optimization, improved multi–objective firefly algorithms, Pareto optimal

Procedia PDF Downloads 303
27188 The Attitudes of Senior High School Students Toward Work Immersion Programs of Nazareth School of National University

Authors: Kim Katherine Castillo, Nelson John Datubatang, Terrence Phillip Dy, Norelie Hampac, Reichen Crismark Martinez, Nina Faith Pantinople, Jose Dante Santos II, Marchel Ann Santos, Sophia Abigail Santiago, Zyrill Xsar San Juan, Aira Mae Tagao, Crystal Kylla Viagedor

Abstract:

The Work Immersion Program was implemented to help students gain abundant work-related experiences while on-site; additionally, the program aims to help students improve their competencies and interpersonal skills as they are given the option to join the workforce if they ever choose to do so after senior high school. The work immersion experience posed diverse challenges for students, spanning personal, financial, engagement, environmental, and equipment-related domains. These included the need for assistance in time management, transportation expenses, and procurement of materials. Furthermore, students faced difficulties in independent task completion and encountered suboptimal work environments. Addressing these multifaceted obstacles is crucial to optimize the educational outcomes of work immersion programs. In addition to the challenges, several other issues have been identified, including the absence of standardized work immersion programs across schools and industries, the challenges in securing appropriate work immersion placements, the necessity for enhanced monitoring and evaluation of program effectiveness, and the limited availability of field programs aligned with students' chosen courses. Furthermore, there is a lack of comprehensive information regarding the attitudes of Senior High School students toward work immersion programs within their respective schools. This study aims to investigate the attitudes of senior high school students at Nazareth School of National University towards work immersion programs, with a focus on identifying factors that influence their perception and participation, including collegiality and expectations. By exploring the students' attitudes, the research endeavors to enhance the school's work immersion programs and contribute to the overall educational experience of the students. This study addresses challenges related to work immersion programs, focusing on six subtopics: Work Immersion, Work Immersion in the Philippines, Students' Attitudes, Factors Affecting Students' Attitudes, Effectiveness of Work Immersion for Senior High School Students, and Students' Perception and Willingness to Participate. Using a descriptive research design, the study examines the attitudes of senior high school students at Nazareth School of National University in Manila. Data was collected from 100 students, representing different academic strands, through a 35-item researcher-made survey. Descriptive statistics, including measures of central tendency and variability, will be used to analyze the data using JASP, providing valuable insights into the students' attitudes toward work immersion.

Keywords: attitudes, challenges, educational outcomes, work immersion programs

Procedia PDF Downloads 79
27187 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports

Authors: A. Falenski, A. Kaesbohrer, M. Filter

Abstract:

Introduction: Risk assessments can be performed in various ways and in different degrees of complexity. In order to assess risks associated with imported foods additional information needs to be taken into account compared to a risk assessment on regional products. The present review is an overview on currently available best practise approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review has been performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string “(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk”). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search resulted in 49 publications, 17 of which contained information about import risks and risk assessments. Within these 19 cross references were identified to be of interest for the present study. These included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) and the Codex Alimentarius Commission were referenced in any of the IRAs, either for import of animals or for imports concerning foods, respectively. Interestingly, also a combination of both was used to assess the risk associated with the import of live animals serving as the source of food. Methods ranged from full quantitative IRAs using probabilistic models and dose-response models to qualitative IRA in which decision trees or severity tables were set up using parameter estimations based on expert opinions. Calculations were done using @Risk, R or Excel. Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handystatus II, FAOSTAT, Eurostat, TRACES), accessible on a national level (e.g., herd information) or only open to a small group of people (flight passenger import data at national airport customs office). In the IRAs, an uncertainty analysis has been mentioned in some cases, but calculations have been performed only in a few cases. Conclusion: The current state-of-the-art in the assessment of risks of imported foods is characterized by a great heterogeneity in relation to general methodology and data used. Often information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad-hoc, e.g. in case of a crisis situation.

Keywords: import risk assessment, review, tools, food import

Procedia PDF Downloads 292