Search results for: facial comparison
4746 Effect of Season on Semen Production of Nubian and Saanen Bucks in Sudan
Authors: E. A. Babiker, S. A. Makawi
Abstract:
The influence of the season (autumn, winter, and summer) on semen production in Nubian and Saanen bucks was studied. Seven mature bucks (4 Nubian and 3 Saanen) were used in this study to prepare semen samples, which were collected with an artificial vagina. The samples were extended in Tris-egg yolk-glycerol-glucose extender, frozen, and stored in liquid nitrogen at −196 °C for 48 hours. Straws were thawed in water at 37 °C for 15 seconds before sperm evaluation (post-thaw sperm motility). There was a significant seasonal variation in both semen quantity (volume, concentration, and the total number of spermatozoa per ejaculate) and quality (percentage of sperm motility, percentage of post-thaw sperm motility, and dead spermatozoa). Greater ejaculate volumes were observed during summer and autumn in comparison to winter. The highest sperm concentration values were observed during autumn, while the lowest were observed during summer. Higher values of sperm motility were observed during autumn in comparison to summer. The lowest percentages of dead spermatozoa were recorded during autumn, and the highest during summer, for both breeds of bucks. The influence of season on post-thaw sperm motility was significant: semen frozen during autumn and winter had the highest values, while lower mean values were observed during summer. The best semen was produced during autumn and winter, while poor semen quality was recorded during summer.
Keywords: season, Nubian, Saanen, semen production, Sudan
Procedia PDF Downloads 112
4745 The Application of Gel Dosimeters and Comparison with Other Dosimeters in Radiotherapy: A Literature Review
Authors: Sujan Mahamud
Abstract:
Purpose: A major challenge in radiotherapy treatment is to deliver a precise dose of radiation to the tumor with minimum dose to the healthy normal tissues. Recently, gel dosimetry has emerged as a powerful tool to measure three-dimensional (3D) dose distributions for complex delivery verification and quality assurance. These dosimeters act both as a phantom and as a detector, thus confirming the versatility of the dosimetry technique. The aim of the study is to review the application of gel dosimeters in radiotherapy and to compare them with one-dimensional (1D) and two-dimensional (2D) dosimeters. Methods and Materials: The study is based on the gel dosimetry literature. Secondary data and images have been collected from different sources such as guidelines, books, and the internet. Result: Analysis, verification, and comparison of data from the treatment planning system (TPS) show that the gel dosimeter is an excellent tool for measuring three-dimensional (3D) dose distributions. The TPS-calculated data were in very good agreement with the dose distribution measured by the ferrous gel. The overall uncertainty in the ferrous-gel dose determination was considerably reduced using an optimized MRI acquisition protocol and a new MRI scanner. The method developed for comparing measured gel data with calculated treatment plans, the gel dosimetry method, was proven to be useful for radiation treatment planning verification. For 1D and 2D film, the depth-dose and lateral-profile RMSDs are 1.8% and 2%, and the maximum deviations (Di−Dj) are 2.5% and 8%. For the 2D+ (3D) film-gel and plan-gel comparisons, RMSDstruct and RMSDstoch are 2.3% & 3.6% and 1% & 1%, and the systematic deviations are −0.6% and 2.5%. The study finds that the 2D+ (3D) film dosimeter performs better than the 1D and 2D dosimeters. Discussion: Gel dosimeters are a quality control and quality assurance tool that will be used in future clinical applications.
Keywords: gel dosimeters, phantom, rmsd, QC, detector
Procedia PDF Downloads 151
4744 An Experimental Investigation on Productivity and Performance of an Improved Design of Basin Type Solar Still
Authors: Mahmoud S. El-Sebaey, Asko Ellman, Ahmed Hegazy, Tarek Ghonim
Abstract:
Due to population growth, the need for healthy drinkable water has greatly increased. Consequently, and since the conventional sources of water are limited, researchers have devoted their efforts to oceans and seas for obtaining fresh drinkable water by thermal distillation. The current work is dedicated to the design and fabrication of a modified solar still model, as well as a conventional solar still for the sake of comparison. The modified still is a single-slope, double-basin solar still. The still consists of a lower basin with dimensions of 1000 mm x 1000 mm, which contains the seawater, as well as a top basin made of 4 mm acrylic that rests temporarily on supporting strips permanently fixed to the side walls. Ten equally spaced vertical glass strips of 50 mm height and 3 mm thickness were provided in the upper basin to keep the water stagnant. Window glass of 3 mm was used as the transparent cover, with a 23° inclination, at the top of the still. Furthermore, the performance evaluation and comparison of these two models in converting salty seawater into drinkable freshwater are introduced, analyzed, and discussed. The experiments were performed during the period from June to July 2018 at seawater depths of 2, 3, 4, and 5 cm. Additionally, the solar still models were operated simultaneously in the same climatic conditions to analyze the influence of the modifications on the freshwater output. It can be concluded that the modified design of the double-basin single-slope solar still shows the maximum freshwater output at all water depths tested. The results showed that the daily productivity for the modified and conventional solar stills was 2.9 and 1.8 dm³/m²·day, respectively, indicating an increase of 60% in freshwater production.
Keywords: freshwater output, solar still, solar energy, thermal desalination
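As a quick sanity check on the reported figures, the relative gain implied by the two daily yields can be computed directly (the two values are taken from the abstract; the rounding to "60%" is the authors'):

```python
# Productivity figures reported in the abstract (dm^3 per m^2 per day).
modified = 2.9
conventional = 1.8

# Relative increase of the modified still over the conventional one.
increase_pct = (modified - conventional) / conventional * 100
print(f"{increase_pct:.1f}%")  # about 61.1%, consistent with the reported ~60%
```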
Procedia PDF Downloads 135
4743 Induced Emotional Empathy and Contextual Factors like Presence of Others Reduce the Negative Stereotypes Towards Persons with Disabilities through Stronger Prosociality
Authors: Shailendra Kumar Mishra
Abstract:
In this paper, we focus on how contextual factors, like the physical presence of other perceivers, and the induced emotional empathy that then develops towards a person with disabilities may reduce automatic negative stereotypes and the resulting responses towards that person. In study 1, we demonstrated that negative attitudes based on negative stereotypes, assessed with the ATDP test questionnaire on a five-point Likert scale, are significantly less negative when participants were tested with a group of perceivers than when tested alone, by applying a 3 (positive, indifferent, and negative attitude levels) × 2 (physical presence vs. alone) factorial ANOVA design. In the second study, we demonstrate by regression analysis that, in the presence of other perceivers, even in a small group, participants showed more induced emotional empathy, through stronger prosociality, towards a high-distress target like a person with disabilities than towards other stigmatized persons, such as targets of racial or gender bias. Thus, the results show that automatic affective responses in the form of induced emotional empathy in the perceiver, and contextual factors like the presence of other perceivers, automatically activate stronger prosocial norms and egalitarian goals towards physically challenged persons in comparison to other stigmatized persons. This leads to less negative attitudes and behaviour towards persons with disabilities.
Keywords: contextual factors, high distress target, induced emotional empathy, stronger prosociality
Procedia PDF Downloads 138
4742 First Formaldehyde Retrieval Using the Raw Data Obtained from Pandora in Seoul: Investigation of the Temporal Characteristics and Comparison with Ozone Monitoring Instrument Measurement
Abstract:
In the present study, for the first time, we retrieved the formaldehyde (HCHO) Vertical Column Density (HCHOVCD) using Pandora instruments in Seoul, a megacity in northeast Asia, for the period between 2012 and 2014, and investigated the temporal characteristics of HCHOVCD. The HCHO Slant Column Density (HCHOSCD) was obtained using the Differential Optical Absorption Spectroscopy (DOAS) method. HCHOSCD was converted to HCHOVCD using the geometric Air Mass Factor (AMFG), as Pandora is a direct-sun measurement. HCHOVCD is low at 12:00 Local Time (LT) and high in the morning (10:00 LT) and late afternoon (16:00 LT), except for winter. The maximum (minimum) values of Pandora HCHOVCD are 2.68×10¹⁶ (1.63×10¹⁶), 3.19×10¹⁶ (2.23×10¹⁶), 2.00×10¹⁶ (1.26×10¹⁶), and 1.63×10¹⁶ (0.82×10¹⁶) molecules cm⁻² in spring, summer, autumn, and winter, respectively. In terms of seasonal variations, HCHOVCD was high in summer and low in winter, which implies that photo-oxidation plays an important role in HCHO production in Seoul. In comparison with the Ozone Monitoring Instrument (OMI) measurements, the HCHOVCDs from the OMI are lower than those from Pandora. The correlation coefficient (R) between monthly HCHOVCD values from Pandora and OMI is 0.61, with a slope of 0.35. Furthermore, to understand the HCHO mixing ratio within the Planetary Boundary Layer (PBL) in Seoul, we converted Pandora HCHOVCDs to HCHO mixing ratios in the PBL using several meteorological input data from the Atmospheric InfraRed Sounder (AIRS). Seasonal HCHO mixing ratios in the PBL converted from Pandora (OMI) HCHOVCDs are estimated to be 6.57 (5.17), 7.08 (6.68), 7.60 (4.70), and 5.00 (4.76) ppbv in spring, summer, autumn, and winter, respectively.
Keywords: formaldehyde, OMI, Pandora, remote sensing
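The abstract does not spell out the SCD-to-VCD conversion, but for a direct-sun instrument the geometric air mass factor is commonly taken as the secant of the solar zenith angle (SZA). The conversion step can be sketched as follows; the sec(SZA) form and the example numbers are assumptions for illustration, not values from the paper:

```python
import math

def geometric_amf(sza_deg: float) -> float:
    """Geometric air mass factor for a direct-sun measurement: sec(SZA)."""
    return 1.0 / math.cos(math.radians(sza_deg))

def scd_to_vcd(scd: float, sza_deg: float) -> float:
    """Convert a slant column density (SCD) to a vertical column density (VCD)."""
    return scd / geometric_amf(sza_deg)

# At a 60 degree solar zenith angle AMF_G = 2, so an HCHO SCD of
# 4.0e16 molecules/cm^2 maps to a VCD of 2.0e16 molecules/cm^2.
print(scd_to_vcd(4.0e16, 60.0))
```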
Procedia PDF Downloads 150
4741 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set
Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world; for those users, their social life, and even their practical life, has become intertwined with these platforms. Their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity in social networking has led to different problems, including the possibility of exposing incorrect information to users through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter; the determined factors are then applied using different classification techniques, a comparison of the results of these techniques is performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent research in the same area, and this comparison has confirmed the accuracy of the proposed study. We claim that this study can be continuously applied on the Twitter social network to automatically detect fake accounts; moreover, the study can be applied to different social network sites such as Facebook, with minor changes according to the nature of the social network, which are discussed in this paper.
Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques
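The selection step described above, training several classifiers on the same feature set and keeping the most accurate one, can be sketched in plain Python. The toy feature vectors, labels, and classifier names below are placeholders, not the paper's actual minimized feature set or trained models:

```python
from typing import Callable, Dict, List, Sequence, Tuple

Sample = Tuple[Sequence[float], int]  # (feature vector, label: 1 = fake, 0 = genuine)

def accuracy(clf: Callable[[Sequence[float]], int], data: List[Sample]) -> float:
    """Fraction of samples the classifier labels correctly."""
    return sum(1 for x, y in data if clf(x) == y) / len(data)

def pick_best(classifiers: Dict[str, Callable], data: List[Sample]) -> Tuple[str, float]:
    """Score every candidate classifier and return the most accurate one."""
    scores = {name: accuracy(clf, data) for name, clf in classifiers.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy stand-ins for the trained models and the evaluation data.
data: List[Sample] = [([0.9, 120], 1), ([0.1, 5], 0), ([0.8, 90], 1), ([0.2, 10], 0)]
classifiers = {
    "threshold_rule": lambda x: 1 if x[0] > 0.5 else 0,
    "always_fake": lambda x: 1,
}
print(pick_best(classifiers, data))  # ('threshold_rule', 1.0)
```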
Procedia PDF Downloads 416
4740 Contour Defects of Face with Hyperpigmentation
Authors: Afzaal Bashir, Sunaina Afzaal
Abstract:
Background: Facial contour deformities associated with pigmentary changes are of major concern for plastic surgeons, being both important and difficult to treat. No definite ideal treatment option is available that simultaneously addresses both the contour defect and the related pigmentation. Objectives: The aim of the current study is to compare the long-term effects of conventional adipose tissue grafting and ex-vivo expanded Mesenchymal Stem Cell-enriched adipose tissue grafting for the treatment of contour deformities with pigmentary changes on the face. Material and Methods: In this study, eighty (80) patients with contour deformities of the face with hyperpigmentation were recruited after informed consent. Two techniques, i.e., conventional fat grafting (C-FG) and fat grafts enriched with expanded adipose stem cells (FG-ASCs), were used to address the pigmentation. Both techniques were explained to the patients, and the enrolled patients were divided into two groups, i.e., C-FG and FG-ASCs, per the patients' choice and satisfaction. Patients of the FG-ASCs group were treated with fat grafts enriched with expanded adipose stem cells, while patients of the C-FG group were treated with conventional fat grafting. Patients were followed for 12 months, and improvement in face pigmentation was assessed clinically as well as measured objectively. Patient satisfaction was also documented as highly satisfied, satisfied, or unsatisfied. Results: The mean age of the patients was 24.42 (±4.49) years, and 66 patients were female. The forehead was involved in 61.20% of cases, the cheek in 21.20%, the chin in 11.20%, and the nose in 6.20% of cases. In the FG-ASCs group, the integrated color density (ICD) was decreased (1.08×10⁶ ±4.64×10⁵) as compared to the C-FG group (2.80×10⁵±1.69×10⁵). Patients treated with fat grafts enriched with expanded adipose stem cells were significantly more satisfied than patients treated with conventional fat grafting only.
Conclusion: Mesenchymal stem cell-enriched autologous fat grafting is a preferred option for improving contour deformities related to increased pigmentation of the facial skin.
Keywords: hyperpigmentation, color density, enriched adipose tissue graft, fat grafting, contour deformities, Image J
Procedia PDF Downloads 110
4739 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage
Abstract:
The current building sector is focused on the reduction of energy requirements, on renewable energy generation, and on the regeneration of existing urban areas. These targets need to be addressed with a systemic approach, considering several aspects simultaneously, such as climate conditions, lighting conditions, solar radiation, PV potential, etc. Solar access analysis is a well-known method for analyzing solar potential, but in recent years, simulation tools have provided more effective opportunities to perform this type of analysis, particularly in the early design stage. Nowadays, the study of solar access depends on how rapidly and easily simulation tools can be used during the design process. This study presents a comparison of three simulation tools, from the point of view of the user, with the aim of highlighting differences in the ease of use of these tools. Using a real urban context as a case study, three tools (Ecotect, Townscope, and Heliodon) are tested, performing models and simulations and examining the capabilities and output results of solar access analysis. The evaluation of the ease of use of these tools is based on detected parameters and features, such as the types of simulation, requirements for input data, types of results, etc. As a result, a framework is provided in which the features and capabilities of each tool are shown. This framework shows the differences among these tools in terms of functions, features, and capabilities. The aim of this study is to support users and to improve the integration of simulation tools for solar access with the design process.
Keywords: energy building design tools, solar access analysis, solar potential, urban planning
Procedia PDF Downloads 340
4738 Role of Financial Institutions in Promoting Micro Service Enterprises with Special Reference to Hairdressing Salons
Authors: Gururaj Bhajantri
Abstract:
The financial sector is the backbone of any economy, and it plays a crucial role in the mobilisation and allocation of resources. One of the main objectives of the financial sector is inclusive growth. The constituents of the financial sector are banks and financial institutions, which mobilise resources from the surplus sector and channelize them to the different needful sectors in the economy. The Micro, Small and Medium Enterprises sector in India covers a wide range of economic activities. These enterprises are classified on the basis of investment in equipment. Micro enterprises are divided into the manufacturing and services sectors. Micro service enterprises have an investment limit of up to ten lakhs on equipment. A hairdresser is one who not only cuts and shaves but also provides different types of haircuts, hairstyles, trimming, hair dye, massage, manicure, pedicure, nail services, colouring, facials, makeup application, waxing, tanning, and other beauty treatments. Hairdressing salons provide these services with the help of equipment, and they need an investment in equipment of not more than ten lakhs. Hence, they can be considered micro service enterprises. Starting a moderate hairdressing salon requires more than Rs 2,50,000. Moreover, hairdressers are unable to access organised finance; still, these individuals access finance from moneylenders at high rates of interest to make a living. The socio-economic conditions of hairdressers are not properly known. Hence, the present study sheds light on the role of financial institutions in promoting hairdressing salons. The study also focuses on the socio-economic background of individuals in hairdressing salons and the problems faced by them. The present study is based on primary and secondary data. Primary data were collected from hairdressing salons in Davangere city. Samples were selected with the help of simple random sampling techniques. The collected data were analysed and interpreted with the help of simple statistical tools.
Keywords: micro service enterprises, financial institutions, hairdressing salons, financial sector
Procedia PDF Downloads 205
4737 Digital Twin Smart Hospital: A Guide for Implementation and Improvements
Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar
Abstract:
This study investigates the application of Digital Twins (DT) in Smart Hospital Environments (SHE) through a bibliometric study and literature review, including a comparison with the principles of Industry 4.0. It aims to analyze the current state of the implementation of digital twins in clinical and non-clinical operations in healthcare settings, identifying trends and challenges and comparing these practices with Industry 4.0 concepts and technologies, in order to present a basic framework including stages and maturity levels. The bibliometric methodology will allow mapping the existing scientific production on the theme, while the literature review will synthesize and critically analyze the relevant studies, highlighting pertinent methodologies and results. Additionally, the comparison with Industry 4.0 will provide insights into how the principles of automation, interconnectivity, and digitalization can be applied in healthcare environments and operations, aiming at improvements in operational efficiency and quality of care. The results of this study will contribute to a deeper understanding of the potential of Digital Twins in Smart Hospitals, in addition to the future potential of the effective integration of Industry 4.0 concepts in this specific environment, presented through the practical framework. After all, the urgent need for the changes addressed in this article is undeniable, as is their contribution to human sustainability, as envisioned in SDG3 (Health and Well-being): ensuring that all citizens have a healthy life and well-being, at all ages and in all situations. We know that the validity of these relationships will be constantly discussed, and technology can always change the rules of the game.
Keywords: digital twin, smart hospital, healthcare operations, industry 4.0, SDG3, technology
Procedia PDF Downloads 53
4736 Milling Process of Rigid Flex Printed Circuit Board to Which Polyimide Covers the Whole Surface
Authors: Daniela Evtimovska, Ivana Srbinovska, Padraig O’Rourke
Abstract:
Kostal Macedonia faces the challenge of milling a rigid-flex printed circuit board (PCB). The PCB elaborated in this paper is made of FR4 material covered with polyimide over the whole surface on one side, including the tabs where the PCBs need to be separated. After milling only 1.44 meters, the updraft routing tool is no longer effective and causes polyimide debris on all PCB cuts if milling continues with the same tool. The updraft routing tool is used for every other product in Kostal Macedonia and is changed after milling 60 meters. Changing the tool adds 80 seconds to the cycle time. One solution is to use a laser-cutting machine, but buying a laser-cutting machine for cutting only one product does not make financial sense. The focus is therefore on finding an internal solution, among the options under review, to solve the issue of polyimide debris. In the paper, the design of the rigid-flex panel is described in depth. The downdraft routing tool is evaluated as a possible solution for the rigid-flex panel as a specific product. A comparison is made between the updraft and downdraft routing tools from technical and financial points of view, taking into consideration the customer requirements for the rigid-flex PCB. The results show that using the downdraft routing tool is the best solution in this case. This tool is 0.62 euros per piece more expensive than the updraft tool. The downdraft routing tool needs to be changed after milling 43.44 meters, in comparison with the updraft tool, which needs to be changed after milling only 1.44 meters. An analysis is made of which actions should be taken for further improvement and to maximize the service life of the downdraft routing tool.
Keywords: Kostal Macedonia, rigid flex PCB, polyimide, debris, milling process, up/down draft routing tool
Procedia PDF Downloads 193
4735 Morphological Comparison of the Total Skeletal of (Common Bottlenose Dolphin) Tursiops truncatus and (Harbour Porpoise) Phocoena phocoena
Authors: Onur Yaşar, Okan Bilge, Ortaç Onmuş
Abstract:
The aim of this study is to investigate and compare the locomotion structures, especially the bone structures, of two different dolphin species, the Common bottlenose dolphin Tursiops truncatus and the Harbour porpoise Phocoena phocoena, and to provide a more detailed and descriptive comparison. To compare the bone structures of the two study species, first the Spinous Process (SP), Inferior Articular Process (IAP), Laminae Vertebrae (LA), Foramen Vertebrae (FV), Corpus Vertebrae (CV), and Transverse Process (TP) were identified, and then the length of the Spinous Process (LSP), length of the Foramen Vertebrae (LFV), area of the Corpus Vertebrae (ACV), and length of the Transverse Process (LTP) were measured from the caudal view. The spine consists of a total of 61 vertebrae (7 cervical, 13 thoracic, 14 lumbar, and 27 caudal vertebrae) in the Common bottlenose dolphin, while the Harbour porpoise has 63 vertebrae (7 cervical, 12 thoracic, 14 lumbar, and 30 caudal vertebrae). In the Common bottlenose dolphin, epiphyseal ossification was found between the 21st and 27th caudal vertebrae, while in the Harbour porpoise it was observed in all vertebrae. Ankylosing spondylitis was observed in the C1 and C2 vertebrae in the Common bottlenose dolphin and in all cervical vertebrae between C1 and C6 in the Harbour porpoise. We argue that this difference in fused cervical vertebrae between the two species may be due to the fact that the neck movements of the Harbour porpoise in the vertical and horizontal axes are more limited than those of the Common bottlenose dolphin. We also think that as the number of fused cervical vertebrae increases, underwater maneuvers are performed at a wider angle; to test this idea, however, different dolphin species should be compared and different age groups investigated.
Keywords: anatomy, morphometry, vertebrae, common bottlenose dolphin, Tursiops truncatus, harbour porpoise, Phocoena phocoena
Procedia PDF Downloads 48
4734 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence
Authors: K. N. Kiran, S. Anish
Abstract:
It is well known that secondary flow losses account for about one-third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which can lead to increased secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and to devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effects of the end-wall fence on the flow field are calculated from RANS simulations using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulations. The aim of fencing in the blade passage is to get the maximum benefit from flow deviation and from destroying the passage vortex, in terms of loss reduction. It is observed that, for the present analysis, the fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage helps reduce the underturning by 7° in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure-side leg of the horseshoe vortex and helps break up the passage vortex. Computations are carried out for different fence heights whose curvature differs from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.
Keywords: boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow
Procedia PDF Downloads 349
4733 Fatigue Life Prediction under Variable Loading Based on a Non-Linear Energy Model
Authors: Aid Abdelkrim
Abstract:
A method of fatigue damage accumulation based upon the application of energy parameters of the fatigue process is proposed in the paper. The model is simple to use: it has no parameters to be determined and requires only knowledge of the W–N curve (W: strain energy density; N: number of cycles to failure) determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear models in the estimation of fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy has been studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model, especially when random loading is considered. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the 'damaged stress interaction damage rule' proposed here allows a better fatigue damage prediction than the widely used Palmgren–Miner rule, and a formula derived for random fatigue can be used to predict the fatigue damage and fatigue lifetime very easily. The results obtained by the model are compared with the experimental results and with those calculated by the most widely used fatigue damage model (Miner's model). The comparison shows that the proposed model presents a good estimation of the experimental results; moreover, the error is minimized in comparison to Miner's model.
Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading
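The baseline the paper compares against is the Palmgren–Miner linear rule, which accumulates damage as D = Σ nᵢ/Nᵢ and predicts failure at D = 1. A minimal sketch of that baseline follows; the cycle counts are illustrative, not the paper's test data, and the paper's own nonlinear energy model is not reproduced here:

```python
def miner_damage(blocks):
    """Palmgren-Miner linear accumulation: D = sum(n_i / N_i).

    blocks: (n_i, N_i) pairs, where n_i cycles are applied at a load
    level whose life is N_i cycles. Failure is predicted at D = 1.
    """
    return sum(n / N for n, N in blocks)

# Illustrative two-level loading: 40% of life spent at one level,
# then 50% at another, leaving D = 0.9 (just short of failure).
loading = [(4_000, 10_000), (25_000, 50_000)]
print(miner_damage(loading))
```

Note that the linear rule is insensitive to the order of the blocks, which is exactly the sequence effect the paper's recurrence formula is designed to capture.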
Procedia PDF Downloads 396
4732 Comparison of Various Policies under Different Maintenance Strategies on a Multi-Component System
Authors: Demet Ozgur-Unluakin, Busenur Turkali, Ayse Karacaorenli
Abstract:
Maintenance strategies can be classified into two types, reactive and proactive, with respect to the timing of failure and maintenance. If the maintenance activity is done after a breakdown, it is called reactive maintenance. On the other hand, proactive maintenance, which is further divided into preventive and predictive, focuses on maintaining components before a failure occurs to prevent expensive halts. Recently, the number of interacting components in a system has increased rapidly, and therefore the structure of systems has become more complex. This situation has made it difficult to make the right maintenance decisions, so determining effective decisions plays a significant role. In multi-component systems, many methodologies and strategies can be applied when a component or a system has already broken down, or when it is desired to identify and proactively avoid defects that could lead to future failure. This study focuses on the comparison of various maintenance strategies on a multi-component dynamic system. Components in the system are hidden, although the decision maker has partial observability, and they deteriorate over time. Several predefined policies under corrective, preventive, and predictive maintenance strategies are considered to minimize the total maintenance cost over a planning horizon. The policies are simulated via Dynamic Bayesian Networks on a multi-component system with different policy parameters and cost scenarios, and their performances are evaluated. Results show that when the difference between the corrective and proactive maintenance costs is low, none of the proactive maintenance policies is significantly better than corrective maintenance. However, when the difference is increased, at least one policy parameter for each proactive maintenance strategy gives a significantly lower cost than corrective maintenance.
Keywords: decision making, dynamic Bayesian networks, maintenance, multi-component systems, reliability
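The cost trade-off the study evaluates can be illustrated with a much-simplified simulation. The sketch below is not the paper's Dynamic Bayesian Network model: it is a single fully observable component with an age-dependent failure probability, and every cost, hazard, and policy parameter is a placeholder chosen for illustration:

```python
import random

def policy_cost(pm_age, horizon, cm_cost, pm_cost, hazard, seed=42):
    """Total maintenance cost of one deteriorating component.

    A failure (per-period probability grows with age as hazard * age)
    triggers a corrective repair at cm_cost; if pm_age is set, the
    component is preventively replaced at pm_cost once it reaches that age.
    """
    rng = random.Random(seed)
    age, total = 0, 0.0
    for _ in range(horizon):
        age += 1
        if rng.random() < min(1.0, hazard * age):
            total += cm_cost   # breakdown -> corrective maintenance
            age = 0
        elif pm_age is not None and age >= pm_age:
            total += pm_cost   # planned preventive replacement
            age = 0
    return total

corrective_only = policy_cost(None, 1000, cm_cost=100, pm_cost=20, hazard=0.02)
age_based_pm = policy_cost(5, 1000, cm_cost=100, pm_cost=20, hazard=0.02)
print(corrective_only, age_based_pm)
```

Sweeping `pm_age` and the `cm_cost`/`pm_cost` ratio in such a loop mirrors, in miniature, the study's comparison of policy parameters under different cost scenarios.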
Procedia PDF Downloads 129
4731 Management of Recurrent Temporomandibular Joint True Bony Ankylosis: A Case Report
Authors: Mahmoud A. Amin, Essam Taman, Ahmed Omran, Mahmoud Shawky, Ahmed Mekawy, Abdallah M. Kotkat, Saber Younes, Nehad N. Ghonemy, Amin Saad, Ezz-Aleslam, Abdullah M. Elosh
Abstract:
Introduction: The TMJ is a unique, complicated synovial joint that supports masticatory function by allowing the mandible to open and close the mouth. True ankylosis is a condition in which condylar movement is limited by a mechanical defect in the joint, whereas false ankylosis is a condition in which mandibular movement is restricted due to muscular spasm, myositis ossificans, or coronoid process hyperplasia. Ankylosis is characterized by the inability to open the mouth due to fusion of the TMJ condyle to the base of the skull as a result of trauma, infection, or systemic diseases such as rheumatoid arthritis (the most common) and psoriasis. Ankylosis causes facial asymmetry and affects the patient psychologically; it also causes speech problems, difficult mastication, poor oral hygiene, malocclusion, and other complications. The TMJ is a technically challenging joint; hence, TMJ ankylosis management is complicated. Case presentation: The patient, a 25-year-old male, presented to our maxillofacial clinic at the Damietta Faculty of Medicine, Al-Azhar University, with a complete inability to open the mouth and a history of difficulty with mouth breathing and eating. There was a history of a fall from height in 2006, and the patient had previously undergone corrective surgery, performed outside our hospital, with no improvement, because the ankylosis relapsed a short period after the previous operations; the inter-incisor distance was zero, so this condition required mandatory management. Clinical examination and radiological investigations were done after complete approval from the patient and his brother, and a tracheostomy was done for the patient before the operation. The patient underwent the operation in our hospital, and a drastic improvement in mouth opening was noticed, helping to restore the patient's physical and psychological health.
Keywords: temporomandibular joint, TMJ, ankylosis, mouth opening, physiotherapy, condylar plate
Procedia PDF Downloads 153
4730 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9,981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted as features for every SMILES representation in the dataset, so that a total of 189 features is used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores; additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
Keywords: random forest, machine learning, comparison, feature extraction
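The feature layout described in this abstract (166 MACCS bits + 20 RDKit properties + 3 structural properties = 189 features per molecule) can be sketched as follows. The `featurize` helper and its placeholder values are illustrative assumptions, not the authors' actual RDKit pipeline:

```python
# Sketch of the feature assembly and accuracy evaluation described in the
# abstract. The MACCS/RDKit values below are random placeholders, not real
# molecular descriptors.
import random

random.seed(0)

def featurize(smiles):
    """Concatenate the three feature groups: 166 MACCS bits,
    20 RDKit properties, 3 structural properties -> 189 features."""
    maccs = [random.randint(0, 1) for _ in range(166)]   # placeholder bits
    rdkit_props = [random.random() for _ in range(20)]   # placeholder values
    structural = [random.random() for _ in range(3)]     # placeholder values
    return maccs + rdkit_props + structural

def accuracy(y_true, y_pred):
    """Fraction of matching labels, as a plain accuracy score."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

features = featurize("CCO")  # ethanol, as an example SMILES string
print(len(features))         # 189 features per molecule
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

In the study itself these vectors would feed the five compared learners; the sketch only fixes the 189-dimensional input layout.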
Procedia PDF Downloads 40
4729 Effects of Allowance for Corporate Equity on the Financing Choices of Belgian Small and Medium-Sized Enterprises in a Crisis Context
Authors: O. Colot, M. Croquet, L. Cultrera, Y. Fandja Collince
Abstract:
The objective of our research is to evaluate the impact of the allowance for corporate equity (ACE) on the financial structure of Belgian SMEs, in order to highlight the potential existence of a fiscal leverage. To limit the biases linked to capital rationing following the financial crisis, we first compare the dynamic evolution of the financial structure of Belgian firms over the period 2006-2015, focusing on three sub-periods: 2006-2008, 2009-2012, and 2013-2015. We then give this comparison an international dimension by including SMEs from countries neighbouring Belgium (France, Germany, the Netherlands, and the United Kingdom) in which there is no ACE. This comparison allows a better understanding of the fiscal advantage linked to the ACE for firms evolving in a relatively unstable economic environment following the financial crisis of 2008. This research is relevant given the economic and political context in which Belgium operates and the very uncertain future of the Belgian ACE. The originality of this research is twofold: the long study period and the consideration of the effects of the financial and economic crisis on the financing structure of Belgian SMEs. The results of this research, even though they confirm the existence of a positive fiscal leverage of the tax deduction for venture capital on the financing structure of Belgian SMEs, do not allow the extent of this leverage to be clearly quantified. The comparative evolution of financing structures over the period 2006-2015 of Belgian, French, German, Dutch, and English SMEs shows a strong similarity in their overall evolution.
Keywords: allowance for corporate equity, Belgium, financial structure, small and medium-sized firms
Procedia PDF Downloads 202
4728 Exploring the Intrinsic Ecology and Suitable Density of Historic Districts Through a Comparative Analysis of Ancient and Modern Ecological Smart Practices
Authors: Hu Changjuan, Gong Cong, Long Hao
Abstract:
Although urban ecological policies and the public's aspiration for livable environments have expedited the pace of ecological revitalization, historic districts that have evolved through natural ecological processes often become obsolete and less habitable amid rapid urbanization. This raises a critical question: are historic districts inherently incapable of being ecological and livable? The thriving concept of ‘intrinsic ecology,’ characterized by its ability to transform city-district systems into healthy ecosystems with diverse environments, stable functions, and rapid restoration capabilities, holds potential for guiding the integration of ancient and modern ecological wisdom while supporting the dynamic involvement of cultures. This study explores the intrinsic ecology of historic districts from three aspects: 1) Population density: by comparing the population density before urban population expansion to that of the present day, determine a reasonable population density for historic districts. 2) Building density: using the ‘Space-mate’ tool for comparative analysis, form a spatial matrix to explore the intrinsic ecology of building density in Chinese historic districts. 3) Green capacity ratio: using ecological districts as control samples, conduct dual comparative analyses (related comparison and upgraded comparison) to determine the intrinsic ecological advantages of the two-dimensional and three-dimensional green volume of historic districts. The study informs a density optimization strategy that supports cultural, social, natural, and economic ecology, contributing to the creation of eco-historic districts.
Keywords: eco-historic districts, intrinsic ecology, suitable density, green capacity ratio
Procedia PDF Downloads 23
4727 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
An accurate nonlinear analysis of a deep beam resting on an elastic, perfectly plastic soil is carried out in this study. Specifically, a nonlinear finite element model for large deflections and moderate rotations of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on von Kármán theory, and the Newton-Raphson incremental iteration method is implemented in a Matlab code to solve the nonlinear equations of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young’s modulus of the beam, and the coefficient of variation and the correlation length of the soil’s coefficient of subgrade reaction. A comparison between the beam resting on the linear and on the nonlinear soil model is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlighted the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability
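The Newton-Raphson incremental iteration named in this abstract can be illustrated on a scalar residual. This is a generic sketch only: the toy stiffening-spring equilibrium equation below stands in for the discretized soil-beam system that the authors solve in Matlab.

```python
# Generic Newton-Raphson iteration of the kind used to solve nonlinear
# equilibrium equations R(u) = 0 (shown here for a scalar residual; a FE
# model applies the same update with a tangent stiffness matrix).
def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x
        x -= r / jacobian(x)   # incremental correction step
    raise RuntimeError("Newton-Raphson did not converge")

# Toy example: a stiffening "spring" with R(u) = k*u + c*u**3 - P = 0
k, c, P = 100.0, 500.0, 40.0
u = newton_raphson(lambda u: k * u + c * u**3 - P,
                   lambda u: k + 3 * c * u**2, x0=0.0)
print(round(k * u + c * u**3, 6))  # 40.0, equilibrium satisfied
```

The cubic term plays the role of the material nonlinearity; dropping it (`c = 0`) recovers the linear-soil case in one iteration.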
Procedia PDF Downloads 414
4726 Lexical Collocations in Medical Articles of Non-Native vs Native English-Speaking Researchers
Authors: Waleed Mandour
Abstract:
This study presents a multidimensional scrutiny of Benson et al.’s seven-category taxonomy of lexical collocations as used by Egyptian medical authors and their peers who are native English speakers. It investigates 212 medical papers, all published during a span of six years (from 2013 to 2018). The comparison is held against the medical research articles submitted by native speakers of English (25,238 articles in total, with over 103 million words) as derived from the Directory of Open Access Journals (a 2.7 billion-word corpus). The corpus compiled from the non-native speakers was annotated and marked up manually by the researcher according to the standards of Weisser. In terms of statistical comparisons, the conventional frequency-based analysis was deployed alongside the relevant criteria, such as association measures (AMs), in which LogDice is used as per the recommendation of Kilgarriff et al. for comparing large corpora. Despite the terminological convergence in the subject corpora, the comparison results confirm the previous literature, in that the non-native speakers’ compositions reveal limited ranges of lexical collocations in terms of their distribution. However, there is a ubiquitous tendency to overuse the NS-high-frequency multi-words in all lexical categories investigated. Furthermore, Egyptian authors, conversely to their English-speaking peers, tend to embrace more collocations denoting quantitative rather than qualitative analyses in their papers. This empirical work contributes to English for Academic Purposes (EAP) and English as a Lingua Franca in Academic settings (ELFA). In addition, there are pedagogical implications that would promote a better quality of medical research papers published in Egyptian universities.
Keywords: corpus linguistics, EAP, ELFA, lexical collocations, medical discourse
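The LogDice association measure referred to in this abstract has a simple closed form (Rychlý's formula, with a theoretical maximum of 14), which makes it robust for comparing corpora of different sizes. The word-pair counts below are invented for illustration:

```python
# LogDice association score: 14 + log2(2*f(x,y) / (f(x) + f(y))).
# Unlike raw frequency or MI, it does not grow with corpus size,
# which is why it suits cross-corpus collocation comparisons.
import math

def log_dice(f_xy, f_x, f_y):
    """f_xy: frequency of the pair; f_x, f_y: frequencies of each word."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

# Hypothetical counts: "blood" occurs 1200 times, "pressure" 900 times,
# and the collocation "blood pressure" 600 times.
score = log_dice(600, 1200, 900)
print(round(score, 3))  # 13.193
```

A pair that always co-occurs (f_xy = f_x = f_y) scores exactly 14, the measure's ceiling.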
Procedia PDF Downloads 130
4725 An Assessment on the Effect of Participation of Rural Woman on Sustainable Rural Water Supply in Yemen
Authors: Afrah Saad Mohsen Al-Mahfadi
Abstract:
In rural areas of developing countries, participation of all stakeholders in water supply projects is an important step towards further development. As most of the beneficiaries are women, it is important that they be involved to achieve successful and sustainable water supply projects. Women are responsible for the management of water both inside and outside the home, and often spend more than six hours a day fetching drinking water from distant sources. The problem is that rural women play only a minor role in the phases of water supply projects in rural Yemen. Therefore, this research aimed at analyzing the reasons for their lack of participation in projects, and at establishing in what way full participation, if achieved, could contribute to sustainable water supply projects in the rural mountainous areas of Yemen. Four water supply projects were selected as a case study in the Al-Della'a Alaala sub-district in the Al-Mahweet governorate; two of them were implemented by the Social Fund for Development (SFD), while the others were implemented by the General Authority for Rural Water Supply Projects (GARWSSP). Furthermore, the successful Al-Galba project, located in the Badan district of the Ibb governorate, was selected for comparison. Rural women's active participation in water projects has potential consequences including improved continuity and maintenance, equipment security, and improvement in the overall health and education status of these areas. The majority of respondents taking part in GARWSSP projects estimated that there is no reason to involve women in project activities. In the comparison project, in which a woman worked as a supervisor and implemented the project, all respondents indicated that the participation of women is vital for sustainability. Therefore, the results of this research are intended to stimulate rural women's participation in the mountainous areas of Yemen.
Keywords: assessment, rural woman, sustainability, water management
Procedia PDF Downloads 690
4724 Numerical Simulation of a Combined Impact of Cooling and Ventilation on the Indoor Environmental Quality
Authors: Matjaz Prek
Abstract:
The impact of three different combinations of cooling and ventilation systems on indoor environmental quality (IEQ) has been studied: chilled-ceiling cooling combined with displacement ventilation, cooling with a fan coil unit, and cooling with flat wall displacement outlets. All three combinations were evaluated against whole-body and local thermal comfort criteria as well as ventilation effectiveness. The comparison was made on the basis of numerical simulation with DesignBuilder and Fluent, carried out in two steps. First, the DesignBuilder software environment was used to model the building's thermal performance and to evaluate the interaction between the environment and the building; the heat gains of the building and of the individual space, as well as the heat loss across the boundary surfaces of the room, were calculated. In the second step, the Fluent software environment was used to simulate the response of the indoor environment, evaluating the interaction between building and occupant, using the simulation results obtained in the first step. Among the systems considered, the chilled-ceiling system in combination with displacement ventilation was found to be the most suitable, as it offers a high level of thermal comfort with adequate ventilation effectiveness. Fan coil cooling proved inadequate from the standpoint of thermal comfort, whereas flat wall displacement outlets were inadequate from the standpoint of ventilation effectiveness. The study showed the need to evaluate the indoor environment not solely from the energy-use point of view, but from the point of view of indoor environmental quality as well.
Keywords: cooling, ventilation, thermal comfort, ventilation effectiveness, indoor environmental quality, IEQ, computational fluid dynamics
Procedia PDF Downloads 187
4723 Hepatoprotective Assessment of L-Ascorbate 1-(2-Hydroxyethyl)-4,6-Dimethyl-1,2-Dihydropyrimidine-2-One on Exposure to Carbon Tetrachloride
Authors: Nail Nazarov, Alexandra Vyshtakalyuk, Vyacheslav Semenov, Irina Galyametdinova, Vladimir Zobov, Vladimir Reznik
Abstract:
Among hepatoprotective agents, pyrimidines are used as a means of stimulating protein synthesis and the recovery of liver cells in liver damage of toxic and infectious etiology. In experimental toxic hepatitis, hepatoprotective activity has been detected for some pyrimidine derivatives, and there are literature data on the hepatoprotective effect of oxymethyluracil. For analogs of the pyrimidine nucleobases, the drugs Methyluracilum and Pentoxyl, the hepatoprotective effect is only weakly expressed. According to American researchers, a broad spectrum of biological activity, including hepatoprotective properties, is shown by 2,4-dioxo-5-arylideneamino uracils. Under the influence of the medicinal preparation Xymedon (1-(beta-hydroxyethyl)-4,6-dimethyl-1,2-dihydro-2-oxopyrimidine), developed as a means of stimulating tissue regeneration, an increased activity of human liver microsomal oxidases has been revealed. Studies on a model of toxic liver damage in rats have shown a hepatoprotective effect of Xymedon and a stimulating impact on the recovery of liver tissue. The hepatoprotective properties of a new compound in the series of pyrimidine derivatives, L-ascorbate 1-(2-hydroxyethyl)-4,6-dimethyl-1,2-dihydropyrimidine-2-one, synthesized on the basis of the Xymedon preparation, were investigated for the first time in rats exposed to carbon tetrachloride. It was shown that the deviations of biochemical parameters from reference values and the severity of structural-morphological liver damage decreased in comparison with the control group when the compound was injected before carbon tetrachloride exposure. The hepatoprotective properties of the investigated compound were more pronounced than those of Xymedon.
Keywords: hepatoprotectors, pyrimidine derivatives, toxic liver damage, xymedon
Procedia PDF Downloads 424
4722 Effect of Threshold Configuration on Accuracy in Upper Airway Analysis Using Cone Beam Computed Tomography
Authors: Saba Fahham, Supak Ngamsom, Suchaya Damrongsri
Abstract:
Objective: To determine the optimal threshold of the Romexis software for airway volume and minimum cross-sectional area (MCA) analysis, using ImageJ as the gold standard. Materials and Methods: A total of ten cone-beam computed tomography (CBCT) images were collected. The airway volume and MCA of each patient were analyzed using the automatic airway segmentation function in the CBCT DICOM viewer (Romexis). Airway volume and MCA measurements were conducted on each CBCT sagittal view with fifteen different threshold values in the Romexis software, ranging from 300 to 1000. Duplicate DICOM files, in axial view, were imported into ImageJ for concurrent airway volume and MCA analysis as the gold standard. The measurements from Romexis and ImageJ were compared using a t-test with Bonferroni correction, and statistical significance was set at p < 0.003. Results: Concerning airway volume, thresholds of 600 to 850, as well as 1000, yielded results that were not significantly different from those obtained through ImageJ. Regarding MCA, thresholds from 400 to 850 in Romexis Viewer showed no significant difference from ImageJ. Notably, within the threshold range of 600 to 850, no statistically significant differences were observed in either the airway volume or the MCA analyses in comparison to ImageJ. Conclusion: This study demonstrated that using Planmeca Romexis Viewer 6.4.3.3 within the threshold range of 600 to 850 yields airway volume and MCA measurements with no statistically significant difference from measurements obtained through ImageJ. This outcome holds implications for diagnosing upper airway obstructions and for post-orthodontic surgical monitoring.
Keywords: airway analysis, airway segmentation, cone beam computed tomography, threshold
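A plausible reading of the p < 0.003 criterion in this abstract is a Bonferroni division of α = 0.05 across the fifteen tested thresholds (0.05/15 ≈ 0.0033). The sketch below illustrates that logic with a hand-rolled paired t statistic; the airway volumes are made-up numbers, not study data:

```python
# Bonferroni-corrected per-comparison alpha for 15 threshold comparisons,
# plus a paired t statistic comparing one Romexis threshold against the
# ImageJ gold standard. Volumes below are invented for illustration.
import statistics

alpha = 0.05 / 15          # per-comparison significance level
print(round(alpha, 4))     # 0.0033, matching the study's p < 0.003 cutoff

def paired_t(a, b):
    """Paired t statistic: mean(differences) / standard error."""
    d = [x - y for x, y in zip(a, b)]
    return statistics.mean(d) / (statistics.stdev(d) / len(d) ** 0.5)

romexis = [21.3, 19.8, 25.1, 18.9, 22.4]   # hypothetical volumes, cm^3
imagej  = [21.1, 20.0, 25.0, 19.2, 22.1]   # hypothetical gold standard
t = paired_t(romexis, imagej)
```

A |t| this small on 4 degrees of freedom would fall far short of the 0.0033 cutoff, i.e., "no significant difference" in the study's sense.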
Procedia PDF Downloads 44
4721 Alterations of Molecular Characteristics of Polyethylene under the Influence of External Effects
Authors: Vigen Barkhudaryan
Abstract:
The influence of external effects (γ-radiation, UV radiation, and high temperature) in the presence of air oxygen on structural transformations of low-density polyethylene (LDPE) has been investigated as a function of the polymer's thickness and of the intensity and dose of the external actions. The methods of viscosimetry, light scattering, turbidimetry, and gelation measurement were used for this purpose. The comparison of the influence of the external effects on LDPE shows that destruction and cross-linking of macromolecules proceed simultaneously under all kinds of external effects. A remarkable growth of the average molecular mass of LDPE with increasing irradiation dose and heat-treatment exposure was established. This growth was linear for the mass-average molecular mass and, at initial doses, is mainly the result of increased macromolecular branching. As a consequence, the macromolecular hydrodynamic volumes change, and therefore the dependence of the viscosity-average molecular mass on dose passes through a minimum at initial doses. A significant change in the molecular mass, size, and shape of LDPE macromolecules occurs under the influence of external effects. During γ-irradiation and heat treatment, the influence is limited only by the diffusion of oxygen; during UV irradiation, it is limited both by the diffusion of oxygen and by the penetration of radiation. Consequently, the molecular transformations are deeper and more evident in the case of γ-irradiation, since the polymer is transformed throughout its whole volume. It was also established that the mechanism of molecular transformations in the surface layer of the polymer distinctly differs from that in the deeper layers of the sample. A comparison of the results of these investigations allows us to conclude that the mechanisms of influence of the investigated external effects on polyethylene are similar.
Keywords: cross-linking, destruction, high temperature, LDPE, γ-radiation, UV radiation
Procedia PDF Downloads 316
4720 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study
Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener
Abstract:
Objectives and Goals: The stride-to-stride fluctuation in gait, known as gait variability, is a determinant of qualified locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial limb loss (TT), and 12 healthy individuals (HI) participated in the study. Gait characteristics, including mean step length, step length variability, ambulation index, and time on each foot, were evaluated on a treadmill. Participants walked at their preferred speed for six minutes; data from the 4th to the 6th minute were selected for statistical analysis to eliminate the learning effect. Results: According to the Kruskal-Wallis test, there were differences between the groups in intact-limb step length variability, time on each foot, ambulation index, and mean age (p < .05). Pairwise analyses showed differences between TT and TF in residual-limb variability (p = .041), time on the intact foot (p = .024), time on the prosthetic foot (p = .024), and ambulation index (p = .003), in favor of the TT group. There were differences between TT and HI in intact-limb variability (p = .002), time on the intact foot (p < .001), time on the prosthetic foot (p < .001), and ambulation index (p < .001), in favor of the HI group. There were differences between TF and HI in intact-limb variability (p = .001), time on the intact foot (p = .01), and ambulation index (p < .001), in favor of the HI group. There was a difference between the groups in mean age, as the HI group was younger (p < .05). The groups were similar in step length (p > .05) and in duration of prosthesis use among the individuals with lower limb loss (p > .05). Conclusions: This pilot study provided basic data about gait variability in individuals with traumatic lower limb loss. The results showed that, to evaluate gait differences between different amputation levels, long-duration gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthesis use or effective gait rehabilitation; indeed, all participants with lower limb loss had already been trained. The differences between TT and HI, and between TF and HI, may result from age-related features; therefore, an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended to generalize these results.
Keywords: lower limb loss, amputee, gait variability, gait analyses
Procedia PDF Downloads 280
4719 Cost-Benefit Analysis for the Optimization of Noise Abatement Treatments at the Workplace
Authors: Paolo Lenzuni
Abstract:
The cost-effectiveness of noise abatement treatments at the workplace has not yet received adequate consideration. Furthermore, most of the published work is focused on productivity, despite the poor correlation of this quantity with noise levels. There is currently no tool to estimate the social benefit associated with a specific noise abatement treatment, and accordingly no comparison among different options is possible. In this paper, we present an algorithm which has been developed to predict the cost-effectiveness of any planned noise control treatment in a workplace. This algorithm is based on the estimates of hearing threshold shifts included in ISO 1999, and on the compensations that workers are entitled to once their work-related hearing impairments have been certified. The benefits of a noise abatement treatment are estimated by means of the lower compensation costs paid to the impaired workers. Although such benefits have no real meaning in strictly monetary terms, they allow a reliable comparison between different treatments, since actual social costs can be assumed to be proportional to compensation costs. The existing European legislation on occupational exposure to noise mandates that the noise exposure level be reduced below the upper action limit (85 dBA). There is accordingly little or no motivation for employers to sustain the extra costs required to lower the noise exposure below the lower action limit (80 dBA). In order to make this goal more appealing for employers, the algorithm proposed in this work also includes an ad-hoc element that promotes actions which bring the noise exposure down below 80 dBA.
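The benefit logic just described, avoided compensation costs plus an ad-hoc incentive for dropping below the 80 dBA lower action limit, can be caricatured as follows. Every figure and the 1.2 bonus factor are invented for the sketch; the actual algorithm derives impairment risk from the ISO 1999 hearing threshold shift estimates:

```python
# Illustrative ranking of two noise-abatement options by benefit/cost,
# where benefit = avoided compensation costs, boosted by an ad-hoc bonus
# when the final exposure falls below the 80 dBA lower action limit.
# All numbers are hypothetical, not the paper's actual model.
def benefit_cost_ratio(avoided_compensation, treatment_cost, final_level_dba):
    bonus = 1.2 if final_level_dba < 80.0 else 1.0  # ad-hoc incentive term
    return bonus * avoided_compensation / treatment_cost

option_a = benefit_cost_ratio(50_000, 30_000, 82.0)  # stays above 80 dBA
option_b = benefit_cost_ratio(55_000, 40_000, 79.0)  # drops below 80 dBA
print(option_a > option_b)
```

The point of the bonus term is exactly this kind of re-ranking: it narrows the gap between an otherwise cheaper treatment and one that reaches the lower action limit.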
The algorithm has a twofold potential: 1) it can be used as a quality index to promote cost-effective practices; 2) it can be added to the existing criteria used by workers’ compensation authorities to evaluate the cost-effectiveness of technical actions, and thereby support dedicated employers.
Keywords: cost-effectiveness, noise, occupational exposure, treatment
Procedia PDF Downloads 322
4718 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications
Procedia PDF Downloads 3224718 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications
Authors: H. Hruschka
Abstract:
This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and two of the best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. The performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model, which is better than that of Dirichlet allocation, is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000. Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine, and the deep belief net; the hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket-size increases due to a promotion. Of course, recommendations based on better-performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research with appropriate extensions. To include predictors, especially marketing variables such as price, seems an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models
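The two-layer structure described in this abstract (observed purchase indicators v, hidden variables h, no within-layer connections) implies that the hidden units are conditionally independent given a basket, so their activation probabilities factorize. A minimal sketch with toy weights, not the paper's estimated parameters:

```python
# Minimal restricted Boltzmann machine inference step: one layer of
# observed purchase indicators v, one layer of hidden variables h, and
# no connections within a layer, so p(h | v) factorizes over hidden units.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_activation_probs(v, W, b):
    """p(h_j = 1 | v) = sigmoid(b_j + sum_i v_i * W[i][j])."""
    return [sigmoid(b[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(b))]

# Toy setup: 4 product categories, 2 hidden variables
W = [[1.0, -0.5],
     [0.8, -0.2],
     [-0.3, 0.9],
     [-0.1, 0.7]]
b = [-0.5, -0.5]
basket = [1, 1, 0, 0]          # categories 1 and 2 were purchased
probs = hidden_activation_probs(basket, W, b)
print(probs[0] > probs[1])      # True: unit 0 responds to this basket
```

In the three-layer deep belief net of the paper, these first-layer activations would in turn serve as the visible input of the second-layer restricted Boltzmann machine.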
Procedia PDF Downloads 199
4717 WWSE School Development in German Christian Schools Revisited: Organizational Development Taken to a Test
Authors: Marco Sewald
Abstract:
WWSE school development (Wahrnehmungs- und wertorientierte Schulentwicklung) comprises surveys of pupils, teachers, and parents, and enables schools to align their development to the requirements voiced by these three stakeholder groups. WWSE includes a derivative set of questions for Christian schools, meeting their specific needs. The research conducted on WWSE reflects contemporary questions of school development, examining how well the results of past WWSE surveys have been implemented in Christian schools in Germany. The research focused on questions connected to organizational development, including leadership and change management, contrasted with the two other areas of WWSE: human resources development and the development of school teaching methods. The chosen research methods are: (1) a quantitative triangulation of three sets of data: data from a past evaluation taken in 2011, data from a second evaluation of the same school conducted in 2014, and a structured survey among the teachers, headmasters, and members of the school board carried out within this research; (2) interviews with teachers and headmasters, conducted as a second stage to fortify the results of the quantitative first stage. Results: WWSE supports modern school development. While organizational development, leadership, and change management have proved to be important for modern school development, these areas are widely underestimated by teachers and headmasters, especially in comparison to the field of human resources development and, to an even greater extent, in comparison to the development of school teaching methods. The research concluded that additional efforts in the area of organizational development are necessary to meet modern demands, and it also shows which areas are the most important.
Keywords: school as a social organization, school development, school leadership, WWSE, Wahrnehmungs- und wertorientierte Schulentwicklung
Procedia PDF Downloads 226