Search results for: staff promotion effectiveness
402 Elimination of Mother to Child Transmission of HIV/AIDS: A Study of the Knowledge, Attitudes and Perceptions of Healthcare Workers in Abuja Nigeria
Authors: Ezinne K. Okoro, Takahiko Katoh, Yoko Kawamura, Stanley C. Meribe
Abstract:
HIV infection in children is largely a result of vertical transmission (mother to child transmission [MTCT]). Thus, elimination of mother to child transmission of HIV/AIDS is critical in eliminating HIV infection in children. In Nigeria, drawbacks such as limited pediatric screening, limited human capital, insufficient advocacy, and poor understanding of ART guidelines have impacted efforts at combating the disease, even though treatment services are free. The Prevention of Mother to Child Transmission (PMTCT) program relies on health workers who not only counsel pregnant women on first contact but can also competently provide HIV-positive pregnant women with accurate information about the PMTCT program, such as feeding techniques and drug adherence. In developing regions like Nigeria, where health care delivery faces many drawbacks, it becomes paramount to address poor PMTCT coverage by conducting a baseline assessment of the knowledge, practices, and perceptions related to HIV prevention amongst healthcare workers. A descriptive cross-sectional study was conducted amongst 250 health workers currently employed in health facilities in Abuja, Nigeria, where PMTCT services were offered with the capacity to carry out early infant diagnosis (EID) testing. Data were collected using a self-administered, pretested, structured questionnaire. This study showed that knowledge of PMTCT of HIV was poor (30%) among healthcare workers who offer this service day-to-day to pregnant women. When PMTCT practices were analyzed against the National PMTCT guidelines, over 61% of the respondents reported observing standard practices, and the majority (58%) had good attitudes towards caring for patients with HIV/AIDS. Although 61% of the respondents reported being satisfied with the quality of service being rendered, 63% reported not being satisfied with their level of knowledge. Predictors of good knowledge were job designation and level of educational attainment.
Health workers who were more satisfied with their working conditions and those who had worked longer in the PMTCT service were more likely to observe standard PMTCT practices. With over 62% of the healthcare workers suggesting that more training would improve the quality of service rendered, this is a strong pointer to stakeholders to consider a ‘healthcare worker-oriented approach’ when planning and conducting PMTCT training for healthcare workers. This in turn will increase pediatric ARV coverage and the knowledge and effectiveness of healthcare workers in carrying out appropriate PMTCT interventions, culminating in the reduction or elimination of HIV transmission to newborns.
Keywords: attitudes, HIV/AIDS, healthcare workers, knowledge, mother to child transmission, Nigeria, perceptions
Procedia PDF Downloads 206
401 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering
Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher
Abstract:
Land-cover and land-use change information are important because of their practical uses in various applications, including deforestation, damage assessment, disaster monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detecting change through optical remote sensing images is not a trivial task due to many factors, including the vagueness of the boundaries between changed and unchanged regions and the spatial dependence of pixels on their neighborhood. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most optical remote sensing images, the transition between the two clusters (change and no change) is overlapping, and existing methods are incapable of providing accurate cluster boundaries. In this regard, a methodology has been proposed which uses fuzzy c-means clustering to tackle the vagueness between the changed and unchanged classes by formulating soft boundaries between them. Furthermore, in order to exploit the neighborhood information of the pixels, input patterns are generated for each pixel from the bi-temporal images using 3×3, 5×5 and 7×7 windows. The between-image and within-image spatial dependence of the pixels on their neighborhood is quantified using the Pearson product-moment correlation and Moran’s I statistic, respectively. The proposed technique consists of two phases. First, between-image and within-image spatial correlation is calculated to utilize the information that pixels at different locations may not be independent. Second, the fuzzy c-means technique is used to produce two clusters from the input features, not only taking care of the vagueness between the changed and unchanged classes but also exploiting the spatial correlation of the pixels.
To show the effectiveness of the proposed technique, experiments are conducted on multispectral, bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 Thematic Mapper optical image of Los Angeles, California, is used in this study, covering a forest fire that continued from July until October 2009. The early and late forest fire optical remote sensing images were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on the earth’s surface) and is compared with the existing K-means clustering technique. Experimental results showed that the proposed technique performs better than the existing technique. The proposed technique is easily extendable to optical hyperspectral images and is suitable for many practical applications.
Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing
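The soft-clustering phase described in this abstract can be illustrated with a minimal sketch. This is an illustration only, not the authors' implementation: it clusters a simple per-pixel difference image with two-cluster fuzzy c-means and omits the Pearson/Moran's I spatial-correlation features and the windowed input patterns.

```python
import numpy as np

def fuzzy_cmeans_1d(x, m=2.0, iters=50):
    """Minimal two-cluster fuzzy c-means on a 1-D feature
    (here: per-pixel change magnitude between two acquisitions)."""
    centers = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # pixel-to-center distances
        u = 1.0 / d ** (2.0 / (m - 1.0))                   # fuzzy memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return u, centers

# Synthetic bi-temporal pair: a bright "burned" patch appears in image 2.
img1 = np.zeros((20, 20))
img2 = img1.copy()
img2[5:15, 5:15] = 1.0

diff = np.abs(img2 - img1).ravel()           # change-magnitude feature
u, centers = fuzzy_cmeans_1d(diff)
change_map = (u[:, np.argmax(centers)] > 0.5).reshape(img1.shape)
print(change_map[10, 10], change_map[0, 0])  # patch pixel vs. background
```

Thresholding the membership of the "change" cluster at 0.5 turns the soft boundary into the final binary change map.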
Procedia PDF Downloads 169
400 Creating Standards to Define the Role of Employment Specialists: A Case Study
Authors: Joseph Ippolito, David Megenhardt
Abstract:
In the United States, displaced workers, the unemployed and those seeking to build additional work skills are provided employment training and job placement services through a system of One-Stop Career Centers that are sponsored by the country’s 593 local Workforce Boards. During the period 2010-2015, these centers served roughly 8 million individuals each year. The quality of services provided at these centers rests upon professional employment specialists who work closely with clients to identify their job interests, to connect them to appropriate training opportunities, to match them with needed supportive social services and to guide them to eventual employment. Despite the crucial role these Employment Specialists play, currently there are no broadly accepted standards that establish what these individuals are expected to do in the workplace, nor are there indicators to assess how well an individual performs these responsibilities. Education Development Center (EDC) and the United Labor Agency (ULA) have partnered to create a foundation upon which curriculum can be developed that addresses the skills, knowledge and behaviors that Employment Specialists must master in order to serve their clients effectively. EDC is a non-profit, education research and development organization that designs, implements, and evaluates programs to improve education, health and economic opportunity worldwide. ULA is the social action arm of organized labor in Greater Cleveland, Ohio. ULA currently operates One-Stop Career Centers in both Cleveland and Pittsburgh, Pennsylvania. This case study outlines efforts taken to create standards that define the work of Employment Specialists and to establish indicators that can guide assessment of work performance. The methodology involved in the study has engaged a panel of expert Employment Specialists in rigorous, structured dialogues that analyze and identify the characteristics that enable them to be effective in their jobs. 
It has also drawn upon and integrated reviews of the panel’s work by more than 100 other Employment Specialists across the country. The results of this process are two documents that provide resources for developing training curricula for future Employment Specialists: an occupational profile of an Employment Specialist that offers a detailed articulation of the skills, knowledge and behaviors that enable individuals to be successful at this job, and a collection of performance-based indicators, aligned to the profile, which illustrate what the work responsibilities of an Employment Specialist 'look like' at four levels of effectiveness ranging from novice to expert. The method of occupational analysis used by the study has application across a broad number of fields.
Keywords: assessment, employability, job standards, workforce development
Procedia PDF Downloads 236
399 Semiconductor Properties of Natural Phosphate Application to Photodegradation of Basic Dyes in Single and Binary Systems
Authors: Y. Roumila, D. Meziani, R. Bagtache, K. Abdmeziem, M. Trari
Abstract:
Heterogeneous photocatalysis over semiconductors has proved its effectiveness in the treatment of wastewaters since it works under mild conditions. It has emerged as a promising technique, giving rise to less toxic effluents and offering the opportunity of using sunlight as a sustainable and renewable source of energy. Many compounds have been used as photocatalysts. Though synthesized ones are intensively used, they remain expensive, and their synthesis requires special conditions. We thus thought of implementing a natural material, a phosphate ore, due to its low cost and great availability. Our work is devoted to the removal of hazardous organic pollutants, which cause several environmental problems and health risks. Among them, dye pollutants occupy a large place. This work relates to the study of the photodegradation of methyl violet (MV) and rhodamine B (RhB), in single and binary systems, under UV light and sunlight irradiation. Methyl violet is a triarylmethane dye, while RhB is a heteropolyaromatic dye belonging to the xanthene family. In the first part of this work, the natural compound was characterized using several physicochemical and photo-electrochemical (PEC) techniques: X-ray diffraction, chemical and thermal analyses, scanning electron microscopy, UV-Vis diffuse reflectance measurements, and FTIR spectroscopy. The electrochemical and photoelectrochemical studies were performed with a Voltalab PGZ 301 potentiostat/galvanostat at room temperature. The structure of the phosphate material was well characterized. The PEC properties are crucial for drawing the energy band diagram, in order to suggest the formation of radicals and the reactions involved in the dyes’ photo-oxidation mechanism. The PEC characterization of the natural phosphate was investigated in a neutral solution (Na₂SO₄, 0.5 M). The study revealed the semiconducting behavior of the phosphate rock.
Indeed, the thermal evolution of the electrical conductivity was well fitted by an exponential-type law, and the electrical conductivity increases with raising temperature. The Mott–Schottky plot and the current-potential J(V) curves recorded in the dark and under illumination clearly indicate n-type behavior. From the photocatalysis results in single solutions, the changes in MV and RhB absorbance as a function of time show that practically all of the MV was removed after 240 min of irradiation. For RhB, complete degradation was achieved after 330 min, due to its complex and resistant structure. In binary systems, it is only after 120 min that RhB begins to be slowly removed, while about 60% of the MV is already degraded by then. Once nearly all of the MV in the solution has disappeared (after about 250 min), the remaining RhB is degraded rapidly. This behaviour differs from that observed in single solutions, where both dyes are degraded from the first minutes of irradiation.
Keywords: environment, organic pollutant, phosphate ore, photodegradation
Procedia PDF Downloads 132
398 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information for understanding speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between lip and non-lip regions. Several researchers have been developing methods to overcome these problems. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of the supportive technologies for hearing impaired or elderly people, and it is an active research area; the need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching system for hearing impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)).
The proposed system consists of three subsystems: a lip localization system, which localizes the lips in the digital input; a feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and a classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for lip movement feature extraction. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class number in the training and testing sets. Experiments will then be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing impaired persons as a language learning application. It can also be useful for normal hearing persons in noisy environments or conditions where they want to find out what was said by other people without hearing their voice.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
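As a rough sketch of the 2D-DCT feature-extraction step mentioned in this abstract: the low-frequency corner of the transformed lip block gives a compact shape descriptor. Only the transform stage is shown (the ACM lip localization, the LDA projection, and the SVM classifier are not), and the input is a synthetic stand-in for a localized lip region, not real data.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    M[0, :] = np.sqrt(1.0 / n)  # DC row has a different normalization
    return M

def dct2_features(block, keep=4):
    """2D-DCT of a square image block; keep the top-left keep x keep
    coefficients (low-frequency energy) as a compact feature vector."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    return coeffs[:keep, :keep].ravel()

lip_roi = np.outer(np.hanning(8), np.hanning(8))  # stand-in for a lip region
feats = dct2_features(lip_roi)
print(feats.shape)  # (16,)
```

In a full pipeline, these per-frame vectors would be stacked over the image sequence before the LDA/SVM stages.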
Procedia PDF Downloads 286
397 Planning a European Policy for Increasing Graduate Population: The Conditions That Count
Authors: Alice Civera, Mattia Cattaneo, Michele Meoli, Stefano Paleari
Abstract:
Although more equal access to higher education has been an objective of public policy for several decades, little is known about the effectiveness of alternative means for achieving this goal. Indeed, nowadays, high levels of graduate population can be observed in countries with both high and low levels of fees, or high and low levels of public expenditure on higher education. This paper surveys the extant literature, providing some background on the economic concepts of the higher education market, and reviews key determinants of demand and supply. A theoretical model of aggregate demand and supply of higher education is derived, with the aim of facilitating the understanding of the challenges in today’s higher education systems, as well as the opportunities for development. The model is validated on some exemplary case studies describing the different relationships between the level of public investment and the level of graduate population, and helps to derive general implications. In addition, using a two-stage least squares model, we build a macroeconomic model of supply and demand for European higher education. The model allows interpreting policies that shift either the supply or the demand for higher education, and takes contextual conditions into consideration with the aim of comparing divergent policies under a common framework. Results show that the same policy objective (i.e., increasing the graduate population) can be obtained by shifting either the demand function (i.e., by strengthening student aid) or the supply function (i.e., by directly supporting higher education institutions). Under this theoretical perspective, the level of tuition fees is irrelevant, and empirically we can observe high levels of graduate population in countries with both high (i.e., the UK) and low (i.e., Germany) levels of tuition fees.
In practice, this model provides a conceptual framework to help better understand what external conditions need to be considered when planning a policy for increasing the graduate population. Extrapolating a policy from results in different countries, under this perspective, is a poor solution when contingent factors are not addressed. The second implication of this conceptual framework is that policies addressing the supply or the demand function need to address different contingencies. In other words, a government aiming at increasing the graduate population needs to implement complementary policies, designing them according to the side of the market that is targeted. For example, a ‘supply-driven’ intervention, through direct financial support of higher education institutions, needs to address the issue of institutions’ moral hazard by creating incentives to supply higher education services efficiently. By contrast, a ‘demand-driven’ policy, providing student aid, needs to tackle the students’ moral hazard by creating incentives for responsible behavior.
Keywords: graduates, higher education, higher education policies, tuition fees
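The two-stage least squares (2SLS) machinery this abstract relies on can be sketched on simulated data. This is a generic illustration of the estimator, not the paper's actual model or data: the instrument z, the "demand shock" u, and all coefficient values below are assumptions chosen for the demonstration.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: regress endogenous regressors X on instruments Z (stage 1),
    then regress y on the fitted values (stage 2)."""
    # Stage 1: project X onto the instrument space
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Stage 2: OLS of y on the projected regressors
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

# Simulated demand for higher education: enrolment depends on price,
# but price is endogenous (shares the shock u); z is a cost shifter.
rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                       # instrument (e.g. funding shock)
u = rng.normal(size=n)                       # demand shock, correlated with price
price = 2.0 * z + u + rng.normal(size=n)
enrol = 10.0 - 1.5 * price + 2.0 * u + rng.normal(size=n)

X = np.column_stack([np.ones(n), price])
Z = np.column_stack([np.ones(n), z])
beta = two_stage_least_squares(enrol, X, Z)
print(beta)  # intercept near 10, price coefficient near -1.5
```

Naive OLS of `enrol` on `price` would be biased upward here because u enters both equations; instrumenting with z removes that bias, which is exactly why 2SLS fits a supply-demand setting.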
Procedia PDF Downloads 169
396 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data
Authors: Martin Pellon Consunji
Abstract:
Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and as speculative investments, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to make a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be credited to the inherent advantages that bots have over humans: processing large amounts of data, lacking emotions such as fear or greed, and predicting market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches is that limited historical data does not always determine the future, and that many market participants are still emotion-driven human traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most other well-established markets. Because of this, some traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method which uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to obtain a more accurate forecast of future market behavior and to account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies.
This study’s approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a live Bitcoin market trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results over a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms
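A toy version of the evolutionary loop described in this abstract is sketched below. Everything here is an assumption for illustration: the synthetic price and "sentiment" series, the 2-parameter bots, and the (mu + lambda)-style selection are stand-ins for the study's actual signals, fitness function, and population mechanics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic market: random-walk price plus a noisy "sentiment" signal
# that weakly previews the next move (a stand-in for real data feeds).
T = 300
moves = rng.choice([-1.0, 1.0], size=T)
price = 100.0 + np.cumsum(moves)
sentiment = np.roll(moves, -1) + rng.normal(0.0, 0.5, size=T)
momentum = np.concatenate([[0.0], np.sign(np.diff(price))])

def fitness(w):
    """Profit of a bot that goes long/short by the sign of a weighted signal mix."""
    pos = np.sign(w[0] * momentum[:-1] + w[1] * sentiment[:-1])
    return float(np.sum(pos * np.diff(price)))

# (mu + lambda)-style loop: keep the fittest bots, mutate them into children.
pop = rng.normal(size=(30, 2))
for _ in range(40):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = np.repeat(parents, 2, axis=0) + rng.normal(0.0, 0.2, size=(20, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(fitness(best))
```

Because the sentiment signal is the only predictive input here, selection should drive the population toward sentiment-heavy weights; in the real system the evolved parameters would be neural network weights rather than a 2-vector.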
Procedia PDF Downloads 124
395 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, such as healthcare, where it can be used to monitor patients’ emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said'. Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data; the existing labelled emotion datasets are highly subject to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim’s cube, which is a 3-dimensional projection of emotions.
Monoamine neurotransmitters are a type of chemical messenger in the brain that transmit signals on perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is the first step towards creating a connection between artificial intelligence and the chemistry of human emotions.
Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
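The three-component PCA step can be sketched as follows. Random vectors stand in for the network's learned emotion embeddings (their dimensionality is an assumption), and only the projection to 3-D coordinates is shown, not the alignment with Lovheim's cube itself.

```python
import numpy as np

def pca_3d(features):
    """Project learned emotion embeddings onto their top-3 principal axes,
    giving coordinates comparable to a 3-D emotion space."""
    X = features - features.mean(axis=0)          # center the data
    # SVD of the centered data: rows of Vt are principal directions
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:3].T

# Stand-in for embeddings produced by a trained network (e.g. 64-D)
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))
coords = pca_3d(emb)
print(coords.shape)  # (100, 3)
```

Each sample's 3-D coordinates could then be compared against the corner positions that the cube assigns to basic emotions.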
Procedia PDF Downloads 156
394 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture
Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko
Abstract:
Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50×50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet. As a result of the image analysis, 290 textural feature descriptors were obtained. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and the minimization of classification error probability combined with average correlation coefficients between the chosen features (POE + ACC). Each of them returned the ten features occupying the initial places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features, arranged in a different order. In both rankings, the 50% percentile (Perc.50%) was found in first place. The next selected features come from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT).
In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV) and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the set of features selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3% and PPV = 93.3%. Additionally, the values of the other two parameters, i.e., TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value was obtained (ACC = 90%), along with the highest values of TNR = 88.3% and NPV = 88.3%. The values of the other two parameters remained close to the highest: TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that the use of classification trees is an effective method for classifying texture features. This allows identifying the condition of the spongy tissue in healthy cases and those with osteoporosis.
Keywords: classification, feature selection, texture analysis, tree algorithms
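For reference, the five evaluation parameters used in this abstract are simple ratios over the binary confusion matrix. The sketch below uses a hypothetical confusion matrix (not the study's actual counts) chosen to reproduce an ACC of 90% on 120 balanced samples.

```python
def confusion_metrics(tp, fn, tn, fp):
    """ACC, TPR (sensitivity), TNR (specificity), PPV and NPV
    from the four cells of a binary confusion matrix."""
    acc = (tp + tn) / (tp + tn + fp + fn)  # overall accuracy
    tpr = tp / (tp + fn)                   # sensitivity
    tnr = tn / (tn + fp)                   # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    return acc, tpr, tnr, ppv, npv

# Hypothetical split of 60 osteoporotic and 60 healthy samples
acc, tpr, tnr, ppv, npv = confusion_metrics(tp=56, fn=4, tn=52, fp=8)
print(round(acc, 3), round(tpr, 3), round(tnr, 3))  # 0.9 0.933 0.867
```

With balanced classes, ACC is simply the mean of TPR and TNR, which is why the quoted ACC of 90% sits between the quoted sensitivity and specificity.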
Procedia PDF Downloads 180
393 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning
Authors: Shayan Mohajer Hamidi
Abstract:
Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. 
Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning
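The knowledge-distillation core that FKD builds on can be sketched as a standard softened-softmax objective. This is a generic Hinton-style loss in NumPy, an assumption for illustration: the federated orchestration, the coded teacher, and the FT/FitNet/CC/SP/RKD variants compared in the abstract are not shown.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of cross-entropy on hard labels and KL divergence
    to the teacher's temperature-softened outputs (scaled by T^2)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * hard + (1 - alpha) * (T ** 2) * kl))

student = np.array([[2.0, 0.5, -1.0]])
teacher = np.array([[3.0, 1.0, -2.0]])
loss = distillation_loss(student, teacher, labels=np.array([0]))
perfect = distillation_loss(teacher, teacher, labels=np.array([0]))
print(loss > perfect)  # matching the teacher zeroes the soft-target term
```

In the federated setting, only quantities derived from such soft targets (rather than raw data or full model weights) would travel between clients and server, which is the source of FKD's communication and privacy benefits.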
Procedia PDF Downloads 76
392 Motivation of Doctors and its Impact on the Quality of Working Life
Authors: E. V. Fakhrutdinova, K. R. Maksimova, P. B. Chursin
Abstract:
At the present stage of societal progress, health care is an integral part of the economic system as well as the social one; in the latter case, medicine is a major component of a number of basic and necessary social programs. Since the foundation of the health system is its highly qualified health professionals, it is a logical proposition that increasing doctors' professionalism improves the effectiveness of the system as a whole. The professionalism of the doctor is a collection of many components, with an essential role played by such personal-psychological factors as honesty, willingness and desire to help people, and motivation. A number of researchers consider motivation as an expression of basic human needs that have passed through the 'filter' of the worldview and values learned by the individual in the process of socialization, leading to actions designed to achieve the expected result. From this point of view, a number of researchers propose the following classification of a highly skilled employee's needs: 1. the need for confirmation of competence (setting goals that match one's professionalism and receiving positive emotions from achieving them); 2. the need for independence (the ability to make one's own choices in contentious situations arising while carrying out specialist functions); 3. the need for ownership (in the case of health care workers, belonging to the profession and, accordingly, to the high status of the doctor in the eyes of the public). Nevertheless, it is important to understand that in a market economy a significant motivator for physicians (both legal entities and natural persons) is to maximize their own profit. In the case of health professionals, this dual motivational structure creates an additional contrast with the public image of the ideal physician: usually an altruistically minded person who thinks not primarily about their own benefit, but about assisting others.
In this context, the question of the real motivation of health workers deserves special attention. A survey conducted by the American researcher Harrison Terni for the magazine "Med Tech" in 2010 gathered the opinions of more than 200 medical students beginning their courses: the primary motivation for choosing the profession was the "desire to help people", and only 15% said that they wanted to become a doctor "to earn a lot". From the point of view of most classical theories of motivation, this trend can be called positive, as intangible incentives are more effective. However, it is likely that over time the respondents' opinions may shift toward mercantile motives. Thus, it is logical to assume that a well-designed system for motivating doctors' labor should be based on motivational foundations laid during training in higher education. Keywords: motivation, quality of working life, health system, personal-psychological factors, motivational structure
Procedia PDF Downloads 360391 Applying the View of Cognitive Linguistics on Teaching and Learning English at UFLS - UDN
Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran
Abstract:
In the view of Cognitive Linguistics (CL), human beings draw on knowledge and experience of things and events when expressing concepts, especially in daily life. The human conceptual system is considered fundamentally metaphorical in nature. It is also said that the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, and CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought has, of course, been addressed by many scholars; CL, however, strongly emphasizes specific features of this relation. Through experience, we acquire knowledge of life. Familiar, concrete things serve as ideal source domains, and we make use of all aspects of such a domain in metaphorically understanding abstract targets. The paper reports on applying this theory to pragmatics lessons for English-major students at the University of Foreign Language Studies - The University of Da Nang, Vietnam. We conducted the study with two groups of third-year students taking English pragmatics lessons. To clarify the study, data from these two classes were collected and analyzed from the perspectives of CL and of traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from 50 students undergoing English pragmatics lessons. One group was taught how to transfer the meanings of expressions in daily life from the view of CL, while the other group used the traditional view. The research indicated that both approaches had a significant influence on students' English translating and interpreting abilities; however, the traditional approach had little effect on students' understanding, whereas the CL view had a considerable impact. The study compared the CL and traditional teaching approaches to identify the benefits and challenges associated with incorporating CL into the curriculum. 
It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL in teaching and learning English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding, and they explore how CL can enhance cognitive processes in language learning in general and in teaching English pragmatics to third-year students at the UFLS - UDN, Vietnam, in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts. Keywords: cognitive linguistics, Lakoff and Johnson, pragmatics, UFLS
Procedia PDF Downloads 37390 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including the recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures like electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in the simulations and the repulsion distance in the experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were also examined, offering valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and the field gradient: the magnitude and distributional heterogeneity of the eddy current and the magnetic field gradient increase with particle size in eddy current separation. Based on this, we further found that increasing the curvature of the magnetic field lines within particles can also increase the eddy current force, providing an optimized method for improving the separation efficiency of fine particles. 
By combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can be chosen accordingly. In summary, these results can guide the design and optimization of ECS and also expand the application areas of ECS. Keywords: eddy current separation, particle size, numerical simulation, metal recovery
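The simulation-versus-experiment correlation check described in the abstract above can be sketched in a few lines. This is a minimal illustration only: the paired values below are invented placeholders, not the study's data, and `pearson_r` is a generic helper, not the authors' code.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired observations for particles of increasing size:
# simulated eddy current force vs. measured repulsion distance.
force = [0.8, 1.5, 2.9, 4.2, 6.0]          # simulation output (arbitrary units)
distance = [11.0, 19.5, 33.0, 47.5, 66.0]  # experimental repulsion (arbitrary units)

r = pearson_r(force, distance)
print(f"Pearson r = {r:.3f}")  # a value near +1 indicates strong agreement
```

A coefficient close to +1, as in this toy example, is the kind of result that would support using the simulated force as a proxy for experimental separability.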
Procedia PDF Downloads 91389 Deasphalting of Crude Oil by Extraction Method
Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov
Abstract:
Asphaltenes are a heavy fraction of crude oil, known in the oilfield for their ability to plug wells, surface equipment, and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). Metal content was analyzed by ICP-MS, and the spectral features of deasphalting were characterized by FTIR. High asphaltene content in crude oil reduces the efficiency of refining processes. Moreover, the high distribution of heteroatoms (e.g., S, N) in asphaltenes causes further problems: environmental pollution, corrosion, and poisoning of the catalyst. The main objective of this work is to study the effect of the deasphalting process on crude oil in order to improve its properties and the efficiency of downstream processing. Solvent extraction experiments using organic solvents were carried out on crude oil from JSC "Pavlodar Oil Chemistry Refinery". Experimental results show that the deasphalting process also decreases the Ni and V content of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide, and mercaptans is absorption with chemical reagents directly in the oil residue during production, given that asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective refining. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-bearing asphaltene part. For this, the oil is pretreated by deasphalting, because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e., removal of V/Ni and organic compounds with heteroatoms along with the asphaltenes. Intramolecular complexes are relatively well researched, for example the porphyrin complexes of vanadyl (VO2+) and nickel (Ni). 
ICP-MS studies of V/Ni determined the effect of the different deasphalting solvents on metal extraction at the deasphalting stage and identified the best organic solvent. Cyclohexane (C6H12) proved best for producing deasphalted oil (DAO), removing 51.2% of V and 66.4% of Ni according to ICP-MS. This paper also presents the results of a study of the physical and chemical properties and FTIR spectral characteristics of the oil, with a view to establishing its hydrocarbon composition. The information about the specifics of the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics. These can be useful in considering the origin and geochemical conditions of the accumulation of the oil, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes. The role of deasphalted crude oil fractions in asphaltene stability is described. Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy
Procedia PDF Downloads 242388 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern about the negative impacts of anthropogenic noise on the ocean's ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships but also about how ship noise is distributed in time and space within the habitats of concern. Marine mammals, but also fish, sea turtles, larvae, and invertebrates, depend largely on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, or defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, called the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean-Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision-support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean-Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 sites, harbors, etc.) or global (Particularly Sensitive Sea Area) scales; seasonal (regulation over a period of time) or permanent; partial (focused on some maritime activities) or complete (all maritime activities); etc. 
Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean-Planner helps users decide on the most effective measure to apply to maintain or restore the biodiversity and functioning of coastal seabed ecosystems, maintain a good state of conservation of sensitive areas, and maintain or restore populations of marine species. Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
Procedia PDF Downloads 123387 Developing a Quality Mentor Program: Creating Positive Change for Students in Enabling Programs
Authors: Bianca Price, Jennifer Stokes
Abstract:
Academic and social support systems are critical for students in enabling education; these support systems have the potential to enhance the student experience whilst also serving a vital role in student retention. In the context of international moves toward widening university participation, Australia has developed enabling programs designed to support underrepresented students in accessing higher education. The purpose of this study is to examine the effectiveness of a mentor program based within an enabling course. The study evaluates how the mentor program supports new students to develop social networks, improve retention, and increase satisfaction with the student experience. Guided by Social Learning Theory (SLT), this study highlights the benefits that can be achieved when students engage in peer-to-peer mentoring for both social and learning support. Whilst traditional peer mentoring programs are heavily based on face-to-face contact, the present study explores the difference between mentoring provided face-to-face and mentoring that takes place in a virtual space, specifically a virtual community in the shape of a Facebook group. This paper explores the differences between these two methods of mentoring within an enabling program. The first method involves traditional face-to-face mentoring provided by alumni students who willingly return to the learning community to provide social support and guidance for new students. The second method requires alumni mentor students to voluntarily join a Facebook group specifically designed for enabling students. Using this virtual space, alumni students provide advice, support, and social commentary on how to be successful within an enabling program. Whilst vastly different, both of these mentoring approaches provide students with the support tools needed to enhance their student experience and improve their transition into university. 
To evaluate the impact of each mode, this study uses mixed methods, including a focus group with mentors, in-depth interviews, and netnography of the Facebook group "Wall". Netnography is an innovative qualitative research method used to interpret information available online in order to better understand and identify the needs and influences that affect the users of an online space. Through examining the data, this research reflects upon best practice for engaging students in enabling programs. Findings support making both face-to-face and online mentoring available to assist enabling students in making a positive transition into undergraduate university studies. Keywords: enabling education, mentoring, netnography, social learning theory
Procedia PDF Downloads 122386 Water Management of Polish Agriculture and Adaptation to Climate Change
Authors: Dorota M. Michalak
Abstract:
The agricultural sector, owing to the growing demand for food and over-exploitation of the natural environment, contributes to deepening climate change on the one hand, while on the other hand shrinking freshwater resources, a negative effect of climate change, threaten the food security of every country. Therefore, adaptation measures to climate change should take effective water management into account and seek solutions that ensure food production at an unchanged or higher level while neither burdening the environment nor worsening the negative consequences of climate change. The problems of Poland's water management result not only from relatively small natural water resources but, to a large extent, from the low efficiency of their use. Appropriate agricultural practices and state-level solutions in this field can yield significant benefits in terms of economical water management in agriculture, providing a greater amount of water that could also be used for other purposes, including environmental protection. The aim of the article is to determine the level of use of water resources in Polish agriculture and the advancement of measures aimed at adapting Polish agriculture to climate change in the field of water management. The study provides knowledge about Polish legal regulations and water management tools; the shaping of the water policy of Polish agriculture against the background of EU countries and other sources of water intake; and the measures, run by state budget institutions, that support Polish agricultural holdings in the effective management of water resources. To achieve these goals, the author used research tools such as the analysis of existing sources and a survey conducted among five groups of entities, i.e. 
agricultural advisory centers and departments; agricultural, rural, and environmental protection departments; regional water management boards; provincial agricultural chambers; and offices for the restructuring and modernization of agriculture. The main conclusion of the analyses is the low use of water in Polish agriculture relative to other EU countries, to other sources of water intake in Poland, and to irrigation. The analysis also reveals another problem, namely the lack of reporting and data collection, which is extremely important from the point of view of the effectiveness of adaptation measures to climate change. The survey results indicate a very low level of support from government institutions in the implementation of adaptation measures to climate change and in the water management of Polish farms. Basic problems of the climate change adaptation policy with regard to water management in Polish agriculture include a lack of knowledge regarding climate change, the possibilities for adaptation, the available tools, and ways to rationalize the use of water resources. They also include the lack of orderly procedures and of a clear assignment of responsibility to the proper territorial units, non-functioning channels of information flow, and, in practice, low effects. Keywords: water management, adaptation policy, agriculture, climate change
Procedia PDF Downloads 142385 Optimising Apparel Digital Production in Industrial Clusters
Authors: Minji Seo
Abstract:
Fashion stakeholders are becoming increasingly aware of technological innovation in manufacturing. In 2020, the COVID-19 pandemic caused transformations in working patterns, such as working remotely rather than commuting. To enable smooth remote working, 3D fashion design software is being adopted as the latest trend in design and production. The majority of fashion designers, however, are still resistant to this change. Previous studies on 3D fashion design software solely highlighted the beneficial and detrimental factors of adopting design innovations; they lacked research on the relationship between resistance factors and the adoption of innovation, and they fell short of exploring the perspectives of the users of these innovations. This paper aims to investigate the key drivers of and barriers to employing 3D fashion design software, as well as to explore the challenges faced by designers. It also touches on governmental support for digital manufacturing in Seoul, South Korea, and London, the United Kingdom. By conceptualising local support, this study aims to provide a new path for industrial clusters to optimise digital apparel manufacturing. The study uses a mixture of quantitative and qualitative approaches. Initially, it reflects a survey of a sample of 350 fashion designers on the innovation resistance factors of 3D fashion design software and the effectiveness of local support. In-depth interviews with 30 participants provide a better understanding of designers' views of the benefits of and obstacles to employing 3D fashion design software. The key findings of this research concern the main barriers to employing 3D fashion design software in fashion production. The cultural characteristics and interview results are used to interpret the survey results. The findings of the quantitative data identify the main resistance factors to adopting design innovations. 
The dominant obstacles are the cost of the software and its complexity, lack of customers' interest in innovation, lack of qualified personnel, and lack of knowledge. The main difference between Seoul and London lies in attitudes towards government support. Compared to fashion designers in the UK, South Korean designers emphasise that government support is highly relevant to employing 3D fashion design software. The contrast between top-down and bottom-up policy implementation approaches distinguishes the perception of government support: compared to the top-down policy approach in South Korea, British fashion designers, accustomed to bottom-up approaches, are reluctant to receive government support. The findings of this research will contribute to generating solutions for local government and to optimising the use of 3D fashion design software in fashion industrial clusters. Keywords: digital apparel production, industrial clusters, innovation resistance, 3D fashion design software, manufacturing, innovation, technology, digital manufacturing, innovative fashion design process
Procedia PDF Downloads 102384 Schema Therapy as Treatment for Adults with Autism Spectrum Disorder and Comorbid Personality Disorder: A Multiple Baseline Case Series Study Testing Cognitive-Behavioral and Experiential Interventions
Authors: Richard Vuijk, Arnoud Arntz
Abstract:
Rationale: To our knowledge, the treatment of comorbid personality disorders in adults with autism spectrum disorder (ASD) is understudied and still in its infancy: we do not know whether treatments for personality disorders are applicable to adults with ASD. In particular, it is unknown whether patients with ASD benefit from the experiential techniques that are part of schema therapy, which was developed for the treatment of personality disorders. Objective: The aim of the study is to investigate the efficacy of a schema-mode-focused treatment with adult clients with ASD and comorbid personality pathology (i.e., at least one personality disorder). Specifically, we investigate whether they can benefit from both cognitive-behavioral and experiential interventions. Study design: A multiple baseline case series study. Study population: Adult individuals (age > 21 years) with ASD and at least one personality disorder. Participants will be recruited from the Sarr expertise center for autism in Rotterdam. The study requires 12 participants. Intervention: The treatment protocol consists of 35 weekly sessions, followed by 10 monthly booster sessions. A multiple baseline design will be used, with baseline length varying from 5 to 10 weeks, with weekly supportive sessions. After baseline, a 5-week exploration phase follows, with weekly sessions during which current and past functioning, psychological symptoms, and schema modes are explored and information about the treatment is given. Then 15 weekly sessions with cognitive-behavioral interventions and 15 weekly sessions with experiential interventions will be given. Finally, there will be a 10-month follow-up phase with monthly booster sessions. Participants are randomly assigned to baseline length, respond weekly during treatment and monthly at follow-up on the belief strength of negative core beliefs (by VAS), and fill out the SMI, SCL-90, and SRS-A 7 times: during the screening procedure (i.e. 
before baseline), after baseline, after exploration, after the cognitive-behavioral interventions, after the experiential interventions, and at the 5- and 10-month follow-ups. The SCID-II will be administered during the screening procedure (i.e., before baseline) and at the 5- and 10-month follow-ups. Main study parameters: The primary study parameter is negative core beliefs. Secondary study parameters include schema modes, personality disorder manifestations, psychological symptoms, and social interaction and communication. Discussion: To the best of the authors' knowledge, no study has yet been published on the application of schema-mode-focused interventions in adult patients with ASD and comorbid PD(s). This study offers the first systematic test of the application of schema therapy for adults with ASD. The results will provide initial evidence for the effectiveness of schema therapy in treating adults with both ASD and PD(s) and are intended to inform the future development and implementation of therapeutic interventions for this group. Keywords: adults, autism spectrum disorder, personality disorder, schema therapy
Procedia PDF Downloads 239383 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine
Authors: Sahar Heidary
Abstract:
Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy for healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a pivotal role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. On the other hand, radiotherapy, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combat cancer comprehensively. The continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy. 
Radiotherapy, in turn, finds applications in non-cancerous conditions like benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide. Keywords: radiology, radiotherapy, medical imaging, cancer treatment
Procedia PDF Downloads 70382 The Impact of the Macro-Level: Organizational Communication in Undergraduate Medical Education
Authors: Julie M. Novak, Simone K. Brennan, Lacey Brim
Abstract:
Undergraduate medical education (UME) curricula notably address micro-level communication (e.g., patient-provider, intercultural, inter-professional), yet frequently under-examine the role and impact of organizational communication, a more macro-level concern. Organizational communication, however, functions as a foundation, operating through the systemic structures of an organization; it thereby serves as a hidden curriculum and influences learning experiences and outcomes. Yet little research exists that fully examines how students experience organizational communication while in medical school, and extant literature and best practices provide insufficient guidance for UME programs in particular. The purpose of this study was to map and examine the current organizational communication systems and processes in a UME program. Employing a phenomenology-grounded and participatory approach, this study sought to understand the organizational communication system from medical students' perspective. The research team consisted of a core team and 13 medical student co-investigators. The research employed multiple methods, including focus groups, individual interviews, and two surveys (one reflective of the focus group questions, the other requesting students to submit "examples" of communications). To provide context for student responses, non-student participants (faculty, administrators, and staff) were sampled, as they too express concerns about communication. Over 400 students across all cohorts and 17 non-students participated. Data were iteratively analyzed and checked for triangulation. Findings reveal the complex nature of organizational communication and student-oriented communications. They reveal program-impactful strengths, weaknesses, gaps, and tensions, and they speak to the role of organizational communication practices in influencing both climate and culture. 
With regard to communications, students receive multiple simultaneous communications from multiple sources/channels, both formal (e.g., official email) and informal (e.g., social media). Students identified organizational strengths, including the desire to improve student voice, and message frequency. They also identified weaknesses related to over-reliance on emails, numerous platforms with inconsistent utilization, incorrect information, insufficient transparency, assessment/input fatigue, tacit expectations, scheduling/deadlines, responsiveness, and mental health confidentiality concerns. Moreover, they noted gaps related to lack of coordination/organization, ambiguous point-persons, student "voice-only", open communication loops, lack of core centralization and consistency, and mental health bridges. Findings also revealed organizational identity and cultural characteristics as impactful on the medical school experience. Cultural characteristics included program size, diversity, urban setting, student organizations, community engagement, crisis framing, learning for exams, inefficient bureaucracy, and professionalism. Moreover, participants identified system structures that do not always leverage cultural strengths or reduce cultural problematics. Based on the results, opportunities for productive change are identified. These include leadership visibly supporting and enacting overall organizational narratives, making greater efforts to consistently "close the loop", regularly sharing how student input effects change, employing strategies of crisis communication more often, strengthening communication infrastructure, ensuring structures facilitate effective operations and change efforts, and highlighting change efforts in informational communication. Organizational communication and communications are not soft skills or of secondary concern within organizations; rather, they are foundational in nature and serve to educate and inform all stakeholders. 
As primary stakeholders, students and their success directly affect the accomplishment of organizational goals. This study demonstrates how inquiry into how students navigate their educational experience extends research-based knowledge and provides actionable knowledge for the improvement of organizational operations in UME. Keywords: medical education programs, organizational communication, participatory research, qualitative mixed methods
Procedia PDF Downloads 118
381 The Effects of Alpha-Lipoic Acid Supplementation on Post-Stroke Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials
Authors: Hamid Abbasi, Neda Jourabchi, Ranasadat Abedi, Kiarash Tajernarenj, Mehdi Farhoudi, Sarvin Sanaie
Abstract:
Background: Alpha-lipoic acid (ALA), a fat- and water-soluble, sulfur-containing coenzyme, has received considerable attention for its potential therapeutic role in diabetes, cardiovascular diseases, cancers, and central nervous system diseases. This investigation aims to evaluate the possible protective effects of ALA in stroke patients. Methods: This meta-analysis was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The PICO criteria were as follows: Population/Patients (P: stroke patients); Intervention (I: ALA); Comparison (C: control); Outcome (O: blood glucose, lipid profile, oxidative stress, inflammatory factors). In vitro, in vivo, and ex vivo studies, case reports, and quasi-experimental studies were excluded from the analysis. The Scopus, PubMed, Web of Science, and EMBASE databases were searched until August 2023. Results: Of 496 records screened at the title/abstract stage, 9 studies were included in this meta-analysis. Sample sizes in the included studies varied between 28 and 90. Risk of bias was assessed using the second version of the Cochrane risk-of-bias (RoB) tool for randomized controlled trials (RCTs); 8 studies had a high risk of bias. Discussion: To the best of our knowledge, the present meta-analysis is the first study addressing the effectiveness of ALA supplementation in improving post-stroke metabolic markers, including lipid profile, oxidative stress, and inflammatory indices. Certain potential limitations inherent in this study must be acknowledged. First, the type of treatment (oral or intravenous infusion) could alter the bioavailability of ALA. Our study also had restricted evidence regarding the impact of ALA supplementation on the included outcomes. 
Therefore, further research is warranted to delve into the effects of ALA, specifically on inflammation and oxidative stress. Funding: The research protocol was approved and supported by the Student Research Committee, Tabriz University of Medical Sciences (grant number: 72825). Registration: This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO ID: CR42023461612). Keywords: alpha-lipoic acid, lipid profile, blood glucose, inflammatory factors, oxidative stress, meta-analysis, post-stroke
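The pooling step of such a meta-analysis of RCTs is commonly carried out with a random-effects model. The sketch below implements the standard DerSimonian-Laird estimator in plain Python; it is an illustrative example only, and the effect sizes and variances are synthetic placeholders, not data from the trials reviewed here.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a random-effects model (DerSimonian-Laird)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic around the fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical mean-difference effect sizes and variances from four small trials
effects = [-0.6, -0.05, -0.5, 0.2]
variances = [0.04, 0.06, 0.05, 0.08]
pooled, ci = dersimonian_laird(effects, variances)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

With heterogeneous inputs like these, the between-study variance is nonzero and the random-effects weights are flatter than the fixed-effect weights, which widens the confidence interval accordingly.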
Procedia PDF Downloads 65
380 Moral Rights: Judicial Evidence Insufficiency in the Determination of the Truth and Reasoning in Brazilian Morally Charged Cases
Authors: Rainner Roweder
Abstract:
Theme: The present paper analyzes the specificity of judicial evidence linked to dignity and personality rights, otherwise known as moral rights, in the determination of the truth and the formation of judicial reasoning in cases concerning these areas. This research concerns the way courts in Brazilian domestic law search for truth and handle evidence in cases involving moral rights, which are abundant and important in Brazil. The main object of the paper is to analyze the effectiveness of evidence in the formation of judicial conviction in matters related to morally contested rights, based on the Brazilian and, as a comparison, other Latin American legal systems. In short, the rights of dignity and personality are moral. However, the evidentiary legal system expects a rational demonstration of moral rights that generates judicial conviction or persuasion. Morality, in turn, tends to be difficult or impossible to demonstrate in court, generating the problem considered in this paper: the study of the demonstration of morality as proof in court. In this sense, the more closely a right is linked to morality, the more difficult it is to demonstrate in court, expanding the field of judicial discretion and generating legal uncertainty. More specifically, the new personality rights, such as gender and its possibility of alteration, further amplify the problem, being essentially an intimate matter that does not fit the objective, rational evidentiary system as it normally applies to other categories, such as contracts. Therefore, evidencing this legal category in court, with the level of security required by the law, is a herculean task. It becomes virtually impossible to use the same evidentiary system when judging the rights researched here; this generates the need for a new design of the evidentiary task regarding personality rights, the central effort of the present paper. 
Methodology: In the investigation phase, the inductive method was used together with the comparative law method; in the data treatment phase, the inductive method was also used. Comparison of doctrine, legislation, and jurisprudence was the research technique employed. Results: In addition to the peculiar characteristics of personality rights not found in other rights, part of them are essentially linked to morality and are not objectively verifiable by design, and it is necessary to use specific argumentative theories, with interdisciplinary support, for their secure confirmation. The traditional pragmatic theory of proof, having an obviously objective character, aggravates decisionism and generates legal insecurity when applied to rights linked to morality; its reconstruction for morally charged cases is therefore necessary, with the possible use of a “predictive theory” (and predictive facts) through algorithms in data collection and treatment. Keywords: moral rights, proof, pragmatic proof theory, insufficiency, Brazil
Procedia PDF Downloads 110
379 Lineament Analysis as a Method of Mineral Deposit Exploration
Authors: Dmitry Kukushkin
Abstract:
Lineaments form complex grids on Earth's surface. Currently, one particular object of study for many researchers is the analysis and geological interpretation of lineament density maps in an attempt to locate various geological structures. But lineament grids are superimpositions of global, regional, and local components, and this superimposition of grids of various scales renders the method less effective. Besides, erosion processes and the erosional resistance of rocks lying on the surface play a significant role in the formation of lineament grids. As a result, a specific lineament density map is characterized by poor contrast (most anomalies do not exceed the average values by more than 30%) and an unstable relation with local geological structures. Our method makes it possible to confidently determine the location and boundaries of local geological structures that are likely to contain mineral deposits. Maps of the fields of lineament distortion (residual specific density) created by our method are characterized by high contrast, with anomalies exceeding the average by upward of 200%, and a stable correlation with local geological structures containing mineral deposits. Our method considers a lineament grid as a general lineament field: the surface manifestation of the stress and strain fields of Earth associated with geological structures of global, regional, and local scales. Each of these structures has its own field of brittle dislocations that appears on the surface as its lineament field. Our method singles out the local components by suppressing the global and regional components of the general lineament field; the remaining local lineament field is an indicator of local geological structures. The following are some examples of the method's application: 1. Srednevilyuiskoye gas condensate field (Yakutia) - a direct proof of the effectiveness of the methodology; 2. Structure of Astronomy (Taimyr) - confirmed by seismic survey; 3. 
Active gold mine of Kadara (Chita Region) - confirmed by geochemistry; 4. Active gold mine of Davenda (Yakutia) - determined the boundaries of the granite massif that controls mineralization; 5. A prospective hydrocarbon exploration object in northern Algeria - correlated with the results of geological, geochemical, and geophysical surveys. For both Kadara and Davenda, the method demonstrated that the intense anomalies of the local lineament fields are consistent with the geochemical anomalies and indicate the presence of gold at commercial levels. Our method of suppressing the global and regional components isolates a local lineament field. In the early stages of geological exploration for oil and gas, this allows determining the boundaries of various geological structures with very high reliability. Our method therefore allows optimized placement of seismic profiles and exploratory drilling equipment, which reduces the costs of prospecting and exploring deposits and accelerates their commissioning. Keywords: lineaments, mineral exploration, oil and gas, remote sensing
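The suppression of global and regional components described above can be illustrated with a simple residual-density computation: a wide moving average estimates the long-wavelength (regional and global) part of a lineament density grid, and subtracting it leaves the local field, expressed as a percentage of the grid mean. This is an illustrative sketch under assumed inputs, not the authors' algorithm; the grid values, window radius, and box-filter choice are all hypothetical.

```python
# Illustrative sketch: isolating a "local" lineament component by removing a
# long-wavelength regional trend from a lineament density grid.

def box_smooth(grid, radius):
    """Estimate the regional (long-wavelength) component with a moving average."""
    n_rows, n_cols = len(grid), len(grid[0])
    out = [[0.0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        for j in range(n_cols):
            vals = [grid[r][c]
                    for r in range(max(0, i - radius), min(n_rows, i + radius + 1))
                    for c in range(max(0, j - radius), min(n_cols, j + radius + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def residual_density(grid, radius=2):
    """Residual field: local density minus regional trend, in percent of the mean."""
    regional = box_smooth(grid, radius)
    flat = [v for row in grid for v in row]
    mean = sum(flat) / len(flat)
    return [[100.0 * (grid[i][j] - regional[i][j]) / mean
             for j in range(len(grid[0]))]
            for i in range(len(grid))]

# A synthetic 5x5 density grid: one local high on a flat background.
grid = [[1.0] * 5 for _ in range(5)]
grid[2][2] = 4.0   # hypothetical local structure
res = residual_density(grid, radius=2)
print(res[2][2] > res[0][0])  # True: the anomaly stands out in the residual field
```

On this synthetic grid the central residual exceeds the mean by well over 200% while the flat background stays near zero, mirroring the contrast the method claims for residual specific density maps.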
Procedia PDF Downloads 305
378 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada’s largest contaminated sites. Since its closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multi-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment, and, in particular, harbour sediments. Additionally, longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality beyond the remedial efforts were evaluated; these included a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery. 
Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability during three years of remediation compared to baseline, even when using different sampling methodologies, except for significant increases in total PAH concentrations noted during one year of remediation monitoring. The data confirmed the effectiveness of the mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour. Keywords: contaminated sediment, monitoring, recovery, remediation
Procedia PDF Downloads 237
377 Lateral Retroperitoneal Transpsoas Approach: A Practical Minimal Invasive Surgery Option for Treating Pyogenic Spondylitis of the Lumbar Vertebra
Authors: Sundaresan Soundararajan, Chor Ngee Tan
Abstract:
Introduction: Pyogenic spondylitis, though usually treated conservatively with long-term antibiotics, requires surgical debridement and reconstruction in about 10% to 20% of cases. The classical approach adopted by many surgeons has been the anterior approach, which ensures thorough and complete debridement. This, however, comes with high rates of morbidity due to the nature of its access. The direct lateral retroperitoneal approach, whose usage has been growing in degenerative lumbar disease, has potential in treating pyogenic spondylitis given its ease of access and relatively low risk of complications. Aims/Objectives: The objective of this study was to evaluate the effectiveness and clinical outcomes of the lateral approach in the surgical management of pyogenic spondylitis of the lumbar spine. Methods: A retrospective chart analysis was performed on all patients who presented with pyogenic spondylitis (lumbar discitis/vertebral osteomyelitis) and underwent direct lateral retroperitoneal lumbar vertebral debridement and posterior instrumentation between 2014 and 2016. Data on blood loss, operating time, surgical complications, clinical outcomes, and fusion rates were recorded. Results: A total of 6 patients (3 male and 3 female) underwent this procedure at a single institution by a single surgeon during the defined period. One patient presented with an infected implant (PLIF) and vertebral osteomyelitis, while the other five presented with single-level spondylodiscitis. All patients underwent lumbar debridement, iliac strut grafting, and posterior instrumentation (revision of screws in the infected PLIF case). The mean operating time was 308.3 minutes for the 6 cases. Mean blood loss was 341 cc (range 200 cc to 600 cc). The presenting symptom of back pain resolved in all 6 cases, while the 2 cases that presented with lower limb weakness had improvement of neurological deficits. 
One patient had a strut graft dislodge during posterior instrumentation and needed graft revision intraoperatively. Infective markers subsequently normalized in all patients. All subjects also showed radiological evidence of fusion at 6-month follow-up. Conclusions: The lateral approach to treating pyogenic spondylitis is a viable option, as it allows debridement and reconstruction without the risks that come with anterior approaches. It allows efficient debridement, short surgical time, moderate blood loss, and a low risk of vascular injury. Clinical outcomes and fusion rates with this approach also support its use as a practical MIS surgical option for such infection cases. Keywords: lateral approach, minimally invasive, pyogenic spondylitis, XLIF
Procedia PDF Downloads 177
376 Armed Forces Special Powers Act and Human Rights in Nagaland
Authors: Khrukulu Khusoh
Abstract:
The strategies and tactics used by governments throughout the world to counter terrorism and insurgency over the past few decades include the declaration of states of siege or martial law, the enactment of anti-terrorist legislation, and the strengthening of judicial powers. Some of these measures have been more successful than others, but some have proved counterproductive, alienating the public from the authorities and further polarizing an already fractured political environment. Such cases of alienation and polarization can be seen in the northeastern states of India. The Armed Forces (Special Powers) Act, introduced to curb insurgency in the remote jungles of far-flung areas, has remained a telling tale of agony in northeast India. Grievous trauma through encounter killings, custodial deaths, unwarranted torture, and exploitation of women and children in several ways has been reported in Nagaland, Manipur, and other northeastern states where the Indian army exercises powers under the Armed Forces (Special Powers) Act. While terrorism and insurgency are destructive of human rights, counter-terrorism does not necessarily restore and safeguard them. This special law has not proven effective, particularly in dealing with terrorism and insurgency: the insurgency has persisted in the state of Nagaland for sixty years notwithstanding the presence of a good number of special laws. There is a need to fight elements that threaten the security of a nation, but the methods chosen should be measured; otherwise, the fight is lost. There has been no review of the effectiveness or failure of the Act in realizing its intended purpose, nor any attempt on the part of the state to look critically at the violation of the rights of innocent citizens by state agencies. The Indian state keeps enacting laws, but none could be effectively applied in the absence of clarity of purpose. 
Therefore, every new law enacted time and again to deal with security threats has failed to bring any solution over the last six decades. The Indian state resorts to measures that yield little in terms of strategic benefit: short-term victories that might result in long-term tragedies. Therefore, right-thinking citizens and human rights activists across the country feel that the introduction of the Armed Forces (Special Powers) Act was itself a violation of human rights and that its continuation is undesirable. What worries everyone is the arbitrary use, or rather misuse, of power by the Indian armed forces, particularly against the weaker sections of society, including women. After having been subjected to indiscriminate abuse of that law, the people of northeast India have long been demanding its revocation. The present paper attempts to critically examine the violation of human rights under the Armed Forces (Special Powers) Act. It also attempts to bring out the impact of the Armed Forces (Special Powers) Act on the Naga people. Keywords: armed forces, insurgency, special laws, violence
Procedia PDF Downloads 497
375 Improving the Management Systems of the Ownership Risks in Conditions of Transformation of the Russian Economy
Authors: Mikhail V. Khachaturyan
Abstract:
The article analyzes problems of improving the management systems for ownership risks under the transformation of the Russian economy. Among the main sources of threats that business owners should highlight is the inefficiency of the implementation of business models and of interaction with hired managers. In this context, it is particularly important to analyze the relationship between business models and ownership risks. Analysis of this problem is relevant for a number of reasons. Firstly, the increased risk appetite of the owner directly affects the business model and the composition of his holdings. Secondly, owners with significant stakes in a company are factors in the formation of particular types of risks for owners, whose handling has a significant influence on a firm's competitiveness and ultimately determines its survival. Thirdly, an inefficient system of ownership risk management is one of the main causes of mass bankruptcies, which significantly affects the stable operation of the economy as a whole. The separation of the processes of possession, disposal, and use in modern organizations causes not only problems in the interaction between the owner and managers in managing the organization as a whole, but also asymmetric information about the kinds and forms of the main risks. Managers tend to avoid risky projects and inhibit the diversification of the organization's assets, while owners may insist on the development of such projects, with the aim not only of creating new value for themselves and for consumers, but also of increasing the value of the company as a result of increasing capital. With ownership and management separated, the evaluation of projects by their risk-yield ratio requires preserving the owner's influence on the process of developing and making management decisions. 
It is obvious that without a clearly structured system of owner participation in managing the risks of the business, further development is hopeless. In modern conditions of forming a risk management system, owners are compelled to compromise between the desire to increase the organization's ability to produce new value, and consequently its cost, through the implementation of risky projects, and the need to tolerate the opportunity costs of forgone risk diversification. Improving the effectiveness of ownership risk management may also encourage creditors to pursue claims against inefficient owners more actively, which would ultimately improve the efficiency of ownership control models in averting insolvency. It is obvious that in modern conditions, the success of an ownership risk management and audit model is largely determined by the ability and willingness of the owner to find a compromise between potential opportunities for expanding the firm's ability to create new value through risk, and maintaining the current level of new value creation at an acceptable level of risk through the use of diversification models. Keywords: improving, ownership risks, problem, Russia
Procedia PDF Downloads 351
374 Development of the Integrated Quality Management System of Cooked Sausage Products
Authors: Liubov Lutsyshyn, Yaroslava Zhukova
Abstract:
Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has also led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic work of the organizations directly involved in the production, storage, and sale of food products, as well as without end-to-end traceability management and exchange of information. The aim of this research is the development of an integrated quality management and safety assurance system based on the principles of HACCP, traceability, and a systems approach, with the creation of an algorithm for the identification and monitoring of the parameters of the technological process of manufacturing cooked sausage products. A methodology was developed for implementing the integrated system, based on the principles of HACCP, traceability, and a systems approach, during the manufacture of cooked sausage products in order to effectively ensure the defined properties of the finished product. As a result of the research, an evaluation technique and performance criteria for the implementation and operation of the HACCP-based quality management and safety assurance system were developed and substantiated. The paper reveals regularities in the influence of applying HACCP principles, traceability, and the systems approach on the quality and safety parameters of the finished product, as well as regularities in the identification of critical control points. 
The algorithm of functioning of the integrated quality management and safety assurance system is also described, and key requirements are defined for software allowing the prediction of finished product properties, timely correction of the technological process, and traceability of manufacturing flows. Based on the results obtained, a typical scheme of the integrated quality management and safety assurance system, based on HACCP principles with elements of end-to-end traceability and a systems approach, was developed for the manufacture of cooked sausage products. Quantitative criteria for evaluating the performance of the quality management and safety assurance system were developed, and a set of guidance documents for the implementation and evaluation of the HACCP-based integrated system in meat processing plants was prepared. The research demonstrated the effectiveness of continuous monitoring of the manufacturing process through control at the identified critical control points and substantiated the optimal number of critical control points for the manufacture of cooked sausage products. The main results of the research were appraised during 2013-2014 at seven meat processing enterprises and implemented at JSC «Kyiv meat processing plant». Keywords: cooked sausage products, HACCP, quality management, safety assurance
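Monitoring at critical control points, as described above, can be sketched as a simple limit check in the way HACCP prescribes: each CCP has critical limits, every reading is compared against them, and a deviation triggers corrective action. This is an illustrative sketch only; the CCP name and temperature limits below are hypothetical, not values from the developed system.

```python
# Illustrative sketch: checking readings at a HACCP critical control point (CCP)
# against its critical limits and flagging deviations for corrective action.

from dataclasses import dataclass

@dataclass
class CriticalControlPoint:
    name: str
    lower_limit: float   # e.g. minimum core temperature, degrees C (hypothetical)
    upper_limit: float

    def check(self, reading: float) -> str:
        """Return 'ok' or a deviation flag that should trigger corrective action."""
        if self.lower_limit <= reading <= self.upper_limit:
            return "ok"
        return (f"DEVIATION at {self.name}: {reading} outside "
                f"[{self.lower_limit}, {self.upper_limit}] - corrective action required")

cooking = CriticalControlPoint("cooking core temperature", 72.0, 85.0)
log = [cooking.check(r) for r in (74.5, 71.0, 80.2)]
print(log[1])  # the out-of-limit reading is flagged
```

In a full system each check would also be recorded for traceability, so that a deviation can be traced back through the manufacturing flow to the affected batch.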
Procedia PDF Downloads 248
373 Road Systems as Environmental Barriers: An Overview of Roadways in Their Function as Fences for Wildlife Movement
Authors: Rachael Bentley, Callahan Gergen, Brodie Thiede
Abstract:
Roadways have a significant impact on the environment insofar as they function as barriers to wildlife movement, both through road mortality and through resultant road avoidance. Roads have an immense presence worldwide, one predicted to increase substantially in the next thirty years. As roadways become even more common, it is important to consider their environmental impact and to mitigate the negative effects they have on wildlife and wildlife mobility. A thorough analysis of several related studies led to a common conclusion: roads cause habitat fragmentation, which can lead split populations to evolve differently, for better or for worse. Though some populations adapted positively to roadways, becoming more resistant to road mortality and more tolerant of noise and chemical contamination, many others experienced maladaptation, either due to chemical contamination in and around their environment, or because of genetic mutations from inbreeding when their population was fragmented too severely to support a group large enough for healthy genetic exchange. Large mammals were especially susceptible to maladaptation from inbreeding, as they require larger areas to roam and therefore even more space to sustain a healthy population. Regardless of whether a species evolved positively or negatively as a result of its proximity to a road, animals tended to avoid roads, making the loss of genetic diversity from habitat fragmentation an exceedingly prevalent issue in the larger discussion of road ecology. Additionally, the consideration of solutions, such as overpasses and underpasses, is crucial to ensuring the long-term survival of many wildlife populations. 
In studies addressing the effectiveness of overpasses and underpasses, animals seemed to adjust well to these solutions, but strategic placement, as well as proper sizing, proper height, shelter from road noise, and other considerations, proved important in construction. When an underpass or overpass was well built and well shielded from human activity, animals’ usage of the structure increased significantly throughout its first five years, thus reconnecting previously divided populations. Still, these structures are costly, and they are often unable to fully address issues such as light, noise, and contaminants from vehicles. The need for further discussion of new, creative solutions therefore remains paramount. Roads are one of the most consistent and prominent features of today’s landscape, but their environmental impacts are largely overlooked. While roads are useful for connecting people, they divide landscapes and animal habitats. Further research and investment in possible solutions is therefore necessary to mitigate the negative effects roads have on wildlife mobility and to prevent issues arising from habitat fragmentation. Keywords: fences, habitat fragmentation, roadways, wildlife mobility
Procedia PDF Downloads 181