Search results for: Markov fields
449 A Survey of Mental and Personality Profiles of Malingerer Clients of an Iranian Forensic Medicine Center Based on the Revised NEO Personality Inventory and the Minnesota Multiphasic Personality Inventory Questionnaires
Authors: Morteza Rahbar Taramsari, Arya Mahdavi Baramchi, Mercedeh Enshaei, Ghazaleh Keshavarzi Baramchi
Abstract:
Introduction: Malingering is one of the most challenging issues in forensic psychology and imposes a heavy financial burden on health care and legal systems. It seems that some mental and personality abnormalities might play a crucial role in developing this condition. Materials and Methods: In this cross-sectional study, we aimed to assess 100 malingering clients of the Gilan province general office of forensic medicine, all of whom completed the related questionnaires. Data on psychometric characteristics were collected through the 71-item short form of the Minnesota Multiphasic Personality Inventory (MMPI), and personality traits were assessed with the NEO Personality Inventory-Revised (NEO PI-R), a 240-item instrument regarded as a reliable and accurate measure of the five domains of personality. Results: The 100 malingering clients (55 males and 45 females) ranged in age from 23 to 45 years (32 +/- 5.6). Regarding marital status, 36% were single, 57% were married, and 7% were divorced. Almost two-thirds of the participants (64%) were unemployed, 21% were self-employed, and the rest were employed. The MMPI clinical scales revealed the following mean (SD) T scores: Hypochondriasis (Hs) 67 (9.2), Depression (D) 87 (7.9), Hysteria (Hy) 74 (5.8), Psychopathic Deviate (Pd) 62 (8.5), Masculinity-Femininity (MF) 76 (8.4), Paranoia (Pa) 62 (4.5), Psychasthenia (Pt) 80 (7.9), Schizophrenia (Sc) 69 (6.8), Hypomania (Ma) 64 (5.9), and Social Introversion (Si) 58 (4.3). The NEO PI-R test covered the five domains of personality, with mean (SD) T scores of Neuroticism 65 (9.2), Extraversion 51 (7.9), Openness 43 (5.8), Agreeableness 35 (3.4), and Conscientiousness 42 (4.9).
Conclusion: According to the MMPI, Hypochondriasis (Hs), Depression (D), Hysteria (Hy), Masculinity-Femininity (MF), Psychasthenia (Pt), and Schizophrenia (Sc) scored high (T >= 65) in our malingering clients, which indicates the pathological range and psychological significance. Based on the NEO PI-R, Neuroticism was in the high range; Openness, Agreeableness, and Conscientiousness, on the other hand, were in the low range, and Extraversion was in the average range. It therefore seems that malingerers require basic evaluations across different psychological fields. Additional research in this area is needed to provide stronger evidence of the possible effects of the mentioned factors on malingering.
Keywords: malingerers, mental profile, MMPI, NEO PI-R, personality profile
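The conclusion's list of pathologically elevated scales follows directly from the mean T scores in the Results and the T >= 65 cutoff; a minimal sketch of that check (values transcribed from the abstract):

```python
# Mean T scores from the abstract's MMPI clinical scales.
t_scores = {"Hs": 67, "D": 87, "Hy": 74, "Pd": 62, "MF": 76,
            "Pa": 62, "Pt": 80, "Sc": 69, "Ma": 64, "Si": 58}

# T >= 65 is the cutoff the authors treat as pathological / psychologically significant.
elevated = [scale for scale, t in t_scores.items() if t >= 65]
print(elevated)  # ['Hs', 'D', 'Hy', 'MF', 'Pt', 'Sc']
```

This reproduces exactly the six scales named in the conclusion.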
Procedia PDF Downloads 266
448 Data Model to Predict Customized Skin Care Products Using Biosensors
Authors: Ashi Gautam, Isha Shukla, Akhil Seghal
Abstract:
Biosensors are analytical devices that use a biological sensing element to detect and measure a specific chemical substance or biomolecule in a sample. Owing to their high specificity, sensitivity, and selectivity, they are widely used in medical diagnostics, environmental monitoring, and food analysis. In this research paper, a machine learning model is proposed for predicting the suitability of skin care products from biosensor readings. The model takes features extracted from those readings, such as biomarker concentration, skin hydration level, presence of inflammation, sensitivity, and free radicals, and outputs the most appropriate skin care product for an individual. It is trained on a labeled dataset of biosensor readings paired with skin care product information, and its performance is evaluated with several metrics, including accuracy, precision, recall, and F1 score. The aim of this research is to develop a personalised skin care product recommendation system from biosensor data; personalised recommendations are particularly valuable in the skin care industry, where they can lead to better outcomes for consumers. Because the model is based on supervised learning, it learns patterns and relationships between biosensor readings and skin care products from the labeled data; once trained, it can predict the most suitable product for an individual from their readings. The results of this study show that the proposed model makes this prediction accurately.
The evaluation metrics used in this study demonstrate the effectiveness of the model, which has significant potential for practical use in personalised skin care product recommendations. The proposed model is a promising development for the skin care industry: its ability to accurately match individuals with products from their biosensor readings can lead to better outcomes for consumers. Further research can be done to improve the model's accuracy and effectiveness.
Keywords: biosensors, data model, machine learning, skin care
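As a rough illustration of the supervised pipeline described above, the sketch below trains a minimal classifier and computes the four reported metrics. The feature set, the synthetic data, and the nearest-centroid learner are assumptions for illustration only, not details from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for biosensor readings (e.g. hydration, inflammation,
# sensitivity, free radicals) -- illustrative values, not study data.
n = 400
X = rng.normal(size=(n, 4))
# Hypothetical binary label: product A (0) vs product B (1).
y = (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

X_train, y_train = X[:300], y[:300]
X_test, y_test = X[300:], y[300:]

# Nearest-centroid classifier: a minimal supervised learner.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)

# Evaluation metrics named in the abstract (binary, positive class = 1).
tp = int(((pred == 1) & (y_test == 1)).sum())
fp = int(((pred == 1) & (y_test == 0)).sum())
fn = int(((pred == 0) & (y_test == 1)).sum())
accuracy = float((pred == y_test).mean())
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(round(accuracy, 2), round(f1, 2))
```

Any supervised learner could replace the centroid rule; the evaluation step is what the abstract's metrics correspond to.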
Procedia PDF Downloads 97
447 Verification of Dosimetric Commissioning Accuracy of Flattening Filter Free Intensity Modulated Radiation Therapy and Volumetric Modulated Arc Therapy Delivery Using Task Group 119 Guidelines
Authors: Arunai Nambi Raj N., Kaviarasu Karunakaran, Krishnamurthy K.
Abstract:
The purpose of this study was to create American Association of Physicists in Medicine (AAPM) Task Group 119 (TG 119) benchmark plans for flattening filter free (FFF) beam deliveries of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) in the Eclipse treatment planning system. The planning data were compared with the flattening filter (FF) IMRT and VMAT plan data to verify the dosimetric commissioning accuracy of FFF deliveries. AAPM TG 119 proposed a set of test cases called multi-target, mock prostate, mock head and neck, and C-shape to ascertain the overall accuracy of IMRT planning, measurement, and analysis. We used these test cases to investigate the performance of the Eclipse treatment planning system for flattening filter free beam deliveries. For each test case, we generated two sets of treatment plans, the first using 7-9 IMRT fields and the second using a two-arc VMAT technique, for each of the beam deliveries (6 MV FF, 6 MV FFF, 10 MV FF, and 10 MV FFF). The planning objectives and doses were set as described in TG 119, with dose prescriptions for multi-target, mock prostate, mock head and neck, and C-shape of 50, 75.6, 50, and 50 Gy, respectively. The point dose (mean dose to the contoured chamber volume) at the specified positions/locations was measured using a compact (CC-13) ion chamber. The composite planar dose and per-field gamma analyses were measured with an IMatriXX Evaluation 2D array with OmniPro IMRT software (version 1.7b). FFF beam deliveries of IMRT and VMAT plans were comparable to flattening filter beam deliveries, and our planning and quality assurance results matched the TG 119 data. AAPM TG 119 test cases are useful for generating FFF benchmark plans.
From the data obtained in this study, we conclude that the commissioning of FFF IMRT and FFF VMAT delivery was within the limits of TG 119 and that the performance of the Eclipse treatment planning system for FFF plans was satisfactory.
Keywords: flattening filter free beams, intensity modulated radiation therapy, task group 119, volumetric modulated arc therapy
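The per-field gamma analysis reported above compares measured and calculated dose distributions under a combined dose-difference / distance-to-agreement criterion. A minimal 1-D sketch of the gamma index follows; the synthetic Gaussian profiles, grid, and 3%/3 mm criteria are illustrative assumptions, not the study's data:

```python
import numpy as np

def gamma_1d(ref, ev, x, dose_tol=0.03, dist_tol=3.0):
    """Simplified global 1-D gamma index: dose_tol is a fraction of the
    maximum reference dose, dist_tol is in the same units as x (mm)."""
    dmax = ref.max()
    gam = np.empty_like(ref)
    for i in range(len(x)):
        dd = (ev - ref[i]) / (dose_tol * dmax)   # dose-difference term
        dx = (x - x[i]) / dist_tol               # distance-to-agreement term
        gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gam

x = np.linspace(-50.0, 50.0, 201)          # position in mm
ref = np.exp(-(x / 30.0) ** 2)             # synthetic reference profile
ev = np.exp(-((x - 1.0) / 30.0) ** 2)      # evaluated profile, shifted 1 mm
g = gamma_1d(ref, ev, x)
pass_rate = float((g <= 1.0).mean()) * 100.0
print(f"gamma (3%/3 mm) pass rate: {pass_rate:.1f}%")
```

A 1 mm positional error passes comfortably under 3%/3 mm, which is why small delivery offsets do not fail a gamma test.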
Procedia PDF Downloads 146
446 Investigation for Pixel-Based Accelerated Aging of Large Area Picosecond Photo-Detectors
Authors: I. Tzoka, V. A. Chirayath, A. Brandt, J. Asaadi, Melvin J. Aviles, Stephen Clarke, Stefan Cwik, Michael R. Foley, Cole J. Hamel, Alexey Lyashenko, Michael J. Minot, Mark A. Popecki, Michael E. Stochaj, S. Shin
Abstract:
Micro-channel plate photo-multiplier tubes (MCP-PMTs) have become ubiquitous and are widely considered potential candidates for next-generation High Energy Physics experiments due to their picosecond timing resolution, ability to operate in strong magnetic fields, and low noise rates. A key factor that determines the applicability of MCP-PMTs is their lifetime, especially when they are used in high event rate experiments. We have developed a novel method for investigating the aging behavior of an MCP-PMT on an accelerated basis. The method involves exposing a localized region of the MCP-PMT to photons at a high repetition rate. This pixel-based method was inspired by earlier results showing that damage to the photocathode of the MCP-PMT occurs primarily at the site of light exposure and that the surrounding region undergoes minimal damage. One advantage of the pixel-based method is that it allows the dynamics of photocathode damage to be studied at multiple locations within the same MCP-PMT under different operating conditions. In this work, we use the pixel-based accelerated lifetime test to investigate the aging behavior of a 20 cm x 20 cm Large Area Picosecond Photo Detector (LAPPD) manufactured by INCOM Inc. at multiple locations within the same device under different operating conditions. We compare the aging behavior of the MCP-PMT obtained from the first lifetime test, conducted under high gain conditions, to the lifetime obtained at a different gain. Through this work, we aim to correlate the lifetime of the MCP-PMT with the rate of ion feedback, which is a function of the gain of each MCP and which can also vary from point to point across a large-area (400 cm²) MCP. The tests were made possible by the uniqueness of the LAPPD design, which allows independent control of the gain of the chevron-stacked MCPs.
We will further discuss the implications of our results for optimizing the operating conditions of the detector when used in high event rate experiments.
Keywords: electron multipliers (vacuum), LAPPD, lifetime, micro-channel plate photo-multiplier tubes, photoemission, time-of-flight
Procedia PDF Downloads 180
445 Solomon Islands Decentralization Efforts
Authors: Samson Viulu, Hugo Hebala, Duddley Kopu
Abstract:
The Constituency Development Fund (CDF) is a controversial fund that has existed in the Solomon Islands since the early 1990s. It is controversial largely because it is directly handled by members of parliament (MPs) of the Solomon Islands legislative chamber, and it is commonly described as a political slush fund because only the voters of MPs benefit from it, so as to retain loyalty. The CDF was put on a statutory footing by a legislative act in 2013; however, the act has no subsidiary regulations, so its governance is very weak. The CDF is intended to establish development projects in the rural areas of the Solomon Islands to spur economic growth. Although almost USD 500M was spent through the CDF in the last decade, the economy of the Solomon Islands has not grown; rather, it has regressed. The Solomon Islands has now formulated its first home-grown policy aimed at guiding the overall development of the fifty constituencies, improving the delivery mechanisms of the CDF, and strengthening its governance through regulation of the CDF Act 2013. The Solomon Islands Constituency Development Policy is the first for the country since gaining independence in 1978 and places strong emphasis on a cross-sectoral approach through effective partnerships and collaborations and on decentralizing government services to the isolated rural areas of the country. The new policy drives the efforts of the political government to decentralize government services to isolated rural communities and to encourage the participation of rural dwellers in economic activities. The decentralization will see the establishment of constituency offices in all constituencies and the piloting of townships in constituencies that have met the statutory requirements of the state. It also encourages constituencies to become development agents of the national government rather than mere political boundaries.
The decentralization will go hand in hand with the establishment of the Solomon Islands Special Economic Zones (SEZ), where investors will be given special privileges and exemptions from government taxes and permits to attract tangible development to rural constituencies. The design and formulation of the new development policy are supported by the UNDP office in the Solomon Islands. The new policy promotes a reorientation of resource allocation toward the productive and resource sectors, making access to finance easier for entrepreneurs and encouraging growth in rural entrepreneurship in the fields of agriculture, fisheries, downstream processing, and tourism across the Solomon Islands. This new policy approach should greatly assist the country in graduating from least developed country status in a few years' time.
Keywords: decentralization, constituency development fund, Solomon Islands constituency development policy, partnership, entrepreneurship
Procedia PDF Downloads 87
444 Sustainable Valorization of Wine Production Waste: Unlocking the Potential of Grape Pomace and Lees in the Vinho Verde Region
Authors: Zlatina Genisheva, Pedro Ferreira-Santos, Margarida Soares, Cândida Vilarinho, Joana Carvalho
Abstract:
The wine industry produces significant quantities of waste, much of which remains underutilized as a potential raw material. Typically, this waste is either discarded in the fields or incinerated, leading to environmental concerns. By-products of wine production, like lees and grape pomace, are readily available at relatively low costs and hold promise as raw materials for biochemical conversion into valuable products. Reusing these waste materials is crucial, not only for reducing environmental impact but also for enhancing profitability. The Vinhos Verdes demarcated region, the largest wine-producing area in Portugal, has remained relatively stagnant over time. This project aims to offer an alternative income source for producers in the region while also expanding the limited existing research on this area. The main objective of this project is the study of the sustainable valorization of grape pomace and lees from the production of DOC Vinho Verde. Extraction tests were performed to obtain high-value compounds, targeting phenolic compounds from grape pomace and protein-rich extracts from lees. An environmentally friendly technique, microwave extraction, was used for this process. This method is not only efficient but also aligns with the principles of green chemistry, reducing the use of harmful solvents and minimizing energy consumption. The findings from this study have the potential to open new revenue streams for the region's wine producers while promoting environmental sustainability. The optimal conditions for extracting proteins from lees involve the use of NaOH at 150 °C. Regardless of the solvent employed, the ideal temperature for obtaining extracts rich in polyphenol compounds and exhibiting strong antioxidant activity is also 150 °C. For grape pomace, extracts with a high concentration of polyphenols and significant antioxidant properties were obtained at 210 °C.
However, the highest total tannin concentrations were achieved at 150 °C, while the maximum total flavonoid content was obtained at 170 °C.
Keywords: antioxidants, circular economy, polyphenol compounds, waste valorization
Procedia PDF Downloads 21
443 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources and thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers, and a novel Hybrid Attention Mechanism is introduced that can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios.
By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation, which is expected to further advance the development of this field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
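The module described above resamples a high-resolution image into a set of low-resolution images. One lossless way to do that is a space-to-depth rearrangement; the sketch below illustrates that general idea and is an assumption about the mechanism, not the paper's actual module:

```python
import numpy as np

def space_to_depth(img, r):
    """Rearrange an (H, W, C) image into r*r quarter-resolution views stacked
    along the channel axis. No information is lost; downstream layers work on
    smaller spatial maps, cutting per-layer compute."""
    h, w, c = img.shape
    assert h % r == 0 and w % r == 0
    out = img.reshape(h // r, r, w // r, r, c)
    out = out.transpose(0, 2, 1, 3, 4).reshape(h // r, w // r, r * r * c)
    return out

img = np.arange(8 * 8 * 3, dtype=np.float32).reshape(8, 8, 3)
low = space_to_depth(img, 2)
print(low.shape)  # (4, 4, 12): four low-resolution views, same total size
```

Each output position packs an r x r neighborhood into channels, so every pixel of the original survives in one of the low-resolution views.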
Procedia PDF Downloads 25
442 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors
Authors: Jakob Krause
Abstract:
The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time, so past data points have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by balancing two adverse forces: estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of access to identically distributed random variables is weakened and replaced by the assumption that the law of the data-generating process changes over time. Hence, this paper gives a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the last iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for plausibility in its new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only the independence part has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness.
Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching across a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and the modeling of limited understanding and learning behavior in economics.
Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling
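The tradeoff described above, estimator convergence pushing toward more data versus representativeness pushing toward recent data, can be illustrated with a toy drifting-mean process (the process, drift rate, and window sizes below are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Non-ergodic toy setting: the law of the process drifts, so older data points
# are progressively less representative of the present.
T, reps = 2000, 500
mu = 0.002 * np.arange(T)            # slowly drifting true mean
windows = [10, 50, 1000]             # short / intermediate / long estimation windows

mse = {w: 0.0 for w in windows}
for _ in range(reps):
    x = mu + rng.normal(size=T)      # one sample path
    for w in windows:
        # squared error of the windowed mean as an estimate of today's mean
        mse[w] += (x[-w:].mean() - mu[-1]) ** 2 / reps

# Short window: high estimator variance. Long window: representativeness bias.
# An intermediate window minimizes the combined error.
print({w: round(e, 3) for w, e in mse.items()})
```

The intermediate window wins because its variance (~1/w) and its staleness bias are both moderate, which is exactly the optimum the abstract sets out to characterize formally.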
Procedia PDF Downloads 149
441 Design of Identification Based Adaptive Control for Fermentation Process in Bioreactor
Authors: J. Ritonja
Abstract:
Biochemical technology has been developing extremely fast since the middle of the last century. The main reason for such development is the demand for large-scale production of high-quality biologically manufactured products such as pharmaceuticals, foods, and beverages. The impact of the biochemical industry on the world economy is enormous, and its importance has driven intensive development in the scientific disciplines relevant to biochemical technology. In addition to developments in biology and chemistry, which make it possible to understand complex biochemical processes, development in the field of control theory and applications is also very important. In this paper, control of a biochemical reactor for milk fermentation was studied. During the fermentation process, the biophysical quantities must be precisely controlled to obtain a high-quality product; to control these quantities, the bioreactor's stirring drive and/or heating system can be used. Available commercial biochemical reactors are equipped with open-loop or conventional linear closed-loop control systems. Due to the substantial parameter variations and the partial nonlinearity of the biochemical process, the results obtained with these control systems are not satisfactory. To improve the fermentation process, a self-tuning adaptive control system is proposed. Self-tuning adaptive control is suggested because the parameter variations of the studied biochemical process are, in most cases, very slow. To determine a linearized mathematical model of the fermentation process, the recursive least squares identification method was used, and based on the obtained model, a linear quadratic regulator was tuned. The parameter identification and controller synthesis are executed on-line and adapt the controller's parameters to the dynamics of the fermentation process during operation.
The proposed combination represents an original solution for the control of the milk fermentation process, and the purpose of the paper is to contribute to the progress of control systems for biochemical reactors. The proposed adaptive control system was tested thoroughly, and the obtained results make it obvious that it assures much better tracking of the reference signal than a conventional linear control system with fixed control parameters.
Keywords: adaptive control, biochemical reactor, linear quadratic regulator, recursive least square identification
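The identification step described above can be sketched with a standard recursive least squares update applied to a hypothetical first-order plant (the plant parameters, forgetting factor, and noise level are illustrative assumptions, not the paper's bioreactor model):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least squares update with forgetting factor lam,
    so the estimate can track slowly drifting parameters on-line."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam     # update covariance
    return theta, P

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5          # hypothetical plant: y[t] = a*y[t-1] + b*u[t-1]
theta = np.zeros(2)                # parameter estimate [a, b]
P = np.eye(2) * 100.0              # large initial covariance = low confidence

y_prev, u = 0.0, rng.normal(size=500)
for t in range(1, 500):
    y = a_true * y_prev + b_true * u[t - 1] + 0.01 * rng.normal()
    phi = np.array([y_prev, u[t - 1]])       # regressor
    theta, P = rls_step(theta, P, phi, y)
    y_prev = y

print(theta)  # approaches [0.8, 0.5]
```

In the self-tuning scheme, the identified theta would then feed the LQR design at each adaptation step.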
Procedia PDF Downloads 126
440 Evidence of a Negativity Bias in the Keywords of Scientific Papers
Authors: Kseniia Zviagintseva, Brett Buttliere
Abstract:
Science is fundamentally a problem-solving enterprise, and scientists pay more attention to negative things that cause them dissonance and a negative affective state of uncertainty or contradiction. While this view is agreed upon by philosophers of science, there are few empirical demonstrations of it. Here we examine the keywords of papers published by PLOS in 2014 and show with several sentiment analyzers that negative keywords are studied more than positive keywords. Our dataset comprises the 927,406 keywords of 32,870 scientific articles in all fields published in 2014 in the journal PLOS ONE (collected from Altmetric.com). By counting how often the 47,415 unique keywords are used, we can examine whether negative topics are studied more than positive ones. To determine the sentiment of the keywords, we utilized two sentiment analysis tools, Hu and Liu (2004) and SentiStrength (2014); the results below are for Hu and Liu, as these are the less convincing results. The average keyword was used 19.56 times, with half of the keywords used only once and the maximum number of uses being 18,589. Keywords identified as negative were used 37.39 times on average, positive keywords 14.72 times, and neutral keywords 19.29 times. This difference is only marginally significant (F = 2.82, p = .05), but one must keep in mind that more than half of the keywords are used only once, artificially increasing the variance and driving the effect size down. To examine more closely, we looked at the 25 most used keywords that carry a sentiment. Among the top 25, there are only two positive words, ‘care’ and ‘dynamics’, in positions 5 and 13 respectively, with all the rest identified as negative. ‘Diseases’ is the most studied keyword, with 8,790 uses, and ‘cancer’ and ‘infectious’ are the second and fourth most used sentiment-laden keywords.
The sentiment analysis is not perfect, though: the words ‘diseases’ and ‘disease’ are split, taking the 1st and 3rd positions; combined, they remain the most common sentiment-laden keyword, used 13,236 times. Beyond the split words, the sentiment analyzer logs ‘regression’ and ‘rat’ as negative, and these should probably be considered false positives. Despite these potential problems, the effect is apparent, as even positive keywords like ‘care’ could or should be considered negative, since this word most commonly occurs as part of ‘health care’, ‘critical care’, or ‘quality of care’ and is generally associated with how to improve it. All in all, the results suggest that negative concepts are studied more, providing support for the notion that science is most generally a problem-solving enterprise. The results also provide evidence that negativity and contradiction are related to greater productivity and positive outcomes.
Keywords: bibliometrics, keywords analysis, negativity bias, positive and negative words, scientific papers, scientometrics
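The counting procedure described above, classify each unique keyword against a sentiment lexicon and then compare average use counts, can be sketched as follows (the tiny lexicon and counts are hypothetical stand-ins, not the study's data or the actual Hu & Liu lexicon):

```python
from collections import Counter

# Toy keyword list standing in for the PLOS ONE keyword corpus (hypothetical).
keywords = (["diseases"] * 50 + ["cancer"] * 30 + ["infectious"] * 20
            + ["care"] * 25 + ["dynamics"] * 10 + ["regression"] * 15)

# Minimal sentiment lexicon in the spirit of Hu and Liu (2004);
# these entries are assumptions for illustration only.
negative = {"diseases", "cancer", "infectious"}
positive = {"care", "dynamics"}

counts = Counter(keywords)

def mean_uses(words):
    """Average use count over the unique keywords falling in a sentiment class."""
    uses = [counts[w] for w in counts if w in words]
    return sum(uses) / len(uses)

neg_mean = mean_uses(negative)
pos_mean = mean_uses(positive)
print(neg_mean, pos_mean)  # negative keywords used more often on average
```

This mirrors the paper's comparison: the unit of analysis is the unique keyword, and the statistic is its average number of uses per sentiment class.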
Procedia PDF Downloads 187
439 Digital Transformation and Digitalization of Public Administration
Authors: Govind Kumar
Abstract:
The concept of ‘e-governance’, brought about by the new wave of reforms, namely ‘LPG’, in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation equips governments with qualitative decisions, optimization in the rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based application interfaces. ICT-based applications and technologies have enormous potential for effecting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between clients and hosts. Crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments detect illegal mining, tackle deforestation, and manage freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving the five Es of e-governance (effective, efficient, easy, empower, and equity) and the six Rs of sustainable development (reduce, reuse, recycle, recover, redesign, and remanufacture). If such digital transformation gains traction within the government framework, it will replace traditional administration with the digitalization of public administration. On the other hand, it confronts governments with a new set of challenges, such as the digital divide, e-illiteracy, and the technological divide, and problems such as handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, and phishing. Therefore, it is essential to bring in the right mixture of technological and humanistic interventions to address these issues.
This is because technology lacks an emotional quotient, and the administration does not work like technology; both are self-effacing unless a blend of technology and a humane face is brought into the administration. The paper will empirically analyze the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration, on the basis of a synthesis of two case studies drawn from two diverse fields of administration, and present a future framework for the study.
Keywords: digital transformation, electronic governance, public administration, knowledge framework
Procedia PDF Downloads 101
438 Composition Dependence of Ni 2p Core Level Shift in Fe1-xNix Alloys
Authors: Shakti S. Acharya, V. R. R. Medicherla, Rajeev Rawat, Komal Bapna, Deepnarayan Biswas, Khadija Ali, K. Maiti
Abstract:
The discovery of the invar effect in the Fe1-xNix alloy with 35% Ni concentration has stimulated enormous experimental and theoretical research. Elemental Fe and low-Ni-concentration Fe1-xNix alloys, which possess a body-centred cubic (bcc) crystal structure at ambient temperature and pressure, transform to a hexagonally close-packed (hcp) phase at around 13 GPa. Magnetic order was found to be absent at 11 K for the Fe92Ni8 alloy when subjected to a high pressure of 26 GPa; density functional theoretical calculations predicted substantial hyperfine magnetic fields, but these were not observed in Mössbauer spectroscopy. The bulk modulus of fcc Fe1-xNix alloys with Ni concentrations above 35% is found to be independent of pressure, and the magnetic moment of Fe is found to be almost the same in these alloys from 4 to 10 GPa. Fe1-xNix alloys exhibit a complex microstructure formed by a series of complex phase transformations (martensitic transformation, spinodal decomposition, ordering, monotectoid reaction, eutectoid reaction) at temperatures below 400 °C. Despite the existence of several theoretical models, the field is still in its infancy, lacking full knowledge of the anomalous properties exhibited by these alloys. Fe1-xNix alloys were prepared by arc melting the high-purity constituent metals in an argon ambient and were annealed at around 300 °C in a vacuum-sealed quartz tube for two days to make the samples homogeneous. The alloys were structurally characterized by x-ray diffraction and were found to exhibit a transition from bcc to fcc for x > 0.3. Ni 2p core levels of the alloys were measured using high-resolution (0.45 eV) x-ray photoelectron spectroscopy. The Ni 2p core level shifts to lower binding energy with respect to that of pure Ni metal, giving rise to negative core level shifts (CLSs). The measured CLSs exhibit a linear dependence in the fcc region (x > 0.3) and were found to deviate slightly in the bcc region (x < 0.3).
The ESCA potential model fails to correlate CLSs with site potentials or charges in metallic alloys. CLSs in these alloys occur mainly due to a shift of the valence bands with composition caused by intra-atomic charge redistribution. Keywords: arc melting, core level shift, ESCA potential model, valence band
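The reported linear dependence of CLS on composition in the fcc region can be quantified with an ordinary least-squares fit. A minimal sketch, using illustrative (not measured) CLS values that go to zero for pure Ni:

```python
import numpy as np

# Hypothetical Ni concentrations x (fcc region, x > 0.3) and CLS values in eV.
# Illustrative numbers only, not the measured data from this work.
x = np.array([0.35, 0.45, 0.55, 0.65, 0.80, 1.00])
cls = np.array([-0.52, -0.44, -0.36, -0.28, -0.16, 0.00])  # CLS -> 0 at pure Ni

# Degree-1 polynomial fit: polyfit returns [slope, intercept]
slope, intercept = np.polyfit(x, cls, 1)

# Residuals quantify deviation from linearity (cf. the bcc region, x < 0.3)
residuals = cls - (slope * x + intercept)
```

In the same way, fitting the bcc-region points separately and comparing residuals would make the reported slight deviation from linearity explicit.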
Procedia PDF Downloads 380
437 Comparative Analysis of Chemical Composition and Biological Activities of Ajuga genevensis L. in in vitro Culture and Intact Plants
Authors: Naira Sahakyan, Margarit Petrosyan, Armen Trchounian
Abstract:
One of the tasks in contemporary biotechnology, pharmacology and other fields of human activity is to obtain biologically active substances from plants. These substances are essential in the treatment of many diseases due to their high therapeutic value and lack of visible side effects. However, the possibility of obtaining these metabolites is sometimes limited by the decline of wild-growing plants. That is why plant cell cultures are of great interest as alternative sources of biologically active substances. Moreover, under monitored cultivation it is possible to obtain substances that are not synthesized by plants in nature. An isolated culture of Ajuga genevensis with high growth activity and regeneration ability was obtained using MS nutrient medium. The agar-diffusion method showed that aqueous extracts of the callus culture exhibited high antimicrobial activity towards various gram-positive (Bacillus subtilis A1WT; B. mesentericus WDCM 1873; Staphylococcus aureus WDCM 5233; Staph. citreus WT) and gram-negative (Escherichia coli WKPM M-17; Salmonella typhimurium TA 100) microorganisms. The broth dilution method revealed that the minimal and half-maximal inhibitory concentrations against E. coli corresponded to 70 μg/mL and 140 μg/mL of the extract, respectively. According to photochemiluminescent analysis, callus tissue extracts of leaf and root origin showed higher antioxidant activity than the same quantity of A. genevensis intact plant extract. A. genevensis intact plant and callus culture extracts showed no cytotoxic effect on the K-562 suspension cell line of human chronic myeloid leukemia. GC-MS analysis showed deep differences between the qualitative and quantitative composition of callus culture and intact plant extracts. Hexacosane (11.17%); n-hexadecanoic acid (9.33%); and 2-methoxy-4-vinylphenol (4.28%) were the main components of intact plant extracts. 
10-Methylnonadecane (57.0%); methoxyacetic acid, 2-tetradecyl ester (17.75%); and 1-bromopentadecane (14.55%) were the main components of A. genevensis callus culture extracts. The data obtained indicate that callus culture of A. genevensis can be used as an alternative source of biologically active substances. Keywords: Ajuga genevensis, antibacterial activity, antioxidant activity, callus cultures
Procedia PDF Downloads 298
436 Application of the Building Information Modeling Planning Approach to the Factory Planning
Authors: Peggy Näser
Abstract:
Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which depends on the preceding phase and makes use of particular methods and tools, extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools – including simulation and 3D visualisation – integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. Digital factory planning has already become established in factory planning. The application of Building Information Modeling has not yet been established in factory planning; it has been used predominantly in the planning of public buildings. Furthermore, this concept is limited to the planning of the buildings and does not include the planning of the factory's equipment (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working, in which the information and data relevant to a building's lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of the building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the digital factory, are based on the use of a comprehensive data model. Therefore, it is necessary to examine how the Building Information Modeling approach can be extended in the context of factory planning in such a way that equipment planning, as well as building planning, can be integrated into a common digital model. 
For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between planners of different fields; the legal perspective, concerning legal certainty in each country; and the quality perspective, defining the quality criteria by which the planning will be evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach is developed, in particular for the integrated planning of equipment and buildings and for continuous digital planning. For this purpose, the individual factory planning phases are detailed with respect to the integration of the BIM approach. A comprehensive software concept for the tooling is presented. In addition, the prerequisites required for this integrated planning are presented. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning and the data quality are improved, and costly errors are avoided during implementation. Keywords: building information modeling, digital factory, digital planning, factory planning
Procedia PDF Downloads 269
435 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers
Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang
Abstract:
Strawberry powdery mildew (PM) is a serious disease that has a significant impact on strawberry production. Field scouting is still the major way to find PM disease, but it is not only labor intensive; it also makes monitoring disease severity almost impossible. To reduce the loss caused by PM disease and achieve faster automatic detection, this paper proposes an approach for detection of the disease based on image texture, classified with support vector machines (SVMs) and k-nearest neighbors (kNNs). The methodology of the proposed study is based on image processing, composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction and classification. Two strawberry fields were used in this study. Images of healthy leaves and of leaves infected with PM (Sphaerotheca macularis) were captured under artificial cloud lighting conditions. Colour thresholding was utilized to segment all images before textural analysis. The colour co-occurrence matrix (CCM) was introduced for extraction of textural features. Forty textural features, related to physiological parameters of the leaves, were extracted from the CCMs of National Television System Committee (NTSC) luminance and hue, saturation and intensity (HSI) images. The normalized feature data were utilized for training and validation using the developed classifiers. The classifiers were evaluated with internal, external and cross-validations, and the best classifier was selected based on performance and accuracy. Experimental results suggested that the SVM classifier showed 98.33%, 85.33%, 87.33%, 93.33% and 95.0% accuracy on internal, external-I, external-II, 4-fold cross and 5-fold cross-validation, respectively, whereas the kNN classifier yielded 90.0%, 72.00%, 74.66%, 89.33% and 90.3% classification accuracy, respectively. 
The outcome of this study demonstrated that SVMs classified PM disease with the highest overall accuracy of 91.86% and 1.1211 seconds of processing time. Therefore, the overall results indicate that the proposed approach can significantly support accurate and automatic identification and recognition of strawberry PM disease with the SVM classifier. Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors
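The colour co-occurrence approach described above is built on gray-level co-occurrence statistics computed per colour channel. A minimal numpy sketch of the core step (a simplified illustration with two classic features, not the study's forty-feature implementation):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized so its entries sum to 1. `image` holds integer levels
    in [0, levels)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Two classic co-occurrence features: contrast and energy."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)   # high for rough texture
    energy = np.sum(p ** 2)               # high for uniform texture
    return contrast, energy
```

Features like these, computed per channel of the NTSC/HSI representation, would form the vectors fed to the SVM and kNN classifiers.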
Procedia PDF Downloads 122
434 Concepts of Creation and Destruction as Cognitive Instruments in World View Study
Authors: Perizat Balkhimbekova
Abstract:
Evolutionary changes in the cognitive world view taking place in the last decades are accompanied by changes in the perception of key concepts related to a certain lingua-cultural sphere. Such concepts also reflect a person's attitude to essential processes in the sphere of concepts, e.g. opposite operations like creation and destruction. These changes in people's life and thinking are displayed in a language world view. In order to uncover the content of mental structures and concepts, we should use language means as observable results of people's cognitive activity. The semantics of words, free phrases and idioms should be considered an authoritative source of information concerning concepts. The regularized set of concepts in people's consciousness forms the sphere of concepts. Cognitive linguistics widely discusses the sphere of concepts as its crucial category, defining it as the field of knowledge which is made of concepts. It is considered that a sphere of concepts comprises various types of association and forms conceptual fields. As material for the given research, data from the Russian National Corpus and the British National Corpus were used. It is necessary to point out that data provided by computational studies are intrinsic and verifiable, so we used them in order to obtain reliable results. The study procedure was based on techniques such as extracting contexts containing the concepts of creation/destruction from the Russian National Corpus (RNC) and the British National Corpus (BNC); analyzing and interpreting those contexts on the basis of a cognitive approach; and finding correspondences between the given concepts in the Russian and English world views. The key problem of our study is to find the correspondence between elements of the world view represented by opposite concepts such as creation and destruction. Findings: The concept of "destruction" indicates a process which leads to full or partial destruction of an object. 
In other words, it is a loss of the object's primary essence: structure, properties, distinctive signs and initial integrity. The concept of "creation", on the contrary, comprises positive characteristics and represents activity aimed at the improvement of a certain object, at the creation of ideal models of the world. On the other hand, destruction is represented much more widely in the RNC than creation (1254 cases of the first concept compared to 192 cases of the second). Our hypothesis consists in the antinomy represented by the aforementioned concepts. Being opposite both in respect of semantics and pragmatics, and from the point of view of axiology, they are at the same time complementary and interrelated concepts. Keywords: creation, destruction, concept, world view
Procedia PDF Downloads 346
433 The Role of Dialogue in Shared Leadership and Team Innovative Behavior Relationship
Authors: Ander Pomposo
Abstract:
Purpose: The aim of this study was to investigate the impact that dialogue has on the relationship between shared leadership and innovative behavior, and the importance of dialogue in innovation. This study contributes to the literature by providing theorists and researchers a better understanding of how to move forward in the study of moderator variables in the relationship between shared leadership and team outcomes such as innovation. Methodology: A systematic review of the literature, originally adopted from the medical sciences but also used in management and leadership studies, was conducted to synthesize research in a systematic, transparent and reproducible manner. A final sample of 48 empirical studies was scientifically synthesized. Findings: Shared leadership gives a better solution to team management challenges and goes beyond the classical, hierarchical, or vertical leadership models based on the individual-leader approach. One of the outcomes that emerges from shared leadership is team innovative behavior. To intensify the relationship between shared leadership and team innovative behavior, and to understand when it is most effective, the moderating effects of other variables in this relationship should be examined. This synthesis of the empirical studies revealed that dialogue is a moderator variable that has an impact on the relationship between shared leadership and team innovative behavior when leadership is understood as a relational process. Dialogue is an activity between at least two speech partners trying to fulfill a collective goal, and is a way of living open to people and ideas through interaction. Dialogue is productive when team members engage relationally with one another. When this happens, participants are more likely to take responsibility for the tasks they are involved in and for the relationships they have with others. 
In this relational engagement, participants are likely to establish high-quality connections with a high degree of generativity. This study suggests that organizations should facilitate dialogue among team members in shared leadership, which has a positive impact on innovation and offers a more adaptive framework for the leadership that is needed in teams working on complex tasks. These results uncover the necessity of more research on the role that dialogue plays in contributing to important organizational outcomes such as innovation. Case studies describing both best practices and obstacles of dialogue in team innovative behavior are necessary to gain a more detailed insight into the field. It will be interesting to see how all these fields of research evolve and how dialogue practices are implemented in organizations that use team-based structures to deal with uncertainty, fast-changing environments, globalization and increasingly complex work. Keywords: dialogue, innovation, leadership, shared leadership, team innovative behavior
Procedia PDF Downloads 183
432 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials
Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik
Abstract:
Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage systems. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing significant promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, wide electrochemical stability, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily due to the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, influenced by numerous dopant options, requires a systematic method to find the most effective combinations. This study highlights the utility of machine learning (ML) techniques in the materials discovery process to navigate the complex range of factors in garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase, where meticulous error identification, correction, outlier removal, and garnet-specific feature engineering were conducted. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort resulted in a significant improvement in the predictive performance of the machine learning model. Originally starting at an accuracy of 0.32, the model underwent substantial refinement, ultimately achieving an accuracy of 0.88. 
This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research. Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes
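The outlier-removal step of the data refinement described above can be illustrated with a simple filter. A minimal sketch assuming Tukey's IQR rule (a common choice; the criterion actually used in the study is not specified) applied to a hypothetical conductivity column:

```python
import numpy as np

def iqr_mask(values, k=1.5):
    """Boolean mask of entries kept by Tukey's IQR rule (True = keep)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (values >= lo) & (values <= hi)

# Hypothetical column of log10 ionic conductivities (S/cm) for LLZO-type
# compositions, with two deliberate entry errors (4.0 and -9.9)
cond = np.array([-3.1, -3.4, -2.9, -3.0, -3.2, -3.3, 4.0, -9.9])
kept = iqr_mask(cond)
clean = cond[kept]   # rows surviving refinement, passed on to the ML model
```

Domain review would still be needed on top of such a rule: a value can be statistically extreme yet physically genuine, which is why the study pairs automated checks with garnet-specific expertise.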
Procedia PDF Downloads 61
431 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves
Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis
Abstract:
Rheological characterization of complex liquids like polymer solutions is of significant scientific interest to researchers in many fields, such as biology, the food industry and chemistry. In order to establish master curves (elastic moduli vs frequency) which can give information about microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of the sample, temperature is modified over a very large range, leading to equivalent frequency modifications by applying the time-temperature superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is the reason why the development of broadband spectroscopic methods around room temperature has become a major concern. In the literature, many solutions have been proposed but, to our knowledge, there is no experimental bench giving the whole rheological characterization for frequencies from a few Hz to many MHz. Consequently, our goal is to investigate, in a nondestructive way and over a very broad frequency band (a few Hz to hundreds of MHz), rheological properties using longitudinal ultrasonic waves (L waves), a unique experimental bench and a specific container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear and bulk) and the complex elastic moduli (M*, G* and K*), respectively the longitudinal, shear and bulk moduli. We have decided to use only L waves conditioned in two ways: bulk L waves in the liquid or guided L waves in the test tube walls. In this paper, we will present first results for very low frequencies using ultrasonic tracking of a falling ball in the test tube. This leads to the estimation of shear viscosity from a few mPa·s to a few Pa·s. Corrections due to the small dimensions of the tube will be applied and discussed with regard to the size of the falling ball. 
Then the use of bulk L-wave propagation in the liquid and the development of specific signal processing to assess longitudinal velocity and attenuation will lead to the evaluation of longitudinal viscosity in the MHz frequency range. Finally, first results concerning the propagation, generation and processing of guided compressional waves in the test tube walls will be discussed. All these approaches and results will be compared to standard methods available and already validated in our lab. Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities
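Shear viscosity from the falling-ball measurement follows Stokes' law, with a wall correction for the finite tube radius. A minimal sketch assuming the Ladenburg correction factor (one standard choice; the exact correction applied in this work is not specified):

```python
def falling_ball_viscosity(r, rho_s, rho_f, v, R=None, g=9.81):
    """Shear viscosity (Pa·s) from Stokes' law at terminal velocity.

    r: ball radius (m); rho_s, rho_f: ball and fluid densities (kg/m^3);
    v: measured terminal velocity (m/s); R: tube inner radius (m).
    If R is given, the Ladenburg wall correction multiplies the measured
    velocity by (1 + 2.4 r / R) before applying Stokes' law, since the
    tube walls slow the ball relative to an unbounded fluid.
    """
    if R is not None:
        v = v * (1 + 2.4 * r / R)
    return 2 * r**2 * (rho_s - rho_f) * g / (9 * v)
```

With the ultrasonic tracking providing v, this relation covers roughly the mPa·s to Pa·s range quoted above, provided the ball's Reynolds number stays small enough for Stokes flow.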
Procedia PDF Downloads 265
430 Nitrogen Fixation of Soybean Approaches for Enhancing under Saline and Water Stress Conditions
Authors: Ayman El Sabagh, AbdElhamid Omar, Dekoum Assaha, Khair Mohammad Youldash, Akihiro Ueda, Celaleddin Barutçular, Hirofumi Saneoka
Abstract:
Drought and salinity stress are worldwide problems, seriously constraining global crop production, and soybean is susceptible to yield loss from both. Different approaches have been suggested to address these issues. Osmoprotectants play an important role in protecting plants from various environmental stresses. Moreover, organic fertilization has several beneficial effects on agricultural fields. Presently, efforts to maximize nitrogen fixation in soybean are critical because of the widespread increase in soil degradation in Egypt. Therefore, greenhouse research was conducted at the plant nutritional physiology laboratory, Hiroshima University, Japan, to assess the impact of exogenous osmoregulators and compost application in alleviating the adverse effects of salinity and water stress on soybean. Treatments included (i) water stress treatments (soil moisture levels of 100%, 75%, and 50% of field water-holding capacity); (ii) salinity concentrations (0 and 15 mM), applied at the fully developed trifoliolate leaf node stage (V1); (iii) compost treatments (0 and 24 t ha-1); and (iv) exogenous proline and glycine betaine (0 mM and 25 mM each), applied at two growth stages (V1 and R1). Seeds of the soybean cultivar Giza 111 were sown in wooden basins (length 10 m, width 50 cm, height 50 cm and depth 350 cm) containing a soil mixture of granite regosol soil and perlite (2:1 v/v). Nitrogen-fixing activity was estimated using gas chromatography, and all measurements were made in three replicates. The results showed that water deficit and salinity stress reduced biological nitrogen fixation and specific nodule activity compared to normal irrigation conditions. 
Exogenous osmoprotectants improved biological nitrogen fixation and specific nodule activity, and the application of compost likewise improved both relative to stress conditions. The combined application of compost fertilizer and exogenous osmoprotectants was more effective in alleviating the adverse effects of stress and improving biological nitrogen fixation and specific nodule activity of soybean. Keywords: abiotic stress, biological nitrogen fixation, compost, osmoprotectants, specific nodule activity, soybean
Procedia PDF Downloads 309
429 Reasons for Lack of an Ideal Disinfectant after Dental Treatments
Authors: Ilma Robo, Saimir Heta, Rialda Xhizdari, Kers Kapaj
Abstract:
Background: The ideal disinfectant for surfaces, instruments, air and skin, both in dentistry and in the fields of medicine, does not exist. This is for the sole reason that all the characteristics of an ideal disinfectant cannot be combined in one: if one of them is emphasized, it will conflict with another. A disinfectant must be stable and unaffected by changes in the environmental conditions where it is stored, which means that it should not be affected by an increase in temperature or humidity. Both of these elements contradict another element of the ideal disinfectant, as they disrupt the solubility ratio of the disinfectant's base substance to the diluent. Material and methods: The study aims to extract the constant of each disinfectant/antiseptic used in dental disinfection protocols, together with the side effects on the surface of the skin or mucosa where it is applied as an antiseptic. Finally, attempts were made to draw conclusions about the best possible combination of disinfectants after a dental procedure, based on data extracted from the basic literature of the pharmacology module in the education of a dentist, checked against data published in the literature. Results: The sensitivity of disinfectants to changes in the atmospheric conditions of the storage environment is a known fact. Care regarding this element is always accompanied by advice on the application of the specific disinfectant, in order to obtain the desired clinical result. The constants of disinfectants, classified on the basis of the collected data, are: alcohols 70-120, glycols 0.2, aldehydes 30-200, phenols 15-60, acids 100, povidone-iodine halogens 5-75, hypochlorous acid halogens 150, sodium hypochlorite halogens 30-35, oxidants 18-60, metals 0.2-10. 
The halogens should be singled out, since specific results were obtained for the representatives of this class, and it is these representatives that find scope for clinical application in dentistry. Conclusions: The search for the "ideal", even when its defining criteria are established, is an ongoing search without definitive results, not only for disinfectants but for any medication or pharmaceutical product. In this mine of data in the published literature, where something fixed and calculable exists, such as the specific constant for disinfectants, the search for the ideal becomes more concrete. During disinfection protocols, different disinfectants are applied since the fields of action differ, including water, air, aspiration devices and tools, with disinfectants used in full accordance with the manufacturers' indications. Keywords: disinfectant, constant, ideal, side effects
Procedia PDF Downloads 71
428 Approaches to Valuing Ecosystem Services in Agroecosystems From the Perspectives of Ecological Economics and Agroecology
Authors: Sandra Cecilia Bautista-Rodríguez, Vladimir Melgarejo
Abstract:
Climate change, loss of ecosystems, increasing poverty, increasing marginalization of rural communities and declining food security are global issues that require urgent attention. In this regard, a great deal of research has focused on how agroecosystems respond to these challenges, as they provide ecosystem services (ES) that lead to higher levels of resilience, adaptation, productivity and self-sufficiency. Hence, the valuation of ecosystem services plays an important role in decision-making for the design and management of agroecosystems. This paper aims to define the link between ecosystem service valuation methods and ES value dimensions in agroecosystems from the perspectives of ecological economics and agroecology. The method used to identify valuation methodologies was a literature review in the fields of agroecology and ecological economics, based on a strategy of information search and classification. The conceptual framework of the work is based on the multidimensionality of value, considering the social, ecological, political, technological and economic dimensions. Likewise, the valuation process requires consideration of the ecosystem functions associated with ES, such as regulation, habitat, production and information functions. In this way, valuation methods for ES in agroecosystems can integrate more than one value dimension and at least one ecosystem function. The results allow correlating the ecosystem functions with the ecosystem services valued, the specific tools or models used, and the dimensions and valuation methods. The main methodologies identified are: (1) multi-criteria valuation; (2) deliberative-consultative valuation; (3) valuation based on system dynamics modeling; (4) valuation through energy or biophysical balances; (5) valuation through fuzzy logic modeling; and (6) valuation based on agent-based modeling. 
Among the main conclusions, it is highlighted that the system dynamics modeling approach has high potential for development in valuation processes, due to its ability to integrate other methods, especially multi-criteria valuation and energy and biophysical balances, and to describe through causal cycles the interrelationships between ecosystem services and the dimensions of value in agroecosystems, thus showing the relationships between the value of ecosystem services and the welfare of communities. As for methodological challenges, it is relevant to achieve the integration of the tools and models provided by different methods, incorporating the characteristics of a complex system such as the agroecosystem, in order to reduce the limitations of ES valuation processes. Keywords: ecological economics, agroecosystems, ecosystem services, valuation of ecosystem services
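System dynamics valuation couples stocks and flows through causal loops. A minimal single-stock sketch (a deliberately simplified illustration of the modeling style, not a model from the reviewed studies):

```python
def simulate_stock(initial, inflow, outflow_rate, dt=0.1, steps=200):
    """Euler integration of one stock with a constant inflow and a
    proportional outflow: d(stock)/dt = inflow - outflow_rate * stock.

    The stock could stand for, e.g., soil organic matter sustaining a
    regulation service; the flows close one causal loop around it.
    """
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += (inflow - outflow_rate * stock) * dt
        history.append(stock)
    return history
```

Real valuation models chain many such stocks, so that a biophysical balance (an inflow) or a multi-criteria weight (a rate) can be plugged into the same causal structure, which is the integrative ability the conclusions point to.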
Procedia PDF Downloads 125
427 The ‘Accompanying Spouse Dependent Visa Status’: Challenges and Constraints Faced by Zimbabwean Immigrant Women in Integration into South Africa’s Formal Labour Market
Authors: Rujeko Samanthia Chimukuche
Abstract:
Introduction: Transboundary migration at both regional and continental levels has become a defining feature of the 21st century. The recent global migration crisis due to economic strife and war brings back to the fore an age-old problem with fresh challenges. Migration and forced displacement are issues that require long-term solutions. In South Africa, for example, whilst much attention has been paid to xenophobic attacks and other issues at the nexus of immigrant and indigenous communities, limited focus has been placed on integration, specifically the formal labour integration of immigrant communities and the gender inequalities that are prevalent. Despite noble efforts by South Africa, which hosts large numbers of immigrants, several challenges arise in integrating migrants into society, as it is often difficult to harmonize the interests of indigenous communities with those of foreign nationals. This research study aimed to fill these gaps by analyzing how stringent immigration and visa regulations prevent skilled migrant women spouses from employment, which often results in several societal vices, including domestic abuse and minimal or no access to important services such as healthcare, education and social welfare. Methods: Using a qualitative approach, the study analyzed South African migration and labour policies in terms of mainstreaming the gender needs of skilled migrant women. Secondly, the study highlighted the migratory experiences and constraints of skilled Zimbabwean women migrant spouses in South African labour integration; the experiences of these women show the gender inequalities of the migratory policies. Thirdly, the opportunities and challenges of Zimbabwean women in integration into the South African formal labour market were explored. Lastly, practical interventions to support the integration of skilled migrant women spouses into South Africa's formal labour market were suggested. 
Findings: Key findings show that gender dynamics are pivotal in migration patterns and in the mainstreaming of gender in migration policies. This study therefore contributes to the fields of gender and migration by examining ways in which the gender rights of skilled migrant women spouses can be incorporated in labour integration policy-making. Keywords: accompanying spouse visa, gender-migration, labour-integration, Zimbabwean women
Procedia PDF Downloads 120
426 Photoinduced Energy and Charge Transfer in InP Quantum Dots-Polymer/Metal Composites for Optoelectronic Devices
Authors: Akanksha Singh, Mahesh Kumar, Shailesh N. Sharma
Abstract:
Semiconductor quantum dots (QDs) such as CdSe, CdS and InP have gained significant interest in recent years due to their application in various fields such as LEDs, solar cells, lasers and biological markers. The interesting feature of QDs is their tunable band gap: the size of the QDs can be easily varied by varying the synthesis parameters, which changes the band gap. One of the limitations of II-VI semiconductor QDs concerns biological applications: the use of cadmium makes them unsuitable there. III-V QDs such as InP overcome this problem, as they are structurally robust because of their covalent bonds, which do not allow ions to leak. Also, InP QDs have large Bohr radii, which widens the window for the quantum confinement effect. The synthesis of InP QDs is difficult and time consuming. The authors have synthesized InP using a novel, quick synthesis method which utilizes trioctylphosphine as a source of phosphorus. In this work, the authors have made InP composites with P3HT (poly(3-hexylthiophene-2,5-diyl)) polymer (an organic-inorganic hybrid material) and with gold nanoparticles (metal-semiconductor composites). InP-P3HT shows the FRET phenomenon, whereas InP-Au shows a charge transfer mechanism. The synthesized InP QDs have an absorption band at 397 nm and a PL peak position at 491 nm. The band gap of the InP QDs is 2.46 eV, as compared to the bulk band gap of InP, i.e. 1.35 eV. The average size of the QDs is around 3-4 nm. In order to protect the InP core, a shell of a wide-band-gap material, i.e. ZnS, is coated on top of the InP core. InP-P3HT composites were made in order to study the charge transfer/energy transfer phenomenon between them. On adding aliquots of P3HT to the InP QD solution, the P3HT PL increases, which can be attributed to the dominance of Förster energy transfer from InP QDs (donor) to the P3HT polymer (acceptor). There is a significant spectral overlap between the PL spectrum of the InP QDs and the absorbance spectrum of P3HT. 
But in the case of InP-Au nanocomposites, significant charge transfer was seen from the InP QDs to the Au NPs. When aliquots of Au NPs were added to the InP QDs, a decrease in the PL of the InP QDs was observed, due to charge transfer from the InP QDs to the Au NPs. In metal-semiconductor composites, the enhancement or quenching of the QD emission depends on the size of the QD and the distance between the QD and the metal NP. The two composites thus exhibit different donor-acceptor phenomena and can be utilized for two different applications: the InP-P3HT composite can be used for LED devices, owing to the enhancement in PL emission (FRET), while InP-Au can be used efficiently for photovoltaic applications, owing to the successful charge transfer between the InP QDs and Au NPs.
Keywords: charge transfer, FRET, gold nanoparticles, InP quantum dots
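The strong distance dependence of the Förster energy transfer invoked in this abstract can be illustrated with a minimal sketch. The standard FRET efficiency formula E = 1 / (1 + (r/R0)^6) is assumed here; the Förster radius R0 = 5.0 nm is a hypothetical value for illustration, not one reported for the InP-P3HT pair.

```python
# Sketch: Förster (FRET) efficiency vs donor-acceptor distance.
# E = 1 / (1 + (r / R0)**6). R0 = 5.0 nm is a hypothetical value,
# not a measured parameter of the InP-P3HT system.
def fret_efficiency(r_nm, r0_nm=5.0):
    """Fraction of donor excitations transferred to the acceptor."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Efficiency is near unity well inside R0, exactly 0.5 at r = R0,
# and falls off steeply beyond ~1.5 R0.
print(round(fret_efficiency(5.0), 3))
```

The r^6 fall-off is why FRET dominates only when donor and acceptor are within a few nanometres, consistent with the composite geometry the abstract describes.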
Procedia PDF Downloads 148
425 Measuring Self-Regulation and Self-Direction in Flipped Classroom Learning
Authors: S. A. N. Danushka, T. A. Weerasinghe
Abstract:
The diverse necessities of instruction can be addressed effectively with new dimensions of ICT-integrated learning such as blended learning, a combination of face-to-face and online instruction that ensures greater flexibility in student learning and congruity of course delivery. As blended learning has become the 'new normal' in education, many experimental and quasi-experimental research studies provide ample evidence of its successful implementation in many fields of study, but it is hard to say whether blended learning would work similarly in the delivery of technology-teacher development programmes (TTDPs). The present study addresses this research uncertainty, and having considered existing research approaches, the study methodology was set up to identify efficient instructional strategies for flipped classroom learning in TTDPs. In a quasi-experimental pre-test and post-test design with a mixed-methods research approach, the major study objective was tested with two heterogeneous samples (N = 135) identified in a virtual learning environment in a Sri Lankan university. A non-randomized, informal 'before-and-after without control group' design was employed, and two data collection methods, identical pre-tests and post-tests and Likert-scale questionnaires, were used in the study. Two instructional strategies, self-directed learning (SDL) and self-regulated learning (SRL), were tested in an appropriate instructional framework with the two heterogeneous samples (pre-service and in-service teachers). Data were statistically analyzed, and the more efficient instructional strategy was identified via t-tests, ANOVA, and ANCOVA. The effectiveness of the two instructional strategy implementation models was assessed via multiple linear regression analysis. 
ANOVA (p < 0.05) shows that age, prior educational qualifications, gender, and work experience do not impact the learning achievements of the two diverse groups of learners even when the instructional strategy is changed. ANCOVA (p < 0.05) shows that SDL is more efficient than SRL for the two diverse groups of technology-teachers. Multiple linear regression (p < 0.05) shows that the staged self-directed learning (SSDL) model and the four-phased model of motivated self-regulated learning (COPES model) are efficient in the delivery of course content in flipped classroom learning.
Keywords: COPES model, flipped classroom learning, self-directed learning, self-regulated learning, SSDL model
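The multiple linear regression step this abstract relies on can be sketched with ordinary least squares. The data below are synthetic and the variable names (strategy coded 0 = SRL, 1 = SDL, plus a pre-test covariate) are illustrative assumptions, not the study's actual dataset.

```python
import numpy as np

# Sketch: regressing post-test scores on a strategy indicator and a
# pre-test covariate, as in the study's multiple linear regression.
# All data are synthetic; the simulated SDL effect is ~5 points.
rng = np.random.default_rng(0)
n = 135
strategy = rng.integers(0, 2, n)            # 0 = SRL, 1 = SDL (illustrative)
pre_test = rng.normal(50.0, 10.0, n)
post_test = 0.8 * pre_test + 5.0 * strategy + rng.normal(0.0, 3.0, n)

X = np.column_stack([np.ones(n), pre_test, strategy])   # design matrix
beta, *_ = np.linalg.lstsq(X, post_test, rcond=None)
intercept, b_pre, b_strategy = beta
print(round(b_strategy, 1))   # recovered strategy effect, close to 5
```

With N = 135 the fitted strategy coefficient recovers the simulated effect closely, which is the logic behind using the regression coefficient as the measure of a strategy's contribution to learning gains.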
Procedia PDF Downloads 200
424 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. Against this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. 
In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms automatically, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions of this presentation address the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
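The kind of matching described for Norna, searching an index for stems and inflectional endings, can be sketched as a toy dictionary-based lemmatiser. The mini-lexicon and ending list below are illustrative assumptions, not data from Nerthus or the Dictionary of Old English Corpus.

```python
# Sketch of dictionary-based lemmatisation of the kind Norna performs:
# strip a known inflectional ending, then look the stem up in a lemma
# index. Lexicon and endings are toy examples, not from Nerthus.
LEMMA_INDEX = {"luf": "lufian", "cyning": "cyning", "stan": "stan"}
ENDINGS = ["um", "as", "es", "ast", "ode", "e", "a"]

def lemmatise(form):
    """Return (lemma, matched_ending), or (None, None) if no match."""
    if form in LEMMA_INDEX:
        return LEMMA_INDEX[form], ""
    # Try longer endings first so "-ode" wins over "-e".
    for ending in sorted(ENDINGS, key=len, reverse=True):
        stem = form[: len(form) - len(ending)]
        if form.endswith(ending) and stem in LEMMA_INDEX:
            return LEMMA_INDEX[stem], ending
    return None, None

print(lemmatise("cyningas"))   # ('cyning', 'as')
print(lemmatise("lufode"))     # ('lufian', 'ode')
```

Forms whose stem is absent from the index fall through to (None, None), which corresponds to the roughly 20% of textual forms the abstract reports Norna cannot yet lemmatise automatically.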
Procedia PDF Downloads 213
423 The Impact of Dog-Assisted Wellbeing Intervention on Student Motivation and Affective Engagement in the Primary and Secondary School Setting
Authors: Yvonne Howard
Abstract:
This project, currently under development, is grounded in ongoing learning processes, including a thorough literature review and practical experience gained as a deputy head in a school. Daily experience with students engaging in animal-assisted interventions and with the school therapy dog forms a strong base for this research. The primary objective of this research is to comprehensively explore the impact of dog-assisted well-being interventions on student motivation and affective engagement within primary and secondary school settings. The educational domain currently faces a significant challenge due to the lack of substantial research in this area: despite the perceived positive outcomes of such interventions being acknowledged and shared in various settings, the evidence supporting their effectiveness in an educational context remains limited. This study aims to bridge that gap and shed light on the potential benefits of dog-assisted well-being interventions in promoting student motivation and affective engagement. The significance of this topic lies in the recognition that education is not solely confined to academic achievement but encompasses the overall well-being and emotional development of students. Over recent years, there has been growing interest in animal-assisted interventions, particularly in healthcare settings, and this interest has extended to the educational context. While the effectiveness of these interventions has been explored in other fields, the educational sector lacks comprehensive research in this regard. Through a systematic and thorough research methodology, this study seeks to contribute valuable empirical data to the field, providing evidence to support informed decision-making on the implementation of dog-assisted well-being interventions in schools. This research will utilize a mixed-methods design, combining qualitative and quantitative measures to address the research objectives. 
The quantitative phase will include surveys and standardized scales to measure student motivation and affective engagement, while the qualitative phase will involve interviews and observations to gain in-depth insights from students, teachers, and other stakeholders. The findings will contribute evidence-based insights, best practices, and practical guidelines for schools seeking to incorporate dog-assisted interventions, ultimately enhancing student well-being and improving educational outcomes.
Keywords: therapy dog, wellbeing, engagement, motivation, AAI, intervention, school
Procedia PDF Downloads 79
422 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices
Authors: Alena Kulikova, Tatjana Kanonire
Abstract:
Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been shown repeatedly that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical, and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality intelligence measurement tools. It is therefore not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, the critical review in [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-and-pencil testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the data obtained [7] and increases time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully capture a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics allow these limitations of existing tools to be avoided. 
However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made, straightforward solutions and requires additional research. In our presentation we discuss the strengths and weaknesses of current approaches to intelligence measurement and highlight 'points of growth' for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design for creating and validating an original Russian comprehensive computer test for measuring intellectual development in school-age children will be presented.
Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing
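The computerized adaptive testing (CAT) named in the keywords replaces the linear all-items-for-everyone structure with item selection at the examinee's current ability estimate. A common criterion is maximum Fisher information under a 2PL item response model; the item parameters below are illustrative values, not items from the test under development.

```python
import math

# Sketch of CAT item selection: choose the item with maximum Fisher
# information I(theta) = a^2 * P * (1 - P) under a 2PL model.
# Item parameters (discrimination a, difficulty b) are illustrative.
ITEMS = [
    ("easy", 1.2, -1.5),
    ("medium", 1.5, 0.0),
    ("hard", 1.0, 1.8),
]

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def best_item(theta):
    """Name of the item maximising information at theta."""
    def info(item):
        _, a, b = item
        p = p_correct(theta, a, b)
        return a * a * p * (1.0 - p)
    return max(ITEMS, key=info)[0]

print(best_item(-1.4))   # low ability -> easy item
print(best_item(0.1))    # average ability -> medium item
```

Because each examinee sees mostly items near their own ability level, a CAT can match the reliability of a linear test with far fewer items, which is the time-cost argument the abstract makes against the classic linear structure.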
Procedia PDF Downloads 80
421 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture; it is used to extract the lung mask from the chest X-ray image, and is trained on 8,577 images and validated on a 20% validation split. Both models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. 
The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
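The IoU figure reported for the U-Net masks is the standard intersection-over-union of two binary masks, which can be sketched directly. The tiny 3x3 masks below are illustrative, not samples from the COVID-19 Radiography Database.

```python
import numpy as np

# Sketch of the IoU (intersection over union) metric used to evaluate
# the U-Net lung masks: overlap of predicted vs. ground-truth pixels.
# The toy 3x3 masks are illustrative only.
def iou(pred, target):
    """IoU of two binary masks; 1.0 means a perfect match."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(iou(pred, truth))   # 3 shared pixels / 4 total = 0.75
```

Unlike pixel accuracy, IoU ignores the (usually dominant) correctly predicted background, so the reported 0.928 is a stricter indicator of mask quality than the 97.31% accuracy.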
Procedia PDF Downloads 131
420 College Faculty Perceptions of Instructional Strategies That Are Effective for Students with Dyslexia
Authors: Samantha R. Dutra
Abstract:
There are many issues that students face in college, such as academic struggles, financial issues, family responsibilities, and vocational problems. Students with dyslexia struggle with these problems even more than other students do. This qualitative study examines faculty perceptions of instructing students with dyslexia. The study is important to the human services and post-secondary education fields because of the increase in disabled students enrolled in college, and because of the reported bias faced by students with dyslexia and their academic failure. When students with learning disabilities (LDs) such as dyslexia experience bias, discrimination, and isolation, they are more apt not to seek accommodations, to lack communication with faculty, and to drop out or fail. College students with dyslexia often take longer to complete their post-secondary education and are more likely to withdraw or drop out without earning a degree. Faculty attitudes and academic cultures are major barriers to the success and use of accommodations, as well as of the modified instruction for students with disabilities that leads to student success. Faculty members are often uneducated or misinformed regarding students with dyslexia. More importantly, many faculty members are unaware of the ethical and legal implications they face in accommodating students with dyslexia. Instructor expectations can generally be defined as instructors' understanding and perceptions of students regarding their academic success. Skewed instructor expectations can affect how instructors interact with their students and can also affect student success. This is true for students with dyslexia in that instructors may hold lower, biased expectations of these students, which directly affect their academic successes and failures. It is vital to understand how instructor attitudes affect the academic achievement of dyslexic students. 
This study will examine faculty perceptions of instructing students with dyslexia and faculty attitudes towards accommodations and institutional support. The literature concludes that students with dyslexia have many deficits and several learning needs. Furthermore, these are the students with the highest dropout and failure rates, as well as the lowest retention rates. Disabled students report many reasons why accommodations and supports simply do not help, although some research suggests that accommodations do help students and show positive outcomes. Many improvements need to be made among student support service personnel, faculty, and administrators regarding access and adequate support for students with dyslexia. As the research also suggests, providing more efficient and effective accommodations may increase positive student and faculty attitudes in college and may improve student outcomes overall.
Keywords: dyslexia, faculty perception, higher education, learning disability
Procedia PDF Downloads 139