Search results for: response spectrum method

14327 Large Eddy Simulations for Flow Blurring Twin-Fluid Atomization Concept Using Volume of Fluid Method

Authors: Raju Murugan, Pankaj S. Kolhe

Abstract:

The present study focuses on the numerical simulation of the Flow Blurring (FB) twin-fluid injection concept proposed by Ganan-Calvo, which involves back-flow atomization based on global bifurcation of the liquid and gas streams, thus creating two-phase flow near the injector exit. An interesting feature of the FB injector spray is the insignificant effect of variation in the atomizing air-to-liquid ratio (ALR) on the spray cone angle. Besides, FB injectors produce a nearly uniform spatial distribution of mean droplet diameter and are least susceptible to variation in the thermo-physical properties of fuels, making them a perfect candidate for fuel-flexible combustor development. The FB injector working principle has been realized through experimental flow visualization techniques only. The present study explores the potential of ANSYS Fluent based Large Eddy Simulation (LES) with the volume of fluid (VOF) method to investigate the two-phase flow just upstream of the injector dump plane and the spray quality immediately downstream of the injector dump plane. Note that water and air represent the liquid and gas phases in all simulations, and the ALR is varied by changing the air mass flow rate alone. Preliminary results capture the two-phase flow just upstream of the injector dump plane, and qualitative agreement is observed with the available experimental literature.

Keywords: flow blurring twin fluid atomization, large eddy simulation, volume of fluid, air to liquid ratio

Procedia PDF Downloads 198
14326 Improving Indoor Air Quality by Increasing Bio-Based Negative Air Ion Release

Authors: Shuye Jiang, Ali Ma, Srinivasan Ramachandran

Abstract:

Indoor air quality can be improved through traditional air purifiers. However, these may not be environmentally friendly products. Here, a bio-based method was employed to improve indoor air quality by increasing negative air ion (NAI) release from ornamental plants. A total of 60 plant species were screened by evaluating their ability to release NAIs, from which four candidates were selected for further study. All of them are from Dracaena or the fabids clade. These four candidates were then surveyed for their ability to reduce the concentration of particulate matter with a diameter of 2.5 or 10 microns (PM2.5 and PM10) in a growth chamber. High concentrations of PM2.5 and PM10 were artificially generated by burning a stick of incense for 2 minutes in the closed growth chamber (80 cm length × 80 cm width × 80 cm height), in which the PM2.5 and PM10 concentrations were generally around 500 µg/m3 and 1500 µg/m3, respectively. PM2.5 and PM10 were naturally reduced to 410 µg/m3 and 670 µg/m3, respectively, after two hours when no plants were placed inside the chamber. Interestingly, these two sizes of particulates were reduced to 170 µg/m3 and 210 µg/m3, respectively, after two hours when plants were placed in the chamber. It took 4 hours for the plants to reduce the particulate concentrations to an acceptable level of less than 55 µg/m3 for both PM2.5 and PM10. However, the PM2.5 and PM10 concentrations were still above 200 µg/m3 and 300 µg/m3, respectively, after 4 hours in the growth chamber without any plants. These results suggest the contribution of plants to particulate deposition. However, all of these data are preliminary, and the results may be updated by further studies. In addition, the role of the plants in absorbing indoor formaldehyde has also been explored, and their absorbing ability is being improved by optimizing their growth conditions and treating them with various exogenous agents. Thus, our preliminary studies provide an alternative strategy to improve indoor air quality.

Keywords: bio-based method, indoor air, negative air ion, particulate matter

Procedia PDF Downloads 153
14325 Innovation in Information Technology Services: Framework to Improve the Effectiveness and Efficiency of Information Technology Service Management Processes, Projects and Decision Support Management

Authors: Pablo Cardozo Herrera

Abstract:

In the dynamic market for Information Technology (IT) services, with high quality demands and high performance requirements at decreasing costs, it is imperative that IT companies invest organizational effort in order to increase the effectiveness of their Information Technology Service Management (ITSM) processes through the improvement of ITSM project management and through solid support for the strategic decision-making process of IT directors. In this article, the author presents an analysis of common issues of IT companies around the world, whose unmet strategic information needs cause their ITSM process and project management to fall short of the expected effectiveness and efficiency. In response to the issues raised, the author proposes a framework consisting of an innovative theoretical model of ITSM management and a technological solution aligned with the Information Technology Infrastructure Library (ITIL) good practices guidance and ISO/IEC 20000-1 requirements. The article describes research showing that the proposed framework is able to integrate, manage and coordinate, in a holistic, measurable and auditable way, all ITSM processes and projects of an IT organization and to use the resulting effectiveness assessment in the strategic decision-making process, increasing the process maturity level and improving the capacity for efficient management.

Keywords: innovation in IT services, ITSM processes, ITIL and ISO/IEC 20000-1, IT service management, IT service excellence

Procedia PDF Downloads 379
14324 A Study of the Contrasts and Cultural Commonalities of the Hazara and Uzbek Peoples of Afghanistan

Authors: Sadullah Rahmani

Abstract:

Legends, stories, beliefs and traditions represent the collective dreams, secrets and aspirations of a nation and, at the same time, the foundation of its collective memory; what generally forms the foundation of the culture of any nation has undergone changes and transformations due to the passage of time and changes in political, religious and social conditions. Afghanistan is one of the richest countries in terms of cultural diversity. This country is home to people of different languages, ethnicities and religions. The purpose of this article is to analyze the contrasts and cultural commonalities between two ethnic groups in Afghanistan, namely the Hazara and Uzbek peoples. This research was conducted with a qualitative method and a structured interview tool. The method of data analysis is content analysis. In order to explain the intercultural sensitivities of the two groups, Milton Bennett's measures of intercultural sensitivity have been used. Based on the theory of intercultural sensitivities, the development of communication is an important factor in reducing intercultural sensitivities. In this research, eight people from the Hazara and Uzbek communities were interviewed. Various factors such as customs and manners, music, language, art and lifestyle have been examined in the article. These factors can contribute to cultural differences and commonalities between the Hazara and Uzbek peoples. The results of this research show that, according to Bennett's theory, cultural sensitivities between the Hazara and Uzbek peoples of Afghanistan are lower in matters of marriage, language, economic poverty, experiences of discrimination, and work relationships, but higher in many other areas, such as education, religion and the formation of cultural communities.

Keywords: Uzbek, language, culture, religion, Hazara

Procedia PDF Downloads 20
14323 Automatic and Highly Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

Mathematical models are formulated to describe and propagate the behavior of a system. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, these models might not be applicable for the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for every system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables and not only of single variables. This enables a far more precise representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications and under strong industrial requirements. The generated models were able to simulate the given systems with a precision error of less than one percent. Moreover, the automatic identification of the correlations was able to discover previously unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
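The abstract does not include an implementation; as a minimal illustration of the core idea (a multivariate regression over a truncated series expansion, so that products of variables can be identified as correlations, not only single variables), the following hedged Python sketch uses scikit-learn on synthetic sensor data. The variable names, the degree-2 expansion, and the coefficient threshold are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic sensor data: the "true" system contains a product term x2*x3,
# which a regression over single variables alone would miss.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 3))          # three measured inputs
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.standard_normal(500)

# Truncated series expansion: all monomials of the inputs up to degree 2
# (x1, x2, x3, x1^2, x1*x2, ...), so products of variables become candidate
# correlations alongside the single variables.
expansion = PolynomialFeatures(degree=2, include_bias=False)
Z = expansion.fit_transform(X)

model = LinearRegression().fit(Z, y)

# Report only the terms with non-negligible coefficients, i.e. the
# automatically identified correlations.
for name, coef in zip(expansion.get_feature_names_out(["x1", "x2", "x3"]),
                      model.coef_):
    if abs(coef) > 1e-2:
        print(f"{name}: {coef:+.3f}")
```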

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 387
14322 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional approach is to extract expert-designed age-related features from macroscopic or radiological images, followed by classification or regression analysis. Those results have still not met the high-level requirements for practice, and a limitation of feature design and manual extraction methods is loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results show that the DL ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
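As a rough, hedged illustration of the pipeline described above (a ResNeXt backbone on 224×224 inputs trained as a regressor with mean absolute error), a PyTorch sketch might look like the following. The choice of resnext50_32x4d, the augmentation parameters, the optimizer, and the data loader interface are assumptions for illustration, not the exact configuration of the study.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentation and normalization as described: random rotation, vertical flip,
# normalize, resize to 224x224 (parameter values are illustrative assumptions).
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(15),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# ResNeXt backbone with a single regression output (predicted age in years).
model = models.resnext50_32x4d(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.L1Loss()                      # mean absolute error (MAE)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(loader, device="cuda"):
    """One training pass; `loader` yields (images, ages) batches."""
    model.to(device).train()
    total_mae, n = 0.0, 0
    for images, ages in loader:
        images, ages = images.to(device), ages.to(device).float()
        preds = model(images).squeeze(1)
        loss = criterion(preds, ages)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_mae += loss.item() * images.size(0)
        n += images.size(0)
    return total_mae / n                      # epoch-level MAE in years
```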

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 58
14321 Techno-Apocalypse in Christian End-Time Literature

Authors: Sean O'Callaghan

Abstract:

Around 2011/2012, a whole new genre of Christian religious writing began to emerge, focused on the role of advanced technologies, particularly the GRIN technologies (Genetics, Robotics, Information Technology and Nanotechnology), in bringing about a techno-apocalypse, leading to catastrophic events which would usher in the end of the world. This genre, at first niche, has now begun to grow in significance in many quarters of the more fundamentalist and biblically literalist branches of evangelicalism. It approaches science and technology with more than extreme skepticism. It accuses transhumanists of being in league with satanic powers and a satanic agenda and contextualizes transhumanist scientific progress in terms of its service to what it believes to be a soon-to-come Antichrist figure. The genre has moved beyond literature, and videos about its message can be found on YouTube and other forums, where many of the presentations get well over a quarter of a million views. This paper will examine the genre and its genesis, referring to the key figures involved in spreading its anti-intellectualist and anti-scientific message. It will demonstrate how this genre of writing is similar in many respects to other forms of apocalyptic writing which have emerged in the twentieth and twenty-first centuries, all in response to scientific and political events which are interpreted in the light of biblical prophecy. It will also set the genre in the context of a contemporary preoccupation with conspiracy theory. The conclusion of the research conducted in this field by the author is that the genre does a grave disservice to both the scientific and Christian audiences which it targets, by misrepresenting scientific advances and by creating a hermeneutic of suspicion which makes it impossible for Christians to place their trust in scientific claims.

Keywords: antichrist, catastrophic, Christian, techno-apocalypse

Procedia PDF Downloads 189
14320 Photovoltaic Performance of AgInSe2-Conjugated Polymer Hybrid Systems

Authors: Dinesh Pathak, Tomas Wagner, J. M. Nunzi

Abstract:

We investigated blends of MdPVV.PCBM.AIS for photovoltaic application. AgInSe2 (AIS) powder was synthesized by sealing and heating the stoichiometric constituents in an evacuated quartz tube ampoule. Finely ground AIS powder was dispersed in MD-MOPVV and PCBM with and without surfactant. Different concentrations of these particles were suspended in the polymer solutions and spin cast onto ITO glass. Morphological studies were performed by atomic force microscopy and optical microscopy. The blend layers were also investigated by various techniques, including XRD, UV-VIS optical spectroscopy, AFM and PL, after a series of optimizations of polymers, concentration, deposition, suspension, surfactants, etc. XRD investigation of the blend layers shows clear evidence of AIS dispersion in the polymers; diode behavior and cell parameters also confirm it. A bulk heterojunction hybrid photovoltaic device Ag/MoO3/MdPVV.PCBM.AIS/ZnO/ITO was fabricated and tested with a standard solar simulator and device characterization system. The best photovoltaic performance we obtained was an open-circuit voltage (Voc) of about 0.54 V, a photocurrent (Isc) of 117 µA and an efficiency of 0.2 percent under white light illumination with an intensity of 23 mW/cm2. Our results are encouraging for further research on fourth-generation inorganic-organic hybrid bulk heterojunction photovoltaics for energy. Further optimization of the spinning rate, thickness, solvents and deposition rates for the active layers needs to be explored to improve the photovoltaic response of these bulk heterojunction devices.
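For orientation only, the relation between the reported quantities and the power conversion efficiency, PCE = FF · Voc · Isc / (P_in · A), can be sketched as below. The fill factor and active area are not stated in the abstract; the values used here are purely illustrative assumptions, chosen so that the estimate lands near the reported 0.2 percent.

```python
# Hedged back-of-the-envelope check of power conversion efficiency (PCE).
voc = 0.54            # open-circuit voltage, V (reported)
isc = 117e-6          # short-circuit current, A (reported)
p_in_density = 23e-3  # illumination intensity, W/cm^2 (reported)
area = 0.40           # ASSUMED active area, cm^2 (not given in the abstract)
ff = 0.30             # ASSUMED fill factor (not given in the abstract)

pce = ff * voc * isc / (p_in_density * area) * 100.0
print(f"Estimated efficiency: {pce:.2f} %")   # ~0.2 % under these assumptions
```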

Keywords: thin films, photovoltaic, hybrid systems, heterojunction

Procedia PDF Downloads 260
14319 A Study on Exploring and Prioritizing Critical Risks in Construction Project Assessment

Authors: A. Swetha

Abstract:

This study aims to prioritize and explore critical risks in construction project assessment, employing the Weighted Average Index method and Principal Component Analysis (PCA). Through extensive literature review and expert interviews, project assessment risk factors were identified across Budget and Cost Management Risk, Schedule and Time Management Risk, Scope and Planning Risk, Safety and Regulatory Compliance Risk, Resource Management Risk, Communication and Stakeholder Management Risk, and Environmental and Sustainability Risk domains. A questionnaire was distributed to stakeholders involved in construction activities in Hyderabad, India, with 180 completed responses analyzed using the Weighted Average Index method to prioritize risk factors. Subsequently, PCA was used to understand relationships between these factors and uncover underlying patterns. Results highlighted dependencies on critical resources, inadequate risk assessment, cash flow constraints, and safety concerns as top priorities, while factors like currency exchange rate fluctuations and delayed information dissemination ranked lower but remained significant. These insights offer valuable guidance for stakeholders to mitigate risks effectively and enhance project outcomes. By adopting systematic risk assessment and management approaches, construction projects in Hyderabad and beyond can navigate challenges more efficiently, ensuring long-term viability and resilience.
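A minimal sketch of the two analysis steps named above, ranking factors by a weighted average index over Likert-type responses and then applying PCA to the response matrix, is shown below using pandas and scikit-learn. The factor names, weights, and synthetic responses are placeholders, not the study's questionnaire data.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic questionnaire: 180 respondents rating risk factors on a 1-5 scale
# (column names are illustrative placeholders, not the study's actual items).
rng = np.random.default_rng(42)
factors = ["critical_resource_dependency", "inadequate_risk_assessment",
           "cash_flow_constraints", "safety_concerns", "fx_rate_fluctuation"]
responses = pd.DataFrame(rng.integers(1, 6, size=(180, len(factors))),
                         columns=factors)

def weighted_average_index(col, max_weight=5):
    """One common form of the relative importance index: sum(w*n_w)/(W*N)."""
    counts = col.value_counts().reindex(range(1, max_weight + 1), fill_value=0)
    weighted_sum = (counts.index.to_numpy() * counts.to_numpy()).sum()
    return weighted_sum / (max_weight * len(col))

ranking = responses.apply(weighted_average_index).sort_values(ascending=False)
print("Prioritized risk factors:\n", ranking)

# PCA on standardized responses to uncover underlying patterns among factors.
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(responses))
print("Explained variance ratio:", pca.explained_variance_ratio_)
```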

Keywords: construction project assessment risk factor, risk prioritization, weighted average index, principal component analysis, project risk factors

Procedia PDF Downloads 13
14318 Effects of Work Load and Surface Acting on Emotional Exhaustion and Work Satisfaction of Social Worker Students: Chinese Indigenous Ren-Qing Shi-Ku Trait as Moderator

Authors: Chung-Kwei Wang, Kuo-Ying Lo

Abstract:

The study aims to examine the main and moderating effects of the Chinese traditional social wisdom 'Ren-qing Shi-ku' on the adjustment of social worker students during their practicum. Ren-qing Shi-ku as a social wisdom has been emphasized by collective-oriented Chinese society for thousands of years. Based on interviews and a literature review, we operationalized the concept as four factors: 'harmonious interaction', 'understanding and tolerance', 'empathetic communication' and 'rule abiding'. We administered the scale to 96 senior social worker students before their summer practicum began and collected their responses on emotional labor, emotional exhaustion, workload and work satisfaction. We also asked their supervisors to rate their performance on empathy, interpersonal relationships, performance on the practicum and their Ren-qing Shi-ku. Results indicated that students' self-ratings on the Ren-qing Shi-ku scale are correlated with the ratings from their supervisors. Students who are higher in Ren-qing Shi-ku show better adjustment and receive higher ratings from their supervisors. Ren-qing Shi-ku also moderates the effects of surface acting and workload on emotional exhaustion and work satisfaction. However, Ren-qing Shi-ku seems more beneficial under low-workload situations. The findings of this study suggest that traditional social skill training might be very effective for social service providers in a collective-oriented culture.
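The moderation effect described here is typically tested through an interaction term in a regression model; a hedged sketch with statsmodels follows. The simulated scores and variable names are assumptions for illustration, not the study's survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for the survey scores (illustrative only).
rng = np.random.default_rng(1)
n = 96
df = pd.DataFrame({
    "workload": rng.normal(0, 1, n),
    "renqing_shiku": rng.normal(0, 1, n),
})
# Exhaustion rises with workload, but less so for high Ren-qing Shi-ku scorers.
df["exhaustion"] = (0.6 * df["workload"] - 0.3 * df["renqing_shiku"]
                    - 0.4 * df["workload"] * df["renqing_shiku"]
                    + rng.normal(0, 1, n))

# Moderation is tested by the interaction term workload:renqing_shiku;
# a significant negative coefficient indicates a buffering (moderating) effect.
model = smf.ols("exhaustion ~ workload * renqing_shiku", data=df).fit()
print(model.summary().tables[1])
```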

Keywords: emotion labor, ren-qing shi-ku, emotional exhaustion, work satisfaction and performance

Procedia PDF Downloads 478
14317 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique

Authors: Mohammad A. Khasawneh

Abstract:

Asphalt concrete pavements gradually lose their skid resistance, causing safety problems, especially under wet conditions and high driving speeds. In order to reproduce the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices were developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure. The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices, in an attempt to shorten and simplify the polishing procedure in the lab. Promising findings showed the possibility of using image analysis in lieu of the labor-intensive and inherently variable friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates produced solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Numbers (BPN), Polish Values (PV) and Mean Texture Depth (MTD) values.
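The validation step mentioned above (correlating an image-derived exposed aggregate area with BPN, PV, and MTD values) can be sketched as follows with SciPy; the numbers are placeholders rather than the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder measurements for a set of asphalt specimens (illustrative only):
# exposed aggregate surface area from image analysis vs. conventional metrics.
exposed_area = np.array([0.42, 0.47, 0.51, 0.55, 0.58, 0.63, 0.66, 0.71])
bpn          = np.array([48,   51,   55,   58,   60,   64,   66,   70  ])
mtd          = np.array([0.55, 0.60, 0.66, 0.70, 0.74, 0.80, 0.83, 0.90])

for name, values in [("BPN", bpn), ("MTD", mtd)]:
    r, p = pearsonr(exposed_area, values)
    print(f"Exposed aggregate area vs {name}: r = {r:.2f}, p = {p:.3f}")
```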

Keywords: friction, image analysis, polishing, statistical analysis, texture

Procedia PDF Downloads 293
14316 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, including those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform, DWT, of a suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency sub-band of the DWT of the suspicious image, thereby leaving valuable information in the other three sub-bands, the proposed algorithm simultaneously extracts features from all four sub-bands. The extracted features are not only a more accurate representation of image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition, PCA-EVD, is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient Radix Sort algorithm. Finally, same affine transformation selection, SATS, a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks compared to related CMIF detection algorithms. The experimental results show high detection rates.
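A hedged sketch of the front end of such a pipeline, a single-level DWT of the suspicious image, block features drawn from all four sub-bands, PCA for dimension reduction, and lexicographic sorting of the block features, is given below using PyWavelets and scikit-learn. The block size and the per-block statistics are illustrative assumptions, and the radix sort and SATS verification stages are omitted.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def block_features(image, block=8):
    """Collect per-block features from all four DWT sub-bands (LL, LH, HL, HH)."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    subbands = [cA, cH, cV, cD]
    h, w = cA.shape
    feats, positions = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            # Simple per-sub-band statistics as block features (illustrative).
            vec = [f(s[i:i + block, j:j + block]) for s in subbands
                   for f in (np.mean, np.std)]
            feats.append(vec)
            positions.append((i, j))
    return np.array(feats), positions

# Suspicious image as a 2-D grayscale array (placeholder random image here).
image = np.random.default_rng(0).integers(0, 256, size=(256, 256))

features, positions = block_features(image)
reduced = PCA(n_components=4).fit_transform(features)   # dimension reduction

# Lexicographic sort of the reduced features; duplicated regions end up as
# near-identical neighbouring rows, which a SATS stage would then verify.
order = np.lexsort(reduced.T[::-1])
sorted_features = reduced[order]
```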

Keywords: affine transformation, discrete wavelet transform, radix sort, SATS

Procedia PDF Downloads 216
14315 Security Design of Root of Trust Based on RISC-V

Authors: Kang Huang, Wanting Zhou, Shiwei Yuan, Lei Li

Abstract:

As information technology develops rapidly, security has become an increasingly critical issue for computer systems. In particular, as cloud computing and the Internet of Things (IoT) continue to gain widespread adoption, computer systems need to address new security threats and attacks. The Root of Trust (RoT) is the foundation for providing basic trusted computing and is used to verify the security and trustworthiness of other components. Designing a reliable Root of Trust and guaranteeing its own security are essential for improving the overall security and credibility of computer systems. In this paper, we discuss the implementation of self-security technology based on a RISC-V Root of Trust at the hardware level. To effectively safeguard the security of the Root of Trust, security safeguard technologies for the Root of Trust have been studied. First, a lightweight and secure boot framework is proposed as a secure mechanism. Second, two kinds of memory protection mechanisms are built to defend against memory attacks. Moreover, the hardware implementation of the proposed method has also been investigated. A series of experiments and tests have been carried out to verify the effectiveness of the proposed method. The experimental results demonstrated that the proposed approach is effective in verifying the integrity of the Root of Trust's own boot ROM, user instructions, and data, ensuring authenticity and enabling the secure boot of the Root of Trust's own system. Additionally, our approach provides memory protection against certain types of memory attacks, such as cache leaks and tampering, and ensures the security of root-of-trust sensitive information, including keys.
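The integrity-verification idea behind a secure boot stage, computing a digest of the boot image and comparing it against a trusted reference before transferring control, can be sketched in a few lines of Python. This is a conceptual illustration only, not the RISC-V hardware mechanism described in the paper; the file name and digest are placeholders.

```python
import hashlib

def verify_boot_image(image_path: str, expected_digest_hex: str) -> bool:
    """Return True only if the boot image hashes to the trusted reference digest."""
    with open(image_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # In a real Root of Trust the reference digest would live in immutable
    # storage (e.g. fused OTP) and the comparison would run before any
    # untrusted code executes.
    return digest == expected_digest_hex

# Placeholder usage: refuse to continue booting if verification fails.
if not verify_boot_image("boot_rom.bin", "0" * 64):
    raise SystemExit("Boot ROM integrity check failed: halting secure boot.")
```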

Keywords: root of trust, secure boot, memory protection, hardware security

Procedia PDF Downloads 176
14314 A Questionnaire-Based Survey: Therapists Response towards Upper Limb Disorder Learning Tool

Authors: Noor Ayuni Che Zakaria, Takashi Komeda, Cheng Yee Low, Kaoru Inoue, Fazah Akhtar Hanapiah

Abstract:

Previous studies have shown that there are arguments regarding the reliability and validity of the Ashworth and Modified Ashworth Scales for evaluating patients diagnosed with upper limb disorders, as these evaluations depend on the raters' experience. This motivated us to develop an upper limb disorder part-task trainer that is able to simulate consistent upper limb disorder signs, such as spasticity and rigidity, based on the Modified Ashworth Scale, to reduce the variability occurring between raters and within raters themselves. By providing consistent signs, novice therapists would be able to increase their training frequency and exposure to various levels of signs. A total of 22 physiotherapists and occupational therapists participated in the study. The majority of the therapists agreed that, with current therapy education, they still face problems with inter-rater and intra-rater variability (strongly agree 54%; n = 12/22, agree 27%; n = 6/22) in evaluating patients' conditions. The therapists strongly agreed (72%; n = 16/22) that therapy trainees need to increase their frequency of training, and therefore believe that our initiative to develop an upper limb disorder training tool will help improve clinical education (strongly agree and agree 63%; n = 14/22).

Keywords: upper limb disorder, clinical education tool, inter/intra-raters variability, spasticity, modified Ashworth scale

Procedia PDF Downloads 299
14313 A Study of Emotional Intelligence and Adjustment of Senior Secondary School Students in District Karnal, Haryana, India

Authors: Rooma Rani

Abstract:

Education is really important for the improvement of the physical and mental well-being of school students. It is used to express inner potential, acquire knowledge, develop skills, and shape habits, attitudes, values and beliefs, along with providing strength and resilience to people in changing situations and allowing them to develop the capacities that enable an individual to control the surrounding environment. Education has a significant effect on the behavior of individuals, which helps us in the new situations of everyday life. Educating the child means directing the child's capacities, attitudes, interests, urges and needs into the most desirable channels. We are part of the 21st century, and nowadays emotional intelligence is considered more important than intelligence in the success of a person. Success depends on several intelligences and on the control of emotions too. Emotional intelligence, like general intelligence, is the product of one's heredity and its interaction with environmental forces. Certain methods have evolved in modern research; keeping in view the nature and purpose of the study, the descriptive survey method was preferred. This method is one of the important methods in education research because it describes the current position of the phenomenon under study; the term descriptive survey is generally used for the type of research that describes conditions or practices of the present time. In the present study, a systematic random sampling method was used to select a representative sample. 50 students were selected from 2 schools; of the 50 students, 25 were boys and 25 were girls. In the study, a) a significant difference was found in the level of adjustment between male and female students; b) a non-significant difference was found in the level of emotional intelligence between male and female students; c) a non-significant relationship was found between adjustment and emotional intelligence among male students; d) a significant relationship was found between adjustment and emotional intelligence among female students. The results of the study indicate that students who obtain high scores on emotional intelligence tests are high in their level of adjustment. Measures should be adopted to improve and sustain the emotional intelligence level of students throughout their studies. Adolescent students are prone to many physical, social and psychological problems; they need a congenial home atmosphere so that they grow into full-fledged citizens of our country. Understanding these aspects helps in the development of personality, which leads to a better learning situation and better thinking capacities, and in turn enhances adjustment and achievement along with a better perception of self.

Keywords: adjustment, education, emotional intelligence, students

Procedia PDF Downloads 119
14312 The Effect of Cooling Tower Fan on the Performance of the Chiller Plant

Authors: Ankitsinh Chauhan, Vimal Patel, A. D. Parekh, Ishant Patil

Abstract:

This study delves into the crucial influence of cooling tower fan operation on the performance of a chiller plant, with a specific focus on the Chiller Plant at SVNIT. Continuous operation of the chiller plant led to unexpected damage to the cooling tower's belt drive, rendering the cooling tower fan non-operational. Consequently, the efficiency of heat transfer in the condenser was significantly impaired. In response, we analyzed and calculated several vital parameters, including the Coefficient of Performance (COP), heat rejection in the condenser (Qc), work required for the compressor (Wc), and heat absorbed by the refrigerant in the evaporator (Qe). Our findings revealed that in the absence of the cooling tower fan, relying solely on natural convection, the COP of the chiller plant reached a minimum value of 5.49. However, after implementing a belt drive to facilitate forced convection for the cooling tower fan, the COP of the chiller plant experienced a noteworthy improvement, reaching approximately 6.27. Additionally, the utilization of forced convection resulted in an impressive reduction of 8.9% in compressor work, signifying enhanced energy efficiency. This study underscores the critical role of cooling tower fan operation in optimizing chiller plant performance, with practical implications for energy-efficient HVAC systems. It highlights the potential benefits of employing forced convection mechanisms, such as belt drives, to ensure efficient heat transfer in the condenser, ultimately contributing to improved energy utilization and reduced operational costs in cooling.
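The quantities discussed above follow from the basic vapour-compression relations COP = Qe / Wc and Qc = Qe + Wc; a small numeric sketch is given below, where the evaporator load and compressor work are assumed illustrative values rather than the plant's measured data.

```python
# Basic vapour-compression chiller relations: COP = Qe / Wc, Qc = Qe + Wc.
# Numbers below are assumed illustrative values, not the plant's measured data.
q_evaporator = 100.0        # heat absorbed in the evaporator, kW (assumed)
w_compressor = 16.0         # compressor work input, kW (assumed)

cop = q_evaporator / w_compressor
q_condenser = q_evaporator + w_compressor   # heat rejected via the cooling tower

print(f"COP = {cop:.2f}, condenser heat rejection = {q_condenser:.1f} kW")
```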

Keywords: cooling tower, chiller plant, cooling tower fan, energy efficiency, VCRS

Procedia PDF Downloads 16
14311 Development of an Auxetic Tissue Implant

Authors: Sukhwinder K. Bhullar, M. B. G. Jun

Abstract:

Developments in the biomedical industry have demanded biocompatible, high-performance materials to meet higher engineering specifications. The general requirements for such materials are to provide a combination of high stiffness and strength with significant weight savings, resistance to corrosion, chemical resistance, low maintenance, and reduced costs. Auxetic materials, which come under the category of smart materials, offer huge potential through measured enhancements in mechanical properties. Their unique deformation mechanism, which provides cushioning on indentation, automatically adjusts in strength and thickness in response to forces, and has a memory that returns to its neutral state on dissipation of stresses, makes them good candidates for the biomedical industry. As simple extension and compression of tissues is of fundamental importance in biomechanics, the study of the elastic behaviour of an auxetic soft tissue implant is targeted in this paper; accordingly, the development and characterization of an auxetic soft tissue implant are presented. This represents a real-life configuration, since soft tissues such as the meniscus in knee replacement, ligaments and tendons are often taken as transversely isotropic. Further, the composition of alternating polydisperse blocks of soft and stiff segments, combined with excellent biocompatibility, makes polyurethanes one of the most promising synthetic biomaterials. Hence, an auxetic polyurethane foam is selected, and its functional characterization is performed and compared with a conventional polyurethane foam.

Keywords: auxetic materials, deformation mechanism, enhanced mechanical properties, soft tissues

Procedia PDF Downloads 450
14310 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, we cannot make the proportion of abnormal observations exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, but convex optimization cannot satisfy this requirement. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on the mixed integer programming technique, which can solve problems formulated with continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the number of normal data points being equal to a constant value specified by users. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the algorithm is proposed. This chart uses a monitoring statistic to characterize the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated and real process data from a thin film transistor-liquid crystal display.
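A much-simplified sketch of the core formulation, minimizing the radius of a spherical boundary while forcing exactly k points to be covered via binary inclusion variables and a big-M constraint, is given below with PuLP. To keep the model linear, the sketch fixes the sphere centre at the data mean, which is a simplification of the kernelised formulation proposed in the paper.

```python
import numpy as np
import pulp

# Synthetic 2-D "normal" observations (illustrative data).
rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(60, 2))

center = X.mean(axis=0)                 # simplification: fixed centre
d2 = ((X - center) ** 2).sum(axis=1)    # squared distances to the centre
k = 54                                  # number of points the boundary must cover
big_m = float(d2.max())                 # big-M constant

prob = pulp.LpProblem("one_class_mip", pulp.LpMinimize)
r2 = pulp.LpVariable("radius_sq", lowBound=0)
z = [pulp.LpVariable(f"z_{i}", cat=pulp.LpBinary) for i in range(len(X))]

prob += r2                                            # minimize squared radius
for i in range(len(X)):
    # If point i is declared inside (z_i = 1), it must lie within the radius.
    prob += r2 + big_m * (1 - z[i]) >= float(d2[i])
prob += pulp.lpSum(z) == k                            # cover exactly k points

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("Squared radius:", pulp.value(r2))
print("Covered points:", int(sum(pulp.value(v) for v in z)))
```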

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 164
14309 Progress in Combining Image Captioning and Visual Question Answering Tasks

Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima

Abstract:

Combining image captioning and Visual Question Answering (VQA) tasks has emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of the image, while VQA aims to answer a natural language question about the image. Both tasks involve computer vision and natural language processing (NLP) and require a deep understanding of the content of the image and the semantic relationships within it, as well as the ability to generate a response in natural language. There has been remarkable growth in both tasks with the rapid advancement of deep learning. In this paper, we present a comprehensive review of recent progress in combining image captioning and visual question answering (VQA) tasks. We first discuss image captioning and VQA individually and then the various ways in which the two tasks can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them. Finally, we discuss the various datasets and evaluation metrics used in these tasks. This paper concludes with the need for generating captions that are based on context and that are able to answer the most likely questions about the image, so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this exciting and rapidly evolving field, which has the potential to improve the performance of real-world applications such as autonomous vehicles, robotics, and image search.

Keywords: image captioning, visual question answering, deep learning, natural language processing

Procedia PDF Downloads 59
14308 Polyphytopharmaca Improving Asthma Control Test Value, Biomarker (Eosinophils and Malondialdehyde): Quasi Experimental Test in Patients with Asthma

Authors: Andri Andri, Susanthy Djajalaksana, Iin Noor Chozin

Abstract:

Background: Despite advances in asthma therapies, a proportion of patients with asthma continue to have difficulty gaining adequate asthma control. Complex immunological mechanisms and oxidative stress affect this condition, including the role of malondialdehyde (MDA) as a marker of inflammation. This research aimed to determine the effect of polyphytopharmaca administration on the asthma control test (ACT) value, blood eosinophil levels and the serum inflammation marker MDA in patients with asthma. Method: A quasi-experimental approach was conducted on 15 stable asthma patients who were not fully controlled, at the outpatient pulmonary clinic of the Dr. Saiful Anwar Public Hospital, Malang. Assessments of ACT values, eosinophil levels, and serum MDA levels were carried out before and after administration of polyphytopharmaca, which contained a combination of 100 mg Nigella sativa extract, 100 mg Kleinhovia hospita, 75 mg Curcuma xanthorrhiza, and 100 mg Ophiocephalus striatus, given as two capsules three times daily for 12 weeks. The ACT value was determined by the researcher by asking the patient directly, blood eosinophil levels were calculated by analyzing blood count differentials, and serum MDA levels were detected by the qPCR method. Result: There was a significant enhancement of the ACT value (18.07 ± 2.57 to 22.06 ± 1.83, p = 0.001) (from 60% uncontrolled ACT to 93.3% controlled ACT), a significant decrease in blood eosinophil levels (653.15 ± 276.15 pg/mL to 460.66 ± 202.04 pg/mL, p = 0.038), and a decrease in serum MDA levels (109.64 ± 53.77 ng/ml to 78.68 ± 64.92 ng/ml, p = 0.156). Conclusion: Administration of polyphytopharmaca can increase the ACT value, decrease blood eosinophil levels and reduce serum MDA in stable asthma patients who are not fully controlled.

Keywords: asthma control test, eosinophils levels, malondialdehyde, polyphytopharmaca

Procedia PDF Downloads 105
14307 The Information-Seeking Behaviour of Kuwaiti Judges (KJs)

Authors: Essam Mansour

Abstract:

The key purpose of this study is to show the information-seeking behaviour of Kuwaiti Judges (KJs). Being one of the few studies about information needs and information-seeking behaviour conducted in Arab and developing countries, this study is a pioneering one among studies conducted on information seeking, especially with this significant group of information users. The authors tried to investigate this seeking behaviour in terms of KJs' thoughts, perceptions, motivations, techniques, preferences, tools and the barriers met when seeking information. The authors employed a questionnaire, with a response rate of 77.2 percent. This study showed that most KJs were likely to be older and educated, with work experience ranging from new to long-standing. There is a statistically significant difference between KJs' demographic characteristics and some sources of information, such as books, encyclopedias, references and mass media. KJs used information moderately to make decisions, to keep up with current events, to collect statistics and to conduct specific or general research. The office and the home were the locations from which KJs most frequently accessed information. KJs' proficiency in English was described as moderately good, and a small number of them confirmed that their proficiency in French was not bad. Assistance provided by colleagues, followed by consultants, translators, secretaries and librarians, was found to be the most needed type of assistance when seeking information. Mobile apps, followed by PCs, information networks (the Internet) and information databases, were the technology tools most used by KJs. Printed materials, followed by non-printed and audiovisual materials, were the information formats KJs most preferred. The use of languages, the recency of information, the place of information, and the deficient role of the library in delivering information were significant barriers to KJs when seeking information.

Keywords: information users, information-seeking behaviour, information needs, judges, Kuwait

Procedia PDF Downloads 294
14306 Monocytic Paraoxonase 2 (PON 2) Lactonase Activity Is Related to Myocardial Infarction

Authors: Mukund Ramchandra Mogarekar, Pankaj Kumar, Shraddha V. More

Abstract:

Background: Total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), very low-density lipoprotein cholesterol (VLDL-C), Apo B, and lipoprotein(a) have been identified as atherogenic factors, while high-density lipoprotein cholesterol (HDL-C) is anti-atherogenic. Methods and Results: The study group consisted of 40 MI subjects as cases and 40 healthy subjects as controls. Monocytic PON 2 lactonase (LACT) activity was measured using dihydrocoumarin (DHC) as substrate. Phenotyping was done by the method of Mogarekar MR et al., serum AOPP by the modified method of Witko-Sarsat V et al., and Apo B by turbidimetric immunoassay. PON 2 LACT activities were significantly lower (p < 0.05), and AOPPs and Apo B were higher, in MI subjects (p > 0.05). The trimodal distribution of QQ, QR and RR phenotypes in the study population showed no significant difference between cases and controls (p > 0.05). Univariate binary logistic regression analysis showed an independent association of TC, HDL, LDL, AOPP, Apo B, and PON 2 LACT activity with MI, and multiple forward binary logistic regression showed PON 2 LACT activity and serum Apo B to be independent predictors of MI. Conclusions: The decrease in PON 2 LACT activity in MI subjects compared with controls suggests increased oxidative stress in MI, which is reflected by significantly increased AOPP and Apo B. PON 1 polymorphism of QQ, QR and RR showed no significant difference in protection against MI. Univariate and multiple forward binary logistic regression showed PON 2 LACT activity and serum Apo B as independent predictors of MI.
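The regression step described above, univariate binary logistic models per predictor followed by a multiple model, can be sketched with statsmodels as below. The simulated case-control data and variable set are placeholders, not the study's measurements, and the forward-selection loop is only indicated in a comment.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated case-control data (40 MI cases, 40 controls); values are placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "MI": np.repeat([1, 0], 40),
    "PON2_lactonase": np.concatenate([rng.normal(4.0, 1.0, 40),
                                      rng.normal(6.0, 1.0, 40)]),
    "ApoB": np.concatenate([rng.normal(120, 20, 40), rng.normal(95, 20, 40)]),
    "HDL": rng.normal(45, 8, 80),
})

# Univariate binary logistic regression for each candidate predictor.
for var in ["PON2_lactonase", "ApoB", "HDL"]:
    model = sm.Logit(df["MI"], sm.add_constant(df[[var]])).fit(disp=0)
    print(f"{var}: OR = {np.exp(model.params[var]):.2f}, p = {model.pvalues[var]:.3f}")

# Multiple logistic regression with the candidate predictors entered together
# (a forward-selection loop would instead add them one at a time by p-value).
full = sm.Logit(df["MI"],
                sm.add_constant(df[["PON2_lactonase", "ApoB", "HDL"]])).fit(disp=0)
print(full.summary2().tables[1])
```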

Keywords: advanced oxidation protein products, apolipoprotein-B, myocardial infarction, paraoxonase 2 lactonase

Procedia PDF Downloads 226
14305 Soil-Structure Interaction Models for the Reinforced Foundation System – A State-of-the-Art Review

Authors: Ashwini V. Chavan, Sukhanand S. Bhosale

Abstract:

Challenges of weak soil subgrades are often resolved either by stabilizing or by reinforcing the soil. However, it is also common practice to reinforce the granular fill to improve the load-settlement behavior over weak soil strata. The inclusion of reinforcement in the engineered granular fill has provided a new impetus for the development of enhanced Soil-Structure Interaction (SSI) models, also known as mechanical foundation models or lumped parameter models. Several researchers have been working in this direction to understand the mechanism of granular fill-reinforcement interaction and the response of weak soil under the application of load. These models have been developed by extending available SSI models such as the Winkler model, Pasternak model, Hetenyi model, Kerr model, etc., and are helpful for visualizing the load-settlement behavior of a physical system through 1-D and 2-D analysis, considering a beam and a plate resting on the foundation, respectively. Based on the literature survey, these models are categorized as the 'Reinforced Pasternak Model,' 'Double Beam Model,' 'Reinforced Timoshenko Beam Model,' and 'Reinforced Kerr Model.' The present work reviews the past 30+ years of research in the field of SSI models for reinforced foundation systems, presenting the conceptual development of these models systematically and discussing their limitations. Special effort is taken to tabulate the parameters and their significance in the load-settlement analysis, which may be helpful in future studies for the comparison and enhancement of results and findings of physical models.
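For reference, the two baseline idealizations from which the reinforced models are extended are commonly written as follows for the 1-D (beam) case; these are standard textbook forms, not equations reproduced from the reviewed papers:

\[ \text{Winkler:} \quad q(x) = k\,w(x) \]
\[ \text{Pasternak:} \quad q(x) = k\,w(x) - G_{p}\,\frac{d^{2}w(x)}{dx^{2}} \]

where \(q\) is the subgrade reaction pressure, \(w\) the settlement, \(k\) the modulus of subgrade reaction, and \(G_{p}\) the shear modulus of the Pasternak shear layer coupling the Winkler springs.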

Keywords: geosynthetics, mathematical modeling, reinforced foundation, soil-structure interaction, ground improvement, soft soil

Procedia PDF Downloads 110
14304 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance

Authors: Xiaoyong He

Abstract:

The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Designed by tailoring the characteristics of unit cells (meta-molecules), the advance of metamaterials (MMs) may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices is subject to dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Unlike the symmetric Lorentzian spectral curve, Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curves. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (bright and narrowband dark modes) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from the practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or by magnetic fields biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it is still difficult to tailor Fano resonance conveniently with fixed structural parameters. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called “trapped modes,” with the merits of a simple structure and high-quality resonances in thin structures. By depositing a graphene circular DR on a SiO2/Si/polymer substrate, tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of graphene Fermi level, structural parameters and operation frequency. The results show that the Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% as the Fermi level changes in the range of 0.1-1.0 eV. The optimum gap distance between the DRs is about 8-12 μm, where the figure of merit shows a peak. As the graphene ribbon width increases, the Fano spectral curves broaden, and the resonant peak shows a blue shift. The results are very helpful for developing novel graphene plasmonic devices, e.g. sensors and modulators.
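The asymmetric line shape referred to above is commonly described by the standard Fano formula, quoted here for reference (it is not taken from this abstract):

\[ \sigma(\omega) \propto \frac{(q + \epsilon)^{2}}{1 + \epsilon^{2}}, \qquad \epsilon = \frac{2(\omega - \omega_{0})}{\Gamma}, \]

where \(q\) is the Fano asymmetry parameter, \(\omega_{0}\) the resonance frequency of the discrete (dark) mode, and \(\Gamma\) its linewidth; as \(q \to \infty\) the symmetric Lorentzian limit is recovered.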

Keywords: graphene, metamaterials, terahertz, tunable

Procedia PDF Downloads 333
14303 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important factor for many economic time series. Some variables may contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to eliminate seasonality in seasonal macroeconomic data. There are several methods to eliminate the impact of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is using seasonal dummy variables. Some seasonal patterns may result from stationary seasonal processes, which are modelled using seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not suitable to use seasonal dummies to model such seasonally non-stationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different alternative methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating non-stationary versus stationary models for non-seasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
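For quarterly data, the HEGY auxiliary regression and a Bridge-penalized objective take the following standard forms, given here for orientation; the exact specification used in the study may differ:

\[ \Delta_{4} y_{t} = \pi_{1}\, y_{1,t-1} + \pi_{2}\, y_{2,t-1} + \pi_{3}\, y_{3,t-2} + \pi_{4}\, y_{3,t-1} + \sum_{j=1}^{p} \varphi_{j}\, \Delta_{4} y_{t-j} + \varepsilon_{t}, \]

with \(y_{1,t} = (1+L+L^{2}+L^{3})y_{t}\), \(y_{2,t} = -(1-L+L^{2}-L^{3})y_{t}\), and \(y_{3,t} = -(1-L^{2})y_{t}\); the zero-frequency, semiannual, and annual unit roots correspond to \(\pi_{1}=0\), \(\pi_{2}=0\), and \(\pi_{3}=\pi_{4}=0\), respectively. A Bridge estimator selects the lag augmentation by solving

\[ \hat{\beta} = \arg\min_{\beta}\; \sum_{t}\left(\Delta_{4} y_{t} - x_{t}'\beta\right)^{2} + \lambda \sum_{j} |\beta_{j}|^{\gamma}, \qquad 0 < \gamma < 1, \]

which shrinks irrelevant lag coefficients exactly to zero while estimating the remaining parameters.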

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 319
14302 Interfacial Reactions between Aromatic Polyamide Fibers and Epoxy Matrix

Authors: Khodzhaberdi Allaberdiev

Abstract:

In order to understand the interactions at the interface between polyamide fibers and an epoxy matrix in fiber-reinforced composites, industrial aramid fibers (Armos, SVM, Terlon) were investigated using individual epoxy matrix components: the epoxies diglycidyl ether of bisphenol A (DGEBA) and tri- and diglycidyl derivatives of m-, p-amino-, m-, p-oxy- and o-, m-, p-carboxybenzoic acids, together with the model compounds: the curing agent aniline and N-di(oxyethylphenoxy)aniline, a compound that depicts the structure of the primary addition reaction of the amine to the epoxy resin. The chemical structure of the surface of untreated and treated polyamide fibers was analyzed using Fourier transform infrared spectroscopy (FTIR). The impregnation of fibers with epoxy matrix components and N-di(oxyethylphenoxy)aniline was carried out by heating at 150˚C for 6 h. The optimum fiber loading is 65%. The result of the thermal treatment is the formation of covalent bonds, derived from combined homopolymerization and crosslinking mechanisms in the interfacial region between the epoxy resin and the fiber surface. The reactivity of epoxy resins at the interface in microcomposites (MC) also depends on the processing aids applied to the fiber surface and on absorbed moisture. The influence of these factors is evidenced by the epoxy group conversion values for Terlon fibers impregnated with DGEBA: industrial, dried (in vacuum) and purified samples give 5.20%, 4.65% and 14.10%, respectively. The same tendency is observed for SVM and Armos fibers. The changes in surface composition of these MC were monitored by X-ray photoelectron spectroscopy (XPS). In the case of the purified fibers, the functional groups of the fibers act as both a catalyst and a curing agent for the epoxy resin. It is found that the epoxy group conversion value for reinforced formulations depends on the nature of the aromatic polyamide and decreases in the order Armos > SVM > Terlon. This difference is due to the structural characteristics of the fibers. The interfacial interactions between polyglycidyl esters of substituted benzoic acids and polyamide fibers in the MC were also examined. It is found that the structure and the isomerism of the epoxides also influence the interfacial interactions in these systems. The IR spectrum of fibers impregnated with aniline showed that the polyamide fibers do not react appreciably with aniline. FTIR results for fibers treated with N-di(oxyethylphenoxy)aniline revealed dramatic changes in the IR characteristics of the OH groups of the amino alcohol. These observations indicate hydrogen bonding and covalent interactions between the amino alcohol and the functional groups of the fibers. This result is also confirmed by the appearance of an exothermic peak on the Differential Scanning Calorimetry (DSC) curve of the MC. Finally, a theoretical evaluation of non-covalent interactions between individual epoxy matrix components and the fibers was performed using benzanilide and its derivative containing the benzimidazole moiety as models of Terlon and of SVM and Armos, respectively. Quantum-topological analysis also demonstrated the existence of hydrogen bonds between the amide group of the models and the epoxy matrix components. All the results indicate that both covalent and non-covalent interactions exist at the interface between the polyamide fibers and the epoxy matrix during the preparation of the MC.

Keywords: epoxies, interface, modeling, polyamide fibers

Procedia PDF Downloads 255
14301 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is an energy management process aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, and appliance features are required for accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the operating ranges of each residential appliance based on its power demand and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which has not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously reported in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
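
As an illustration of the DTW matching step described above, the minimal sketch below computes a dynamic-programming DTW distance between a detected event window and stored appliance power signatures, then picks the nearest signature. The appliance names, wattage values, and function names are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW distance between two 1-D power signatures."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])              # local cost: absolute power difference (W)
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical example: match an observed event window (1/60 Hz samples, in watts)
# against stored appliance signatures and pick the nearest one.
signatures = {
    "fridge":     np.array([0.0, 120.0, 118.0, 115.0, 0.0]),
    "dishwasher": np.array([0.0, 2000.0, 1950.0, 1900.0, 100.0, 0.0]),
}
event = np.array([0.0, 1980.0, 1930.0, 90.0, 0.0])
best = min(signatures, key=lambda name: dtw_distance(event, signatures[name]))
print(best)  # expected: "dishwasher"
```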

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 62
14300 A Sensor Placement Methodology for Chemical Plants

Authors: Omid Ataei Nia, Karim Salahshoor

Abstract:

In this paper, a new precise and reliable sensor network methodology is introduced for unit processes and operations using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design purposes. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability, and reliability, together with importance of variables (IVs) as a novel measure in the Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take the importance of variables into account in the sensor network design procedure. In this paper, a specific weight is assigned to each sensor measuring a process variable in the network to indicate the importance of that variable relative to the others, reflecting the ultimate sensor network application requirements. A set of distinct scenarios was run to evaluate the performance of the proposed methodology in a simulated Continuous Stirred Tank Reactor (CSTR), a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, which yields a significant improvement in accuracy over alternative sensor network design approaches and, as a novel achievement, ensures that sensors are allocated to the most important process variables.
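
The constriction coefficient PSO referenced here is usually associated with the Clerc-Kennedy formulation, in which the velocity update is scaled by χ = 2 / |2 - φ - sqrt(φ² - 4φ)| with φ = c1 + c2 > 4. The sketch below illustrates that update on a generic objective standing in for a weighted sensor-network cost; the function name, parameter defaults, and toy objective are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cpso_minimize(f, bounds, n_particles=30, n_iter=200, c1=2.05, c2=2.05, seed=0):
    """Minimal constriction-coefficient PSO (Clerc-Kennedy) sketch.

    f      : objective to minimize, e.g. a weighted sensor-network cost
    bounds : array of shape (dim, 2) with lower/upper limits per decision variable
    """
    rng = np.random.default_rng(seed)
    phi = c1 + c2                                              # must exceed 4
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # constriction coefficient (~0.7298)

    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))           # particle positions
    v = np.zeros_like(x)                                       # particle velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                     # global best position

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))   # constricted update
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Hypothetical usage: minimize a toy objective standing in for a sensor-network cost
bounds = np.array([[-5.0, 5.0]] * 3)
best_x, best_val = cpso_minimize(lambda p: np.sum(p**2), bounds)
print(best_x, best_val)
```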

Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter

Procedia PDF Downloads 147
14299 The Impact of Developing an Educational Unit in the Light of Twenty-First Century Skills in Developing Language Skills for Non-Arabic Speakers: A Proposed Program for Application to Students of Educational Series in Regular Schools

Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla

Abstract:

The era of the knowledge explosion in which we live requires us to develop educational curricula quantitatively and qualitatively to adapt to the twenty-first-century skills of critical thinking, problem-solving, communication, cooperation, creativity, and innovation. The process of developing a curriculum is as significant as building it; in fact, developing curricula may be more difficult than building them. Curriculum development includes analyzing needs, setting goals, designing the content and educational materials, creating language programmes, developing teachers, applying the programmes in schools, monitoring and feedback, and finally evaluating the language programme resulting from these processes. When we look back at the history of language teaching during the twentieth century, we find that developing the delivery method is the most crucial aspect of change in language teaching doctrines. A delivery method in teaching is a systematic set of teaching practices based on a specific theory of language acquisition. This is a key consideration, as the process of development must include all curriculum elements in the comprehensive sense, both linguistic and non-linguistic. The various Arabic curricula provide the student with a set of units, each unit consisting of a set of linguistic elements. These elements are often not logically arranged and, more importantly, they neglect essential points and highlight other, less important ones. Moreover, the educational curricula entail a great deal of monotony in the presentation of content, which makes it hard for the teacher to select adequate content, so the teacher often navigates among diverse references to prepare a lesson and rarely finds a suitable one. Similarly, the student often gets bored when learning the Arabic language and fails to make considerable progress in it. Therefore, the problem is not a lack of curricula; the problem is developing the curriculum, with all its linguistic and non-linguistic elements, in accordance with contemporary challenges and standards for teaching foreign languages. The Arabic library suffers from a lack of references for curriculum development. In this paper, the researcher investigates the elements of development, such as the teacher, content, methods, objectives, evaluation, and activities. From this, a set of general guidelines in the field of educational development was derived. The paper highlights the need to identify weaknesses in educational curricula, to determine the twenty-first-century skills that must be employed in Arabic education curricula, and to employ foreign language teaching standards in current Arabic curricula. The researcher assumes that the series used to teach Arabic to speakers of other languages in regular schools do not address twenty-first-century skills, which is what the researcher tries to apply in the proposed unit. The study uses the experimental method, based on two groups: experimental and control. The development of an educational unit will help build suitable educational series for students of the Arabic language in regular schools, in which twenty-first-century skills and standards for teaching foreign languages are addressed, making the series more useful and attractive to students.

Keywords: curriculum, development, Arabic language, non-native, skills

Procedia PDF Downloads 65
14298 An Investigation of the Association between Pathological Personality Dimensions and Emotion Dysregulation among Virtual Network Users: The Mediating Role of Cyberchondria Behaviors

Authors: Mehdi Destani, Asghar Heydari

Abstract:

Objective: The present study aimed to investigate the association between pathological personality dimensions and emotion dysregulation through the mediating role of cyberchondria behaviors among users of virtual networks. Materials and methods: A descriptive-correlational research method was used in this study, and the statistical population consisted of all people active on social network sites in 2020. The sample consisted of 300 people selected through convenience sampling. Data were collected by survey using online questionnaires: the Difficulties in Emotion Regulation Scale (DERS), the Personality Inventory for DSM-5 Brief Form (PID-5-BF), and the Cyberchondria Severity Scale Brief Form (CSS-12). Data analysis was conducted using Pearson's correlation coefficient and Structural Equation Modeling (SEM). Findings: Pathological personality dimensions and cyberchondria behaviors have a positive and significant association with emotion dysregulation (p<0.001), and the presented model had a good fit with the data. The variable pathological personality dimensions accounted for emotion dysregulation among virtual network users, with a total effect (β=0.658, p<0.001), a direct effect (β=0.528, p<0.001), and an indirect effect mediated by cyberchondria behaviors (β=0.130, p<0.001). Conclusion: The findings show that attention should be paid to pathological personality dimensions as a determining variable and to cyberchondria behaviors as a mediator in the vulnerability of social network users to emotion dysregulation.
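
For reference, the reported coefficients are consistent with the standard mediation decomposition, in which the total effect is the sum of the direct effect and the indirect (mediated) effect; the check below is reader-added arithmetic, not part of the original analysis.

```latex
\beta_{\text{total}} = \beta_{\text{direct}} + \beta_{\text{indirect}}
\qquad 0.658 \approx 0.528 + 0.130
```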

Keywords: cyberchondria, emotion dysregulation, pathological personality dimensions, social networks

Procedia PDF Downloads 84