Search results for: complexity by interaction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5432

2522 N Doped Multiwall Carbon Nanotubes Growth over a Ni Catalyst Substrate

Authors: Angie Quevedo, Juan Bussi, Nestor Tancredi, Juan Fajardo-Díaz, Florentino López-Urías, Emilio Muñóz-Sandoval

Abstract:

In this work, we study the formation of carbon nanotubes (CNTs) by catalytic chemical vapor deposition (CCVD) over a catalyst with 20% Ni supported on La₂Zr₂O₇ (Ni20LZO). The high carbon solubility of Ni has made it one of the most widely used metals in CNT synthesis. Nevertheless, Ni also undergoes sintering and coalescence at high temperature. These problems can be reduced by choosing a suitable support. We propose La₂Zr₂O₇ for this purpose, since the incorporation of Ni by co-precipitation and calcination at 900 °C allows good dispersion and interaction of the active metal (in its oxidized form, NiO) with the support. The CCVD was performed using 1 g of Ni20LZO at 950 °C for 30 min in an Ar:H₂ atmosphere (2.5 L/min). The precursor, benzylamine, was added by a nebulizer-sprayer. X-ray diffraction shows the phase separation of NiO and La₂Zr₂O₇ after calcination and the reduction to Ni after synthesis. Raman spectra show D and G bands with an ID/IG ratio of 0.75. Elemental analysis verifies the incorporation of 1% N. Thermogravimetric analysis shows that the oxidation process starts at around 450 °C. Future studies will determine the application potential of the samples.

Keywords: N doped carbon nanotubes, catalytic chemical vapor deposition, nickel catalyst, bimetallic oxide

Procedia PDF Downloads 154
2521 Introduction to Various Innovative Techniques Suggested for Seismic Hazard Assessment

Authors: Deepshikha Shukla, C. H. Solanki, Mayank K. Desai

Abstract:

Amongst all natural hazards, earthquakes have the potential to cause the greatest damage. Since earthquake forces are random in nature and unpredictable, their quantification is important for assessing the hazard. The time and place of a future earthquake are both uncertain. Since earthquakes can neither be prevented nor predicted, engineers have to design and construct in such a way that damage to life and property is minimized. Seismic hazard analysis plays an important role in the design of earthquake-resistant structures by providing rational values of the input parameters. In this paper, both mathematical and computational methods adopted by researchers globally in the past five years will be discussed. Mathematical approaches involving the concepts of Poisson's ratio, Convex Set Theory, the Empirical Green's Function, Bayesian probability estimation applied to seismic hazard, and the FOSM (first-order second-moment) algorithm will be discussed. Computational approaches, including the numerical model SSIFiBo developed in MATLAB to study the dynamic soil-structure interaction problem, are also discussed, along with the GIS-based tools predominantly used in the assessment of seismic hazards.
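One probabilistic ingredient shared by many of the methods surveyed above is the Poisson occurrence model that underlies conventional probabilistic seismic hazard analysis. The sketch below is ours, not taken from any of the surveyed papers; the function name and example figures are illustrative, though the 475-year return period is the conventional value behind the common "10% in 50 years" design criterion.

```python
import math

def prob_exceedance(annual_rate, years):
    """Poisson occurrence model: probability of at least one
    exceedance of a ground-motion level during an exposure time,
    given the annual rate at which that level is exceeded."""
    return 1.0 - math.exp(-annual_rate * years)

# The design criterion "10% probability of exceedance in 50 years"
# corresponds to an annual exceedance rate of about 1/475, i.e. the
# familiar 475-year return period.
p = prob_exceedance(1.0 / 475.0, 50.0)
```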

Keywords: computational methods, MATLAB, seismic hazard, seismic measurements

Procedia PDF Downloads 334
2520 A Machine Learning Approach for Detecting and Locating Hardware Trojans

Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He

Abstract:

The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware Trojan detection method for large-scale circuits. Since HTs, as additional redundant circuits, introduce changes in physical characteristics such as structure, area, and power consumption, we propose a machine-learning-based detection method built on the physical characteristics of gate-level netlists. This method transforms hardware Trojan detection into a machine-learning binary classification problem based on physical characteristics, greatly improving detection speed. To address the problem of imbalanced data, where the number of HT circuit samples is far smaller than that of pure circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We used three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, to train and validate on benchmark circuits from Trust-Hub, and all achieved good results. In case studies based on the AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness in detecting variant HTs, we designed variant HTs from open-source HTs. The proposed method guarantees robust detection accuracy with millisecond-level detection times for IC and FPGA design flows, and shows good detection performance for library variant HTs.
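The resample-then-classify pipeline the abstract describes can be sketched with a toy example. The interpolation step below is a simplified stand-in for SMOTETomek (plain minority oversampling, without the Tomek-link cleaning), the k-NN is a bare-bones version of one of the three classifiers named, and the gate-level "features" (area, power) and all sample values are invented for illustration.

```python
import random

def oversample_minority(X, y, seed=0):
    """Naive interpolation-based oversampling: a simplified
    stand-in for SMOTETomek. Synthesizes minority-class (label 1)
    points on segments between existing minority samples."""
    rng = random.Random(seed)
    minority = [x for x, label in zip(X, y) if label == 1]
    n_needed = y.count(0) - y.count(1)
    synth = []
    for _ in range(n_needed):
        a, b = rng.sample(minority, 2)
        t = rng.random()  # interpolate between two minority samples
        synth.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return X + synth, y + [1] * n_needed

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbours vote on squared Euclidean distance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(p, x)), label)
        for p, label in zip(train_X, train_y)
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Invented gate-level features (area, power): pure circuits cluster
# near (1, 1); Trojan-bearing ones near (3, 3) and are the minority.
X = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 0.95), (3.0, 3.1), (3.1, 2.9)]
y = [0, 0, 0, 0, 1, 1]
X_bal, y_bal = oversample_minority(X, y)
```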

Keywords: hardware trojans, physical properties, machine learning, hardware security

Procedia PDF Downloads 141
2519 Control of a Quadcopter Using Genetic Algorithm Methods

Authors: Mostafa Mjahed

Abstract:

This paper concerns the control of a nonlinear system using two different methods: a reference model and a genetic algorithm. The quadcopter is a nonlinear, unstable system belonging to the family of aerial robots. It consists of four rotors placed at the ends of a cross, whose center is occupied by the control circuit. Its motion is governed by six degrees of freedom: three rotations around the axes (roll, pitch, and yaw) and three spatial translations. The control of such a system is complex because of the nonlinearity of its dynamic representation and the number of parameters it involves. Numerous studies have been devoted to modeling and stabilizing such systems. The classical PID and LQ correction methods are widely used. While these have the advantage of simplicity, being linear, they have the drawback of requiring a linear model for synthesis. This also makes the resulting control laws complex, because they must remain valid over the entire flight envelope of the quadcopter. Note that, while classical design methods are widely used to control aeronautical systems, Artificial Intelligence methods such as genetic algorithms have received little attention. In this paper, we compare two PID design methods. First, the parameters of the PID are calculated according to a reference model. In a second phase, these parameters are established using genetic algorithms. By reference model, we mean that the corrected system behaves according to a reference system imposed by specifications: settling time, zero overshoot, etc. Inspired by Darwin's theory of natural evolution, advocating survival of the fittest, John Holland developed this evolutionary algorithm. A genetic algorithm (GA) possesses three basic operators: selection, crossover, and mutation. We start the iterations with an initial population, each member of which is evaluated through a fitness function.
Our purpose is to correct the behavior of the quadcopter around the three axes (roll, pitch, and yaw) with three PD controllers. For the altitude, we adopt a PID controller.
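The GA loop the abstract outlines (a population scored by a fitness function, then selection, crossover, and mutation) can be sketched for PID tuning on a toy plant. Everything below is an illustrative assumption, not the paper's setup: the first-order plant standing in for one quadcopter axis, the ITAE-style cost, the gain bounds, and all GA hyper-parameters.

```python
import random

BOUNDS = [(0.0, 10.0), (0.0, 5.0), (0.0, 0.5)]  # assumed ranges for kp, ki, kd

def fitness(gains, setpoint=1.0, dt=0.01, steps=500):
    """Time-weighted absolute error (ITAE-style) of a PID loop
    closed around a toy first-order plant dy/dt = -y + u."""
    kp, ki, kd = gains
    y = integ = cost = 0.0
    prev_err = setpoint
    for i in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt
        if not abs(y) < 1e6:            # penalize unstable gain sets
            return float("inf")
        cost += abs(err) * (i * dt) * dt
    return cost

def ga_tune(pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # selection: keep fittest half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(BOUNDS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                # mutation: nudge one gene
                j = rng.randrange(len(BOUNDS))
                lo, hi = BOUNDS[j]
                child[j] = min(hi, max(lo, child[j] + rng.gauss(0, (hi - lo) / 10)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

A real quadcopter study would replace `fitness` with a simulation of the full nonlinear axis dynamics; the GA skeleton itself is unchanged.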

Keywords: quadcopter, genetic algorithm, PID, fitness, model, control, nonlinear system

Procedia PDF Downloads 423
2518 Assessing the Financial Impact of Federal Benefit Program Enrollment on Low-Income Households

Authors: Timothy Scheinert, Eliza Wright

Abstract:

Background: Link Health is a Boston-based non-profit leveraging in-person and digital platforms to promote health equity. Its primary aim is to financially support low-income individuals through enrollment in federal benefit programs. This study examines the monetary impact of enrollment in several benefit programs. Methodologies: Approximately 17,000 individuals have been screened for eligibility via digital outreach, community events, and in-person clinics. Enrollment and financial distributions are evaluated across programs, including the Affordable Connectivity Program (ACP), Lifeline, LIHEAP, Transitional Aid to Families with Dependent Children (TAFDC), and the Supplemental Nutrition Assistance Program (SNAP). Major Findings: A total of 1,895 individuals have successfully applied, collectively receiving an estimated $1,288,152.00 in aid. The largest contributors to this sum include: ACP: 1,149 enrollments, $413,640 distributed annually. Child Care Financial Assistance (CCFA): 15 enrollments, $240,000 distributed annually. Lifeline: 602 enrollments, $66,822 distributed annually. LIHEAP: 25 enrollments, $48,750 distributed annually. SNAP: 41 enrollments, $123,000 distributed annually. TAFDC: 21 enrollments, $341,760 distributed annually. Conclusions: These results highlight the role of targeted outreach and effective enrollment processes in promoting access to federal benefit programs. High enrollment rates in ACP and Lifeline demonstrate a considerable need for affordable broadband and internet services. Programs like CCFA and TAFDC, despite lower enrollment numbers, provide sizable support per individual. This analysis advocates for continued funding of federal benefit programs. Future efforts can be made to develop screening tools that identify eligibility for multiple programs and reduce the complexity of enrollment.

Keywords: benefits, childcare, connectivity, equity, nutrition

Procedia PDF Downloads 18
2517 Decision Making in Medicine and Treatment Strategies

Authors: Kamran Yazdanbakhsh, Somayeh Mahmoudi

Abstract:

Three reasons justify the use of decision theory in medicine: 1. The growth of medical knowledge and its complexity make it difficult to use treatment information effectively without resorting to sophisticated analytical methods, especially when it comes to detecting errors and identifying treatment opportunities in large databases. 2. There is wide geographic variability in medical practice. In a context where medical costs are borne, at least in part, by the patient, this variability raises doubts about the relevance of the choices made by physicians. These differences are generally attributed to differing estimates of the probability of treatment success and differing assessments of the outcomes of success or failure. Without explicit decision criteria, it is difficult to identify precisely the sources of these variations in treatment. 3. Beyond the principle of informed consent, patients need to be involved in decision-making. For this, the decision process should be explained and broken down. A decision problem consists of selecting the best option from a set of choices. The question is what is meant by "best option", that is, which criteria should guide the choice. The purpose of decision theory is to answer this question. The systematic use of decision models allows us to better understand differences in medical practice and facilitates the search for consensus. Here, there are three types of situations: certain situations, risky situations, and uncertain situations. 1. In certain situations, the consequence of each decision is certain. 2. In risky situations, each decision can have several consequences, and the probability of each consequence is known. 3. In uncertain situations, each decision can have several consequences whose probabilities are not known. Our aim in this article is to show how decision theory can usefully be mobilized to meet the needs of physicians.
Decision theory can make decisions more transparent: first, by systematically clarifying the data relevant to the problem, and second, by asking which basic principles should guide the choice. Once the problem is clarified, decision theory provides operational tools to represent the available information and determine patient preferences, thus assisting the patient and doctor in their choices.
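The risky-situation case described above, where each consequence has a known probability, is exactly where the classical expected-utility rule applies. The sketch below uses invented probabilities and utilities purely for illustration; real values would come from clinical evidence and elicited patient preferences.

```python
# Hypothetical decision under risk: two treatment options, each with
# assumed (probability, utility) pairs for its possible outcomes.
treatments = {
    "surgery":    [(0.70, 0.95), (0.25, 0.40), (0.05, 0.00)],
    "medication": [(0.50, 0.80), (0.45, 0.60), (0.05, 0.20)],
}

def expected_utility(outcomes):
    """Value of a risky decision: the probability-weighted sum of utilities."""
    return sum(p * u for p, u in outcomes)

# Decision rule for risky situations: pick the option with the
# highest expected utility.
best = max(treatments, key=lambda t: expected_utility(treatments[t]))
```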

Keywords: decision making, medicine, treatment strategies, patient

Procedia PDF Downloads 575
2516 Enhancing Coping Strategies of Students: A Case Study of 'Choice Theory' Group Counseling

Authors: Warakorn Supwirapakorn

Abstract:

The purpose of this research was to study the effects of choice theory group counseling on the coping strategies of students. The sample consisted of 16 students at a boarding school who had the lowest scores on coping strategies. The sample was divided by random assignment into an experimental group and a control group of eight members each. The instruments were the Adolescent Coping Scale and a choice theory group counseling program. Data collection was divided into three phases: pre-test, post-test, and follow-up. The data were analyzed by repeated-measures analysis of variance with one between-subjects factor and one within-subjects factor. The results revealed that the interaction between method and duration of the experiment was statistically significant at the 0.05 level. The students in the experimental group scored significantly higher (at the 0.05 level) on coping strategies in both the post-test and the follow-up than in the pre-test, and higher than the control group. No significant difference in coping strategies was found between the post-test and follow-up phases of the experimental group.

Keywords: coping strategies, choice theory, group counseling, boarding school

Procedia PDF Downloads 208
2515 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, and their effectiveness diminishes when applied to identifying fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news items to their specific domain in advance, a classifier for the corresponding domain may judge fake news more accurately. However, these approaches disregard the fact that a news record can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional domain embedding is generated for each news text. Subsequently, a feature extraction module utilizing the domain embeddings discovered without supervision is used to extract comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests are conducted on existing widely used datasets, and the experimental results demonstrate that this method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.
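The core idea of a soft, multi-domain representation (as opposed to a single hard domain label) can be sketched as follows. The bag-of-words vectors, the cosine scoring, and the toy "discovered" centroids are all illustrative assumptions; in the proposed framework the domain embedding is learned from the unlabeled corpus, not hand-built.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector for a news text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(v * b.get(w, 0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def domain_embedding(text, centroids):
    """Soft multi-domain representation: similarity to every
    discovered domain centroid, instead of one hard domain label."""
    v = bow(text)
    return [cosine(v, c) for c in centroids]

# Toy "discovered" centroids; in practice these would come from
# unsupervised clustering of the unlabeled news corpus.
centroids = [bow("election vote government policy"),
             bow("vaccine hospital health virus")]

# A text touching both politics and health gets non-zero weight on
# both domains, which a single hard label would throw away.
emb = domain_embedding("government vaccine policy debate", centroids)
```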

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 88
2514 Efficient Compact Micro Dielectric Barrier Discharge (DBD) Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems

Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. Zimmerman

Abstract:

Ozone is well known as a powerful, fast-reacting oxidant. Ozone-based processes leave no by-products, as non-reacted ozone reverts to the original oxygen molecule. Therefore, the application of ozone is widely accepted as one of the main directions for the development of sustainable and clean technologies. A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints, high reactivity, and the short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of a mini/micro-scale ozone generator that can be incorporated directly into production units. Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented. At the selected operating conditions, with a residence time of 0.25 s, the ozone generation process is not limited by reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltages starting from 3.5 kV, with an ozone concentration of 5.28E-6 mol/L at 5 kV. This is in line with the data presented in a numerical investigation of the MROG. It was shown that, compared to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure. The MROG construction makes it applicable to submerged and dry systems. With its robust, compact design, the MROG can be used as an incorporated unit in production lines of high complexity.

Keywords: dielectric barrier discharge (DBD), micro reactor, ozone, plasma

Procedia PDF Downloads 330
2513 Interrogating Bishwas: Reimagining a Christian Neighbourhood in Kolkata, India

Authors: Abhijit Dasgupta

Abstract:

This paper explores the everyday lives of Christians residing in a Bengali Christian neighbourhood in Kolkata, termed here the larger Christian para (para meaning neighbourhood in Bengali). Through ethnography and a reading of secondary sources, the paper discerns how Christians across denominations (Protestants, Catholics, and Pentecostals) implicate the role of bishwas (faith and belief) in their interpersonal neighbourhood relations. The paper attempts to capture the role of bishwas in producing, transforming, and revising the meaning of 'neighbourhood' and 'neighbours', and puts forward the argument of the neighbourhood as a theological product. By interrogating and interpreting bishwas through everyday theological discussions and reflections, the paper examines and analyses the ways everyday theology becomes an essential source of power and knowledge for Bengali Christians in reimagining their neighbourhood in contrast to the nearby Hindu neighbourhoods. Drawing on the literature on everyday theology, faith, and belief, the paper reads and analyses various interpretations of theological knowledge across denominations to probe the prominence of bishwas within the Christian community and its role in creating a difference in their place of dwelling. The paper argues that the meaning of the neighbourhood is revisited through prayers, sermons, and biblical verses. At the same time, divisions and fissures are seen between Protestants and Catholics, and also between native Bengali Protestants and non-native Protestant pastors, which informs us about the complexity of theology in constituting everyday life. Thus, the paper addresses theology's role in creating an ethical Christian neighbourhood amidst the everyday tensions and hostilities of diverse religious persuasions, while also looking into the processes through which multiple theological knowledges lead to schism and interdenominational hostilities.
In attempting to answer these questions, the paper brings out the Christians' negotiation with their neighbourhood.

Keywords: anthropology, bishwas, Christianity, neighbourhood, theology

Procedia PDF Downloads 83
2512 Characterization of Chest Pain in Patients Presenting to the Emergency Department of a High-Complexity Health Institution during 2014-2015, Medellin, Colombia

Authors: Jorge Iván Bañol-Betancur, Lina María Martínez-Sánchez, María de los Ángeles Rodríguez-Gázquez, Estefanía Bahamonde-Olaya, Ana María Gutiérrez-Tamayo, Laura Isabel Jaramillo-Jaramillo, Camilo Ruiz-Mejía, Natalia Morales-Quintero

Abstract:

Acute chest pain is a distressing sensation between the diaphragm and the base of the neck, and it represents a diagnostic challenge for any physician in the emergency department. Objective: To establish the main clinical and epidemiological characteristics of patients who presented with chest pain to the emergency department of a private clinic in the city of Medellin during 2014-2015. Methods: Cross-sectional retrospective observational study. The population and sample were patients who consulted for chest pain in the emergency department and met the eligibility criteria. The information was analyzed in SPSS v.21; qualitative variables were described through relative frequencies, and quantitative variables through mean and standard deviation or medians, according to their distribution in the study population. Results: A total of 231 patients were evaluated; the mean age was 49.5 ± 19.9 years, and 56.7% were female. The most frequent pathological antecedents were hypertension (35.5%), diabetes (10.8%), dyslipidemia (10.4%), and coronary disease (5.2%). Regarding pain features, in 40.3% of the patients the pain began abruptly, in 38.2% it had a precordial location, in 20% of cases physical activity acted as a trigger, and in 60.6% it was oppressive. Costochondritis was the most common cause of chest pain among patients with an established etiologic diagnosis, representing 18.2%. Conclusions: Although the clinical features of the pain reported coincide with the clinical presentation of acute coronary syndrome, the most common cause of chest pain in the study population was costochondritis, indicating that it is a differential diagnosis in the approach to patients with acute chest pain.

Keywords: acute coronary syndrome, chest pain, epidemiology, costochondritis

Procedia PDF Downloads 336
2511 Green Synthesis of Selenium Nanoparticles Using Garlic Extract and Their Application for Rapid Detection of Salicylic Acid in Milk

Authors: Kashif Jabbar

Abstract:

Milk adulteration is a global concern, and the current study was planned to synthesize selenium nanoparticles by a green method using a plant extract of garlic, Allium sativum, to characterize the selenium nanoparticles through different analytical techniques, and to apply them as a fast and easy means of detecting salicylic acid in milk. A highly selective, sensitive, and rapid green-synthesis-based sensing of a possible milk adulterant, salicylic acid, is reported here. Salicylic acid interacts with the nanoparticles through strong bonding interactions, interrupting the formation of the selenium nanoparticles, which is confirmed by UV-Vis spectroscopy, scanning electron microscopy, and X-ray diffraction. This interference in the synthesis of the nanoparticles results in a transmittance that decreases with increasing amounts of salicylic acid, showing strong binding of the selenium nanoparticles with the adulterant, thereby permitting fast in-situ detection of salicylic acid in milk, with a limit of detection of 10⁻³ mol and a linear correlation coefficient of 0.9907. In conclusion, colloidal selenium can be synthesized successfully from garlic extract to serve as a probe for fast and cheap testing of milk adulteration.

Keywords: adulteration, green synthesis, selenium nanoparticles, salicylic acid, aggregation

Procedia PDF Downloads 80
2510 Electrical and Structural Properties of Polyaniline-Fullerene Nanocomposite

Authors: M. Nagaraja, H. M. Mahesh, K. Rajanna, M. Z. Kurian, J. Manjanna

Abstract:

In recent years, composites of conjugated polymers with fullerenes (C60) have attracted considerable scientific and technological attention in the field of organic electronics because they possess a novel combination of electrical, optical, ferromagnetic, mechanical, and sensor properties. These properties represent major advances in the design of organic electronic devices. With the addition of C60 to the conjugated polymer matrix, the primary photo-excitation of the conjugated polymer undergoes an ultrafast electron transfer, and it has been demonstrated that fullerene molecules may serve as efficient electron acceptors in polymeric solar cells. The present paper includes systematic studies of the effect of C60 on the electrical, structural, and sensor properties of the polyaniline (PANI) matrix. The polyaniline-fullerene (PANI/C60) composite is prepared by introducing fullerene during the polymerization of aniline, with ammonium persulfate and dodecylbenzene sulfonic acid as oxidant and dopant, respectively. FTIR spectroscopy indicated the interaction between PANI and C60. X-ray diffraction proved the formation of a PANI/C60 complex. SEM images show the highly branched chain structure of PANI in the presence of C60. The conductivity of PANI/C60 was found to be more than ten orders of magnitude higher than that of pure PANI.

Keywords: conductivity, fullerene, nanocomposite, polyaniline

Procedia PDF Downloads 213
2509 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods

Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo

Abstract:

The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have spurred the development of computational methods. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proven an NP-hard problem, due to the complexity of the process, as illustrated by the Levinthal paradox. An alternative is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have become a standard method for predicting protein secondary structure. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, prediction methods based on support vector machines have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work 'Extracting Physicochemical Features to Predict Protein Secondary Structure' (2013).
The developed ANN method uses the same training and testing process that Huang used to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that a comparative analysis can be made directly between the statistical results of each method.
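Since both methods are scored on Q3 accuracy, it may help to show what that metric computes. The sketch below is the standard definition (per-residue agreement over the three states helix, strand, and coil); the six-residue example strings are made up for illustration.

```python
def q3_accuracy(predicted, actual):
    """Q3: fraction of residues whose three-state secondary-structure
    label (H = helix, E = strand, C = coil) is predicted correctly."""
    if len(predicted) != len(actual) or not actual:
        raise ValueError("sequences must be non-empty and equal-length")
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Toy example: five of six residue labels match.
score = q3_accuracy("HHHEEC", "HHHECC")
```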

Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines

Procedia PDF Downloads 614
2508 Standard Model-Like Higgs Decay into Displaced Heavy Neutrino Pairs in U(1)' Models

Authors: E. Accomando, L. Delle Rose, S. Moretti, E. Olaiya, C. Shepherd-Themistocleous

Abstract:

Heavy sterile neutrinos are almost ubiquitous in the class of Beyond the Standard Model scenarios aimed at addressing the puzzle that emerged from the discovery of neutrino flavour oscillations, and hence the need to explain neutrino masses. In particular, they are necessary in a U(1)' enlarged Standard Model (SM). We show that these heavy neutrinos can be rather long-lived, producing distinctive displaced vertices and tracks. Indeed, depending on the actual decay length, they can decay inside a Large Hadron Collider (LHC) detector far from the main interaction point and can be identified in the inner tracking system or the muon chambers, emulated here through the Compact Muon Solenoid (CMS) detector parameters. Among the possible production modes of such heavy neutrinos, we focus on their pair production in the decay of the SM Higgs boson, eventually yielding displaced lepton signatures following the heavy neutrino decays into weak gauge bosons. By employing well-established triggers available for the CMS detector and using the data collected by the end of LHC Run 2, these signatures would prove to be accessible with negligibly small background. Finally, we highlight the importance that the exploitation of new triggers, specifically displaced tri-lepton ones, could have for this displaced vertex search.

Keywords: beyond the standard model, displaced vertex, Higgs physics, neutrino physics

Procedia PDF Downloads 140
2507 The Impact of Level and Consequence of Service Co-Recovery on Post-Recovery Satisfaction and Repurchase Intent

Authors: Chia-Ching Tsai

Abstract:

In service delivery, interpersonal interaction is the key to customer satisfaction; clearly, the human factor is critical in service delivery. Moreover, customers care greatly about the consequences of co-recovery. Thus, this research focuses on service failures caused by other customers and uses a 2x2 factorial design to investigate the impact of the consequence and level of service co-recovery on post-recovery satisfaction and repurchase intent. 150 undergraduates were recruited as participants and randomly assigned to one of the four cells. Each participant was asked to read a scenario and then rate post-recovery satisfaction and repurchase intent. The results show that under the condition of failed co-recovery, the level of co-recovery has no effect on post-recovery satisfaction, while under the condition of successful co-recovery, high-level co-recovery produces significantly higher post-recovery satisfaction than low-level co-recovery. Moreover, post-recovery satisfaction has a significantly positive impact on repurchase intent. In the service delivery system, customers interact with other customers frequently. Therefore, in contrast with the existing literature, this research focuses on service failure caused by other customers. It also supplies a better understanding of customers' views on the consequences of different levels of co-recovery, which is helpful for practitioners seeking to make use of co-recovery.

Keywords: service failure, service co-recovery, consequence of co-recovery, level of co-recovery, post-recovery satisfaction, repurchase intent

Procedia PDF Downloads 415
2506 21st Century Provocation: Modern Slavery, the Implications for Individuals on the Autism Spectrum

Authors: Christina Surmei

Abstract:

Autism Spectrum Disorder (ASD) covers a diverse range of developmental conditions that affect an individual's functionality. ASD is not linear; individuals can present with deficits in social interaction and communication and demonstrate restricted, repetitive patterns of behaviour, interests, or activities. These characteristics may be observed in a variety of ways and range from mild to severe. ASD may include autistic disorder, pervasive developmental disorder not otherwise specified, Asperger's syndrome, or other related pervasive developmental disorders. Modern slavery is defined as 'situations of exploitation that a person cannot refuse or leave because of threats, violence, coercion, and abuse of power or deception'. A review of the literature investigated the prevalence of research regarding ASD and modern slavery. Two universal search engines and five online journals were used as the instruments of inquiry. The results revealed two editorials, one study, and one act, totaling four publications treating ASD and modern slavery as a joint topic. This represents a vast absence of research, even though, as individual topics, research on autism and on modern slavery is abundant. This paper identifies a significant gap in research on ASD and modern slavery and initiates the dialogue to unpack a significant global issue in society today.

Keywords: autism spectrum, education, modern slavery, support

Procedia PDF Downloads 163
2505 Numerical Simulation of Supersonic Gas Jet Flows and Acoustics Fields

Authors: Lei Zhang, Wen-jun Ruan, Hao Wang, Peng-Xin Wang

Abstract:

Jet noise is generated by the rocket exhaust plume during rocket engine testing. A domain decomposition approach is applied to jet noise prediction in this paper: the aeroacoustic problem is split into acoustic source generation and sound propagation, handled in separate physical domains. Large Eddy Simulation (LES) is used to simulate the supersonic jet flow. Based on the simulated flow fields, the sound pressure level distribution of the jet noise is obtained by applying the Ffowcs Williams-Hawkings (FW-H) acoustic equation and a Fourier transform. The results show that complex structures of expansion waves, compression waves, and a turbulent boundary layer can arise from the strong interaction between the gas jet and the ambient air. In addition, the jet core region, the shock cells, and the sound pressure level of the gas jet grow as the nozzle size increases. Importantly, the numerical far-field sound results agree well with experimental directivity measurements.

Keywords: supersonic gas jet, Large Eddy Simulation (LES), acoustic noise, Ffowcs Williams-Hawkings (FW-H) equations, nozzle size

Procedia PDF Downloads 406
2504 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa

Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam

Abstract:

Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects. Such aspects include ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has widely remained elusive due to the complexity of species structure and distribution. Therefore, the objective of the current study was to examine the utility of the advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. The study offers relatively accurate information that is important for forest managers making informed decisions about management and conservation protocols for TTS.
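The accuracy figures quoted above follow the standard relation OA + total disagreement = 100%, both derived from a confusion matrix. As a small sketch with a made-up matrix (illustrative only, not the Dukuduku data), both quantities can be computed like this:

```python
# Overall accuracy (OA) and total disagreement from a confusion matrix,
# the figures used to compare the SVM and ANN maps. The matrix below is
# a made-up 3-class example (rows = reference, columns = classified),
# not the study's Dukuduku data.
confusion = [
    [50, 5, 5],
    [4, 40, 6],
    [3, 7, 30],
]

total = sum(sum(row) for row in confusion)                  # all samples
correct = sum(confusion[i][i] for i in range(len(confusion)))  # diagonal

overall_accuracy = 100.0 * correct / total   # percent correctly classified
total_disagreement = 100.0 - overall_accuracy  # percent misclassified
```

With per-class row and column sums from the same matrix, producer's and user's accuracies (omission/commission errors) can be derived in the same way.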

Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines

Procedia PDF Downloads 509
2503 Meet Automotive Software Safety and Security Standards Expectations More Quickly

Authors: Jean-François Pouilly

Abstract:

This study addresses the growing complexity of embedded systems and the critical need for secure, reliable software. Traditional cybersecurity testing methods, often conducted late in the development cycle, struggle to keep pace. This talk explores how formal methods, integrated with advanced analysis tools, empower C/C++ developers to 1) proactively address vulnerabilities and bugs: formal methods and abstract interpretation techniques identify potential weaknesses early in the development process, reducing the reliance on penetration and fuzz testing in later stages; 2) streamline development by focusing on the bugs that matter: with close to no false positives and flaws caught earlier, the need for rework and retesting is minimized, leading to faster development cycles, improved efficiency, and cost savings; and 3) enhance software dependability: combining static analysis by abstract interpretation, with full context sensitivity and hardware memory awareness, allows a more comprehensive understanding of potential vulnerabilities, leading to more dependable and secure software. This approach aligns with industry best practices (ISO 26262 or ISO 21434) and empowers C/C++ developers to deliver robust, secure embedded systems that meet the demands of today's and tomorrow's applications. We illustrate the approach with the TrustInSoft analyzer, showing how it accelerates verification for complex cases, reduces user fatigue, and improves developer efficiency, cost-effectiveness, and software cybersecurity. In summary, integrating formal methods and sound analyzers enhances software reliability and cybersecurity, streamlining development in an increasingly complex environment.
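The abstract interpretation idea mentioned above can be illustrated with a toy interval domain (a purely illustrative Python sketch, unrelated to the TrustInSoft analyzer's actual implementation): a variable's possible values are tracked as a range, letting an analyzer prove a buffer access safe, or flag it, without ever running the program.

```python
# Toy interval domain: track [lo, hi] bounds for a variable instead of
# concrete values -- the core idea behind abstract interpretation.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sound addition: the result bounds cover every concrete sum.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Sound multiplication: take the extremes of all corner products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def index_in_bounds(idx, size):
    # Soundly prove that every concrete index covered by idx is a valid
    # subscript for an array of the given size.
    return 0 <= idx.lo and idx.hi < size

# x comes from input known to lie in [0, 9]; the program computes 2*x + 1.
x = Interval(0, 9)
index = x * Interval(2, 2) + Interval(1, 1)
# index is [1, 19]: provably safe for a 20-element array,
# flagged for a 10-element one -- with zero false positives on this domain.
```

Industrial analyzers use far richer domains (pointers, memory regions, context sensitivity), but the sound over-approximation principle is the same.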

Keywords: safety, cybersecurity, ISO 26262, ISO 21434, formal methods

Procedia PDF Downloads 11
2502 Narrative Constructs and Environmental Engagement: A Textual Analysis of Climate Fiction’s Role in Shaping Sustainability Consciousness

Authors: Dean J. Hill

Abstract:

This paper conducts an in-depth textual analysis of the cli-fi genre, examining how writing in the genre expresses and facilitates environmental consciousness through narrative form. It begins by situating cli-fi within the literary continuum of ecological narratives and identifying the genre's distinctive textual characteristics and thematic preoccupations. The paper shows how cli-fi transforms the esoteric nature of climate science into credible narrative forms through language use, metaphorical constructs, and narrative framing, and how descriptive and figurative language in depictions of nature and disaster makes climate change vivid and emotionally resonant. The work also points out the dialogic nature of cli-fi, in which characters and narrators wrestle with the ethical dilemmas of environmental destruction, prompting readers to challenge and re-evaluate their own standpoints on sustainability and ecological responsibility. The paper proceeds to analyse narrative voice and its role in eliciting empathy and reader involvement with the ecological material; by examining how different narratorial perspectives shape the reader's emotional and cognitive response to the text, the study demonstrates the power of perspective in developing intimacy with the genre's central concerns. Finally, the emotional arc of cli-fi narratives, running through themes of loss, hope, and resilience, is analysed for how these elements marshal public feeling and discourse into action on climate change. The textual complexity of cli-fi thus not only shows the hard edge of climate reality but also influences public perception and behaviour toward a more sustainable future.

Keywords: cli-fi genre, ecological narratives, emotional arc, narrative voice, public perception

Procedia PDF Downloads 29
2501 Effects of Nitrogen and Arsenic on Antioxidant Enzyme Activities and Photosynthetic Pigments in Safflower (Carthamus tinctorius L.)

Authors: Mostafa Heidari

Abstract:

Nitrogen fertilization has played a significant role in increasing crop yields and alleviating hunger and malnutrition worldwide. However, excess heavy metals such as arsenic can interfere with growth and reduce grain yield. To investigate the effects of different concentrations of arsenic and nitrogen fertilizer on photosynthetic pigments and antioxidant enzyme activities in safflower (cv. Goldasht), a factorial experiment in a randomized complete block design with three replications was conducted at the University of Zabol. Arsenic treatments were A1 = 0 (control), A2 = 30, A3 = 60, and A4 = 90 mg kg⁻¹ soil from a Na₂HAsO₄ source, combined with three nitrogen levels, N1 = 75, N2 = 150, and N3 = 225 kg ha⁻¹, from a urea source. Results showed that arsenic had a significant effect on the activity of antioxidant enzymes: as arsenic increased from A1 to A4, the activities of ascorbate peroxidase (APX) and guaiacol peroxidase (GPX) increased while catalase (CAT) decreased. In this study, arsenic had no significant effect on chlorophyll a, chlorophyll b, or carotenoid content. Nitrogen and the arsenic × nitrogen interaction had significant effects on CAT and GPX, but not on APX. The highest GPX activity was obtained in the A4N3 treatment. Nitrogen increased chlorophyll a, chlorophyll b, and carotenoid content.

Keywords: arsenic, physiological parameters, antioxidant enzymes, nitrogen

Procedia PDF Downloads 436
2500 Organization of the Purchasing Function for Innovation

Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević

Abstract:

A substantial body of scholarly and practitioner-oriented literature on innovation orientation has shown positive effects on firm performance. Myriad factors influence and enhance innovation, but the literature reports that new product innovations account for an average of 14 percent of sales revenue across firms. If one thing has changed in innovation management during the last decade, it is the growing reliance on external partners. As a consequence, a new task arises for purchasing: firms need to understand which suppliers have high potential to contribute to the firm's innovativeness and which do not. The purchasing function is extremely important, as it handles on average 50% or more of a firm's expenditures. In the 1990s the purchasing department was largely seen as a transaction-oriented, clerical function, but today purchasing integration provides a formal interface mechanism between purchasing and the other functions within the company. The purchasing function has to be organized differently to enable a firm's innovation potential. Innovations, however, are inherently risky. There is behavioral risk (that a partner will take advantage of the other party), technological risk arising from the complexity of products, manufacturing processes, and incoming materials, and finally market risk, which ultimately judges the value of the innovation. These risks are investigated in this work, since the literature suggests that the higher the technological risk, the more centralized the purchasing function becomes as an interface with other supply chain members. Most research on the organization of the purchasing function has been done through case studies of innovative firms. This work tests whether results obtained by the case study method hold on a large data set: 1493 companies from 25 countries, collected in the GMRG 4 survey, serve as the basis for the analysis.

Keywords: purchasing function organization, innovation, technological risk, GMRG 4 survey

Procedia PDF Downloads 478
2499 Impulsivity and Nutritional Restrictions in BED

Authors: Jaworski Mariusz, Owczarek Krzysztof, Adamus Mirosława

Abstract:

Binge eating disorder (BED) is one of the three main eating disorders, besides anorexia and bulimia nervosa. BED is characterized by a loss of control over the quantity of food consumed and the absence of compensatory behaviors, such as induced vomiting or purging. Studies highlight that certain personality traits may contribute to the severity of symptoms in eating disorders. The aim of this study is to analyze the relationship between psychological variables (impulsivity and urgency) and nutritional restrictions in BED. The study included two groups: 35 women with BED aged 18 to 28, and a control group of 35 women without eating disorders aged 18 to 28. The ED-1 questionnaire was used to assess the severity of impulsivity, urgency, and nutritional restrictions, and the obtained data were standardized. Statistical analyses were performed using SPSS 21 software. The severity of impulsivity was higher in patients with BED than in the control group. A relationship between impulsivity and nutritional restrictions in BED was observed only when the level of urgency was included in the analysis; when urgency was omitted, no relationship between impulsivity and nutritional restrictions emerged. Impulsivity was negatively related to the level of urgency. This study suggests the need to analyze the interaction between impulsivity and urgency and their joint relationship with dietary behavior in BED, especially nutritional restrictions; analysis of single, isolated traits may give erroneous results.

Keywords: binge eating disorder, impulsivity, nutritional restrictions, urgency

Procedia PDF Downloads 464
2498 Pre-Service Science Teachers’ Attitudes about Teaching Science Courses at the Faculty of Education, Lebanese University: An Exploratory Case Study

Authors: Suzanne El Takach

Abstract:

The research study explored pre-service teachers’ attitudes towards six courses taught in the third through sixth semesters at the Faculty of Education, Lebanese University, during the academic year 2015-2016. Students assessed science teaching courses that are essential for preparing teachers of science at the primary and elementary levels: Action Research I and II in Teaching Science, New Trends in Teaching Science, Teaching Science I and II for the Elementary Level, and Teaching Science for Early Childhood Education. Qualitative and quantitative data were gathered from a) a survey questionnaire of 23 closed-ended items, some of Likert-scale type, aimed at collecting students’ opinions on the courses in terms of teaching, assessment, and class interaction (N = 102 respondents), and b) a second questionnaire of 10 questions administered to a sample of 39 students in their last semester in science and mathematics, to learn more about the skills students gained and their suggestions for new courses and improvements. Students were satisfied with the science teaching courses and reported that they gained good pedagogical content knowledge, such as lesson planning, awareness of students’ misconceptions, and the use of various teaching and assessment strategies.

Keywords: assessment in higher education, LMD program, pre-service teachers’ attitudes, pre-PCK skills

Procedia PDF Downloads 144
2497 The Effect of Cognitively-Induced Self-Construal and Direct Behavioral Mimicry on Prosocial Behavior

Authors: Czar Matthew Gerard Dayday, Danielle Marie Estrera, Philippe Jefferson Galban, Gabrielle Marie Heredia

Abstract:

The study examined the effects of self-construal and direct mimicry on prosocial behavior. It used a 2 (self-construal: independent or interdependent) x 2 (mimicry: mimicry or non-mimicry) between-subjects factorial design, in which self-construal was cognitively induced through a story with varying pronouns (we, us, ourselves vs. me, I, myself) and prosocial behavior was measured by the amount of money donated to a fabricated advocacy. The research was conducted with a convenience sample of 88 undergraduate students (58 females, 33 males) aged 16 to 26 from the University of the Philippines, Diliman. Results show that neither factor has a significant main effect on prosocial behavior. Their interaction likewise has no significant effect on prosocial behavior, with the no-mimicry x independent condition ranking highest in amount of money donated and the mimicry x interdependent condition ranking lowest. These results can be attributed to multiple factors, including the collectivist orientation and sense of kapwa of Filipinos, a role reversal in the methodology and the absence of the chameleon effect, and a weak priming of self-construal with respect to self-relatedness.

Keywords: behavior, mimicry, prosocial, self-construal

Procedia PDF Downloads 273
2496 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), focusing on the choice of loss function and optimizer. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. Using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that the same LivDet subset is used across all training and testing for each model; this way, performance on unseen data can be compared across all models. The best CNN (AlexNet), with the appropriate loss function and optimizer, gains more than 3% in performance over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' parameter counts and average error rates alongside accuracy, to identify the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves very efficient. A practical, deployed anti-spoofing system should use a small amount of memory and run very fast with high anti-spoofing performance; for our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to the final model.
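The claim that different loss functions produce different errors on identical predictions can be seen in a minimal, framework-free sketch (plain Python with illustrative numbers; the paper's actual models and data are not reproduced here):

```python
import math

def cross_entropy(y_true, p):
    # Binary cross-entropy for one prediction; p is the predicted
    # probability of the positive class, clamped away from 0 and 1.
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

def hinge(y_true_pm, score):
    # Hinge loss; y_true_pm is +1/-1 and score is the raw margin output.
    return max(0.0, 1.0 - y_true_pm * score)

# The same prediction penalized by each loss:
ce_confident = cross_entropy(1, 0.9)  # correct and confident: small penalty
h_small_margin = hinge(1, 0.8)        # correct, but margin < 1: still penalized
h_big_margin = hinge(1, 1.5)          # margin >= 1: zero penalty, gradient stops
```

Cross-entropy keeps pushing probabilities toward 1 even for correct predictions, while hinge loss goes silent once the margin exceeds 1, which is one reason identical networks trained under different losses end up with different error profiles.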

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 132
2495 Biophilic Design Strategies: Four Case-Studies from Northern Europe

Authors: Carmen García Sánchez

Abstract:

The UN's 17 Sustainable Development Goals – specifically nº 3 and nº 11 – urgently call for new architectural design solutions at different design scales to increase human contact with nature and thereby promote the health and wellbeing of primarily urban communities. The discipline of interior design offers an important alternative to large-scale nature-inclusive actions, which are not always possible due to space limitations. These circumstances provide an immense opportunity to integrate biophilic design, a complex, emerging, and under-developed approach that pursues sustainable design strategies for increasing the human-nature connection through the experience of the built environment. Biophilic design explores the diverse ways humans are inherently inclined to affiliate with nature, attach meaning to it, and derive benefit from the natural world. It represents a biological understanding of architecture whose categorization is still in progress. The internationally renowned Danish domestic architecture built in the 1950s and early 1960s – a golden age of Danish modern architecture – left a leading legacy that has greatly influenced the domestic sphere and has led the world in terms of good design and welfare. This study examines how four existing post-war domestic buildings establish a dialogue with nature and its variations over time. The case studies reveal both memorable and unique biophilic resources through sophisticated and original design expressions, where transformative processes connect the users to the natural setting and reflect fundamental ways in which they attach meaning to the place. In addition, fascinating analogies with traditional Japanese architecture in terms of this interaction with nature inform the research; they embody prevailing lessons for our time today.
The research methodology is based on a thorough literature review combined with a phenomenological analysis of how these case studies foster the connection between humans and nature. Fieldwork was conducted throughout varying seasons to document the multi-sensory perception of nature's transformations (via sight, touch, sound, smell, time, and movement) as a core research strategy. The cases' most outstanding features have been studied according to the following key parameters: 1. Space: 1.1. relationships (itineraries); 1.2. measures/scale; 2. Context: landscape reading in different weather and seasonal conditions; 3. Tectonics: 3.1. constructive joints and element assembly; 3.2. structural order; 4. Materiality: 4.1. finishes; 4.2. colors; 4.3. tactile qualities; 5. Daylight interplay. Departing from an artistic-scientific exploration, this study provides sustainable, practical design strategies, perspectives, and inspiration to boost humans' contact with nature through the experience of the interior built environment. Some strategies are associated with access to outdoor space or require ample space, while others can thrive in a dense urban context without direct access to the natural environment. The objective is not only to produce knowledge but to phase biophilic design into the built environment, expanding its theory and practice into a new dimension. Its long-term vision is to efficiently enhance the health and well-being of urban communities through daily interaction with nature.

Keywords: sustainability, biophilic design, architectural design, interior design, nature, Danish architecture, Japanese architecture

Procedia PDF Downloads 93
2494 Integration of Polarization States and Color Multiplexing through a Singular Metasurface

Authors: Tarik Sipahi

Abstract:

Photonics research continues to push the boundaries of optical science, and metasurface technology has emerged as a transformative force in this domain. This work presents a unified metasurface design tailored for efficient polarization and color control in optical systems. The proposed unified metasurface serves as a single, nanoengineered optical element capable of simultaneous polarization modulation and color encoding. Leveraging principles from metamaterials and nanophotonics, the design allows unprecedented control over the behavior of light at the subwavelength scale. The metasurface's spatially varying architecture enables seamless manipulation of both polarization states and color wavelengths, paving the way for a paradigm shift in optical system design. The advantages of this unified metasurface are diverse and impactful: by consolidating functions that traditionally require multiple optical components, the design streamlines optical systems, reducing complexity and enhancing overall efficiency. This approach is particularly promising for applications where compactness, weight, and multifunctionality are crucial. Furthermore, the design addresses key challenges in optical system design, offering a versatile solution for applications demanding compact, lightweight structures, and its capability to simultaneously manipulate polarization and color opens new possibilities in diverse technological fields. The research contributes to the evolution of optical science by showcasing the transformative potential of metasurface technology and its role in reshaping optical system architectures, and it provides a foundation for future innovations in compact and efficient optical devices.

Keywords: metasurface, nanophotonics, optical system design, polarization control

Procedia PDF Downloads 49
2493 Simple and Scalable Thermal-Assisted Bar-Coating Process for Perovskite Solar Cell Fabrication in Open Atmosphere

Authors: Gizachew Belay Adugna

Abstract:

Perovskite solar cells (PSCs) have developed rapidly as an emerging photovoltaic technology; however, fast device degradation, due mainly to the organic components, chiefly the hole-transporting material (HTM), and the lack of a robust, reliable upscaling process for photovoltaic modules have hindered commercialization. Herein, HTM molecules with and without fluorine-substituted cyclopenta[2,1-b;3,4-b’]dithiophene derivatives (HYC-oF, HYC-mF, and HYC-H) were developed for PSC applications. The fluorinated HTM molecules exhibited better hole mobility and overall charge extraction in the devices, mainly due to strong molecular interaction and packing in the film. Thus, the highest power conversion efficiency (PCE) of 19.64%, with improved long-term stability, was achieved for PSCs based on the HYC-oF HTM. Moreover, the fluorinated HYC-oF demonstrated excellent film processability on a larger-area substrate (10 cm × 10 cm), prepared sequentially with the perovskite absorber underlayer via a scalable bar-coating process in ambient air, and delivered a higher PCE of 18.49% compared to conventional spiro-OMeTAD (17.51%). These results demonstrate a facile route to HTMs for stable, efficient PSCs and future industrial-scale PV modules.

Keywords: perovskite solar cells, upscaling film coating, power conversion efficiency, solution processing

Procedia PDF Downloads 67