Search results for: Poisson generalized linear model
842 Aesthetics and Semiotics in Theatre Performance
Authors: Păcurar Diana Istina
Abstract:
Structured in three chapters, the article attempts an X-ray of theatrical aesthetics, correctly understood through the emotions generated in the intimate structure of the spectator, which precede the triggering of the viewer’s perception, and not through the unfortunately common superposition of the notion of aesthetics with the style in which a theater show is built. The first chapter contains a brief history of the appearance of the word aesthetic, the formulation of definitions for this new term, as well as its connections with the notions of semiotics, in particular with the perception of the transmitted message. Starting with Aristotle and Plato and reaching Magritte, these interventions should not be interpreted to mean that the two scientific concepts can merge into one discipline. The perception that is the object of everyone’s analysis, the understanding of meaning, the decoding of the messages sent, and the triggering of feelings that culminate in pleasure, shaping the aesthetic vision, are some of the elements that keep semiotics and aesthetics distinct, even though they share many methods of analysis. The compositional processes of aesthetic representation and symbolic formation are analyzed in the second part of the paper from perspectives that may or may not include historical, cultural, social, and political processes. Aesthetics and the organization of its symbolic process are treated with expressive activity taken into account. The last part of the article explores the notion of aesthetics in applied theater, more specifically in the theater show. Taking the postmodern approach that aesthetics applies both to the creation of an artifact and to the reception of that artifact, the intervention of these elements in the theatrical system must be emphasized – that is, the analysis of the problems arising in the stages of creation, presentation, and reception, by the public, of the theater performance. The aesthetic process is triggered involuntarily, simultaneously with, or before the moment when people perceive the meaning of the messages transmitted by the work of art. This finding makes the mental process of aesthetics similar or related to that of semiotics. However individually beauty is perceived, the mechanisms of its production can be reduced to two steps. The first step presents similarities to Peirce’s model, but the process between signifier and signified additionally stimulates the related memory of the evaluation of beauty, adding to the meanings related to the signification itself. The second step is a process of comparison, in which one examines whether the object being looked at matches the accumulated memory of beauty. Therefore, even though aesthetics is derived from the conceptual part, the judgment of beauty and, more than that, moral judgment come to be so important to the social activities of human beings that they evolve as a visible process independent of other conceptual contents.
Keywords: aesthetics, semiotics, symbolic composition, subjective joints, signifying, signified
Procedia PDF Downloads 109
841 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System
Authors: Nareshkumar Harale, B. B. Meshram
Abstract:
The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate and effective. We evolved the network trust architecture from trust-untrust to Zero Trust. With Zero Trust, essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, despite the inclusion of advanced security controls, remains exposed to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of major attacks. With the explosion of cyber security threats such as viruses, worms, rootkits, malware, and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol, routing protocols, and security protocols. It thereby forms the basis for detection of attack classes and applies signature-based matching for known cyberattacks and data mining based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. The unsupervised learning algorithm applied to network audit data trails results in unknown intrusion detection. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design
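To illustrate the association-rule-mining step described above, here is a minimal sketch of an Apriori-style pass over toy audit records; the record fields, support and confidence thresholds are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of association-rule mining over network audit records.
from itertools import combinations
from collections import Counter

audit_records = [
    {"proto=tcp", "flag=SYN", "port=80"},
    {"proto=tcp", "flag=SYN", "port=443"},
    {"proto=tcp", "flag=SYN", "port=80", "dur=short"},
    {"proto=udp", "port=53"},
]

def frequent_itemsets(records, min_support=0.5, max_len=3):
    """Return itemsets (as sorted tuples) whose support meets min_support."""
    n = len(records)
    counts = Counter()
    for rec in records:
        for k in range(1, max_len + 1):
            for combo in combinations(sorted(rec), k):
                counts[combo] += 1
    return {iset: c / n for iset, c in counts.items() if c / n >= min_support}

def rules(freq, min_conf=0.8):
    """Derive rules lhs -> rhs whose confidence meets min_conf."""
    out = []
    for iset, supp in freq.items():
        if len(iset) < 2:
            continue
        for i in range(1, len(iset)):
            for lhs in combinations(iset, i):
                lhs_supp = freq.get(tuple(sorted(lhs)))
                if lhs_supp and supp / lhs_supp >= min_conf:
                    rhs = tuple(sorted(set(iset) - set(lhs)))
                    out.append((lhs, rhs, supp / lhs_supp))
    return out

for lhs, rhs, conf in rules(frequent_itemsets(audit_records)):
    print(f"{lhs} -> {rhs} (confidence {conf:.2f})")
```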
Procedia PDF Downloads 226
840 The Moderating Role of Test Anxiety in the Relationships Between Self-Efficacy, Engagement, and Academic Achievement in College Math Courses
Authors: Yuqing Zou, Chunrui Zou, Yichong Cao
Abstract:
Previous research has revealed relationships between self-efficacy (SE), engagement, and academic achievement among students in Western countries, but these relationships remain unknown in college math courses among college students in China. In addition, previous research has shown that test anxiety has a direct effect on engagement and academic achievement. However, how test anxiety affects the relationships between SE, engagement, and academic achievement is still unknown. In this study, the authors aimed to explore the mediating roles of behavioral engagement (BE), emotional engagement (EE), and cognitive engagement (CE) in the association between SE and academic achievement, and the moderating role of test anxiety in college math courses. Our hypotheses were that the association between SE and academic achievement is mediated by engagement and that test anxiety plays a moderating role in this association. To explore the research questions, the authors collected data through self-reported surveys among 147 students at a northwestern university in China. The Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich, 1991), the metacognitive strategies questionnaire (Wolters, 2004), and the Engagement versus Disaffection with Learning scale (Skinner et al., 2008) were used to assess SE, CE, and BE and EE, respectively. R software was used to analyze the data. The main analyses were reliability and validity analysis of the scales, descriptive statistics of the measured variables, correlation analysis, regression analysis, structural equation modeling (SEM), and moderated mediation analysis to examine the structural relationships between the variables simultaneously. The SEM analysis indicated that student SE was positively related to BE, EE, CE, and academic achievement, and that BE, EE, and CE were all positively associated with academic achievement. That is, as the authors expected, higher levels of SE led to higher levels of BE, EE, and CE and to greater academic achievement, and higher levels of BE, EE, and CE led to greater academic achievement. In addition, the moderated mediation analysis found that the path from SE to academic achievement in the model was significant, as expected, as was the moderating effect of test anxiety in the SE-achievement association. Specifically, test anxiety was found to moderate the association between SE and BE, the association between SE and CE, and the association between EE and achievement. The authors investigated possible mediating effects of BE, EE, and CE in the association between SE and academic achievement, and all indirect effects were found to be significant. As for the magnitude of the mediations, behavioral engagement was the most important mediator in the SE-achievement association. This study has implications for college teachers, educators, and students in China regarding ways to promote academic achievement in college math courses, including increasing self-efficacy and engagement and lessening test anxiety toward math.
Keywords: academic engagement, self-efficacy, test anxiety, academic achievement, college math courses, behavioral engagement, cognitive engagement, emotional engagement
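As a rough illustration of the moderated-mediation idea described above (self-efficacy → behavioral engagement → achievement, with test anxiety moderating the first path), the sketch below fits two regressions on synthetic data; the variable names and effect sizes are assumptions, and the authors' actual analysis used SEM in R rather than this simplified approach.

```python
# Simplified moderated-mediation check on synthetic data (not the study data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 147
se = rng.normal(size=n)                      # self-efficacy
anxiety = rng.normal(size=n)                 # test anxiety (moderator)
be = 0.5 * se - 0.2 * se * anxiety + rng.normal(scale=0.5, size=n)
achievement = 0.6 * be + 0.2 * se + rng.normal(scale=0.5, size=n)
df = pd.DataFrame(dict(se=se, anxiety=anxiety, be=be, achievement=achievement))

# Path a (moderated): SE and the SE x anxiety interaction predicting engagement
m_mediator = smf.ols("be ~ se * anxiety", data=df).fit()
# Path b and direct effect c': engagement and SE predicting achievement
m_outcome = smf.ols("achievement ~ be + se", data=df).fit()

a_low = m_mediator.params["se"] + m_mediator.params["se:anxiety"] * (-1)
a_high = m_mediator.params["se"] + m_mediator.params["se:anxiety"] * (+1)
b = m_outcome.params["be"]
print("indirect effect at low anxiety :", a_low * b)
print("indirect effect at high anxiety:", a_high * b)
```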
Procedia PDF Downloads 92
839 Criteria to Access Justice in Remote Criminal Trial Implementation
Authors: Inga Žukovaitė
Abstract:
This work aims to present postdoctoral research on remote criminal proceedings in court, intended to streamline the proceedings and, at the same time, ensure the effective participation of the parties in criminal proceedings and the court's obligation to administer substantive and procedural justice. This study tests the hypothesis that remote criminal proceedings do not in themselves violate the fundamental principles of criminal procedure; however, their implementation must ensure the right of the parties to effective legal remedies and a fair trial, and only then may it address the issues of procedural economy, speed, and flexibility/functionality of the application of technologies. In order to ensure that changes in the regulation of criminal proceedings are in line with fair trial standards, this research will answer the questions of what conditions – first of all legal, and only then organisational – are required for remote criminal proceedings to ensure respect for the parties and enable their effective participation in public proceedings, to create conditions for quality legal defence and its accessibility, and to give the party the correct impression that they are heard and that the court is impartial and fair. It also presents the results of empirical research in the courts of Lithuania, conducted using the interview method. The research will serve as a basis for developing a theoretical model for remote criminal proceedings in the EU to ensure a balance between the intention to have innovative, cost-effective, and flexible criminal proceedings and the positive obligation of the State to ensure the rights of participants in proceedings to just and fair criminal proceedings. Moreover, developments in criminal proceedings also keep changing the image of the court itself; therefore, the paper will create preconditions for future research on the impact of remote criminal proceedings on trust in courts. The study aims at laying down the fundamentals for theoretical models of a remote hearing in criminal proceedings and at making recommendations for the safeguarding of human rights, in particular the rights of the accused, in such proceedings. The following criteria are relevant for the remote form of criminal proceedings: the purpose of the judicial instance, the legal position of participants in proceedings, their vulnerability, and the nature of the required legal protection. The content of the study consists of: 1. Identification of the factual and legal prerequisites for a decision to organise the entire criminal proceedings by remote means or to carry out one or several procedural actions by remote means. 2. After analysing the legal regulation and practice concerning the application of the elements of remote criminal proceedings, distinguishing the main legal safeguards for protection of the rights of the accused to ensure: (a) the right of effective participation in a court hearing; (b) the right of confidential consultation with the defence counsel; (c) the right of participation in the examination of evidence, in particular material evidence, as well as the right to question witnesses; and (d) the right to a public trial.
Keywords: remote criminal proceedings, fair trial, right to defence, technology progress
Procedia PDF Downloads 71
838 Prediction of Sound Transmission Through Framed Façade Systems
Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul
Abstract:
With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations set higher standards for acoustic comfort in buildings by mitigating noise transmission from the building envelope exterior to the interior. Framed window, door, and façade systems are the leading choice for modern fenestration construction, providing demonstrated weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both the glass and the frames. Glass usually covers the majority of the exposed surface and is therefore the main path of sound energy transmission, while frames in modern façade systems have become slimmer for aesthetic reasons and contribute only a minimal percentage of the exposed surface. Nevertheless, frames can provide substantial transmission paths for sound because much less mass lies across those paths, and they can therefore become the limiting factor in the acoustic performance of the whole system. There are various methodologies and numerical programs that can accurately predict the acoustic performance of either glass or frames. However, due to the vast difference in size and dimension between frame and glass within the same system, there is no satisfactory theoretical approach or affordable simulation tool in current practice to assess the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is very time consuming and highly costly, and different labs might provide slightly different results because of differences in test chambers, sample mounting, and test operations, which significantly constrains the early-phase design of framed façade systems. To address this dilemma, this study provides an effective analytical methodology to predict the acoustic performance of framed façade systems, based on a vast amount of acoustic test results on glass, frames, and whole façade systems consisting of both. Further test results validate that the current model is able to accurately predict the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology is mainly developed from façade systems with aluminum frames, it can be easily extended to systems with frames of other materials such as steel, PVC or wood.
Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system
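As a rough illustration of how a whole-system estimate can combine glass and frame performance, the sketch below applies the standard area-weighted composite transmission-loss formula; this is an assumption for illustration, not the authors' analytical model.

```python
# Standard composite sound-transmission-loss estimate for a glass + frame assembly.
import math

def composite_tl(elements):
    """elements: list of (area_m2, transmission_loss_dB) tuples."""
    total_area = sum(a for a, _ in elements)
    # Convert each element's TL to a transmission coefficient tau = 10^(-TL/10),
    # average the coefficients weighted by area, then convert back to decibels.
    weighted_tau = sum(a * 10 ** (-tl / 10) for a, tl in elements) / total_area
    return -10 * math.log10(weighted_tau)

# Example: 4.0 m2 of glazing at 38 dB combined with 0.5 m2 of frame at 30 dB
print(round(composite_tl([(4.0, 38.0), (0.5, 30.0)]), 1), "dB")
```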
Procedia PDF Downloads 59
837 Using Chatbots to Create Situational Content for Coursework
Authors: B. Bricklin Zeff
Abstract:
This research explores the development and application of a specialized chatbot tailored for a nursing English course, with the primary objective of augmenting student engagement through situational content and responsiveness to key expressions and vocabulary. Introducing the chatbot, elucidating its purpose, and outlining its functionality are crucial initial steps in the research study, as they provide a comprehensive foundation for understanding the design and objectives of the specialized chatbot developed for the nursing English course. These elements establish the context for subsequent evaluations and analyses, enabling a nuanced exploration of the chatbot's impact on student engagement and language learning within the nursing education domain. The subsequent exploration of the intricate language model development process underscores the fusion of scientific methodologies and artistic considerations in this application of artificial intelligence (AI). Practical principles that extend beyond AI and education are considered, tailored for educators and curriculum developers in nursing. Insights into leveraging technology for enhanced language learning in specialized fields are addressed, along with potential applications of similar chatbots in other professional English courses. The overarching vision is to illuminate how AI can transform language learning, rendering it more interactive and contextually relevant. The presented chatbot is a tangible example, equipping educators with a practical tool to enhance their teaching practices. The methodologies employed in this research encompass surveys and discussions to gather feedback on the chatbot's usability, effectiveness, and potential improvements. The chatbot system was integrated into a nursing English course, facilitating the collection of valuable feedback from participants. Significant findings from the study underscore the chatbot's effectiveness in encouraging more verbal practice of the target expressions and vocabulary necessary for performance in role-play assessment strategies. This outcome emphasizes the practical implications of integrating AI into language education in specialized fields. This research holds significance for educators and curriculum developers in the nursing field, offering insights into integrating technology for enhanced English language learning. The study's major findings contribute valuable perspectives on the practical impact of the chatbot on student interaction and verbal practice. Ultimately, the research sheds light on the transformative potential of AI in making language learning more interactive and contextually relevant, particularly within specialized domains like nursing.
Keywords: chatbot, nursing, pragmatics, role-play, AI
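As a rough illustration of the kind of responsiveness to key expressions described above, the sketch below matches a learner's utterance against target phrases; the phrases and replies are invented examples, not the course chatbot's actual content.

```python
# Minimal keyword-triggered response loop for situational practice (illustrative only).
RESPONSES = {
    "how are you feeling": "Good question! Now try asking about pain on a 0-10 scale.",
    "blood pressure": "Nice use of target vocabulary. Can you explain the reading to the patient?",
    "take your medication": "Remember to confirm the dose and timing with the patient.",
}

def reply(utterance: str) -> str:
    text = utterance.lower()
    for key, response in RESPONSES.items():
        if key in text:            # respond to the first target expression found
            return response
    return "Can you rephrase that using one of this week's target expressions?"

print(reply("I will take your blood pressure now."))
```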
Procedia PDF Downloads 63
836 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus for each site) – regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms for n segments exist. The reduction also includes preprocessing – constructing segments from the polygons' sides – and postprocessing – constructing each polygon's locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, the interior of the polygon lies on one side of each segment, and the polygon is obviously included in its locus. Using these properties in the VD construction algorithm offers a way to reduce computation. The article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is still based on a reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is given an orientation such that the interior of the polygon lies to the left of it. The proposed algorithm constructs the VD for this set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons; the concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with “medium” polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, rather than in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and has been tested on a large number of examples. The high reliability and efficiency of the algorithm is also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case – a set of sites formed by polygons.
Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
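As an illustration of the preprocessing step described above, the sketch below converts a polygon into oriented vertex and edge sites so that the interior lies to the left of every directed edge; counter-clockwise input order is assumed, and the sweepline construction itself is not reproduced.

```python
# Preprocessing sketch: polygons -> oriented vertex and edge sites.
from dataclasses import dataclass

@dataclass
class EdgeSite:
    start: tuple   # (x, y)
    end: tuple     # (x, y); the polygon interior lies to the left of start->end

def signed_area(poly):
    # Shoelace formula: positive for counter-clockwise vertex order.
    return 0.5 * sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))

def polygon_to_sites(poly):
    if signed_area(poly) < 0:          # enforce counter-clockwise orientation
        poly = poly[::-1]
    vertex_sites = list(poly)
    edge_sites = [EdgeSite(a, b) for a, b in zip(poly, poly[1:] + poly[:1])]
    return vertex_sites, edge_sites

verts, edges = polygon_to_sites([(0, 0), (4, 0), (4, 3), (0, 3)])
print(len(verts), "vertex sites,", len(edges), "oriented edge sites")
```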
Procedia PDF Downloads 174
835 In Vitro Evaluation of a Chitosan-Based Adhesive to Treat Bone Fractures
Authors: Francisco J. Cedano, Laura M. Pinzón, Camila I. Castro, Felipe Salcedo, Juan P. Casas, Juan C. Briceño
Abstract:
Complex fractures located in articular surfaces are challenging to treat, and their reduction with conventional treatments could compromise the functionality of the affected limb. An adhesive material to treat those fractures is desirable for orthopedic surgeons. This adhesive must be biocompatible and have high adhesion to the bone surface in an aqueous environment. The proposed adhesive is based on chitosan, given its adhesive and biocompatibility properties. Chitosan is mixed with calcium carbonate and hydroxyapatite, which contribute to structural support and gel-like behavior, and glutaraldehyde is used as a cross-linking agent to keep the adhesive's mechanical performance in an aqueous environment. This work aims to evaluate the rheological, adhesion strength, and biocompatibility properties of the proposed adhesive using in vitro tests. The gelification process of the adhesive was monitored by oscillatory rheometry in an ARG-2 TA Instruments rheometer, using a parallel plate geometry of 22 mm and a gap of 1 mm. Time sweep experiments were conducted at 1 Hz frequency, 1% strain and 37°C from 0 to 2400 s. Adhesion strength was measured using a butt joint test with bovine cancellous bone fragments as substrates. The test was conducted at 5 minutes, 20 minutes and 24 hours after curing the adhesive under water at 37°C. Biocompatibility was evaluated by a cytotoxicity test in a fibroblast cell culture using the MTT assay and SEM. Rheological results showed that the average gelification time of the adhesive is 820±107 s and that it reaches storage modulus magnitudes up to 10⁶ Pa; the adhesive shows solid-like behavior. The butt joint test showed 28.6 ± 9.2 kPa of tensile bond strength for the adhesive cured for 24 hours, and there was no significant difference in adhesion strength between 20 minutes and 24 hours. MTT showed 70 ± 23 % active cells on the sixth day of culture; this percentage is estimated with respect to a positive control (cells only, with culture medium and bovine serum). High-vacuum SEM observation made it possible to localize and study the morphology of the fibroblasts present in the adhesive. All fibroblasts captured by SEM presented the typical flattened structure, with filopodia growing attached to the adhesive surface. This project reports an adhesive based on chitosan that is biocompatible, as shown by the high percentage of active cells in the MTT test and corroborated by SEM. It also has adhesive properties under conditions that model the clinical application, and the adhesion strength does not decrease between 5 minutes and 24 hours.
Keywords: bioadhesive, bone adhesive, calcium carbonate, chitosan, hydroxyapatite, glutaraldehyde
Procedia PDF Downloads 320
834 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
The metabolism of streams is an indicator of ecosystem disturbance due to the influences of the catchment on the structure of the water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism of streams located in the Brazilian savannah, Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR) and Canchim (CA). Every two months, water temperature, pH and conductivity are measured with a multiparameter probe, and nitrogen and phosphorus forms are determined according to standard methods. Canopy cover percentages are estimated in situ with a spherical densiometer, and stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetically Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three samplings were carried out, in October and December 2015 and February 2016 (the next will be in April, June and August 2016), and the results from the first two periods are already available. The mean water temperatures in the streams were 20.0 ± 0.8 °C (Oct) and 20.7 ± 0.5 °C (Dec). In general, electrical conductivity values were low (ES: 20.5 ± 3.5 µS/cm; BR: 5.5 ± 0.7 µS/cm; CA: 33 ± 1.4 µS/cm). The mean pH values were 5.0 (BR), 5.7 (ES) and 6.4 (CA). The mean concentrations of total phosphorus were 8.0 µg/L (BR), 66.6 µg/L (ES) and 51.5 µg/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 µg/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 ± 6 L/s (ES), 11.4 ± 3 L/s (BR) and 2.4 ± 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR) and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, with negative net primary production). GPP varied from 0-0.4 g/m²·d (in Oct and Dec), and R varied from 0.9-22.7 g/m²·d (Oct) and from 0.9-7 g/m²·d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of these ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of the metabolism of pristine streams can help define natural reference conditions of trophic state.
Keywords: low-order streams, metabolism, net primary production, trophic state
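As an illustration of the one-station approach described above, the sketch below forward-integrates the diel dissolved-oxygen balance dO/dt = GPP(t) − ER + k2·(Osat − O); the light curve, rates and saturation value are illustrative assumptions expressed per unit volume, not the study's fitted values.

```python
# One-station diel dissolved-oxygen model (forward simulation with assumed rates).
import math

def simulate_do(o0=7.5, osat=8.0, gpp_daily=2.0, er_daily=5.0, k2_daily=20.0,
                dt_min=10.0, hours=24):
    dt = dt_min / (60 * 24)                      # time step in days
    steps = int(hours * 60 / dt_min)
    o, series = o0, []
    for i in range(steps):
        t = i * dt * 24                          # hour of day
        # Half-sine light curve between 06:00 and 18:00 drives photosynthesis
        par = math.sin(math.pi * (t - 6) / 12) if 6 <= t <= 18 else 0.0
        gpp = gpp_daily * par * math.pi          # scaled so GPP integrates to gpp_daily
        do_dt = gpp - er_daily + k2_daily * (osat - o)
        o += do_dt * dt
        series.append(o)
    return series

diel = simulate_do()
print(f"min DO {min(diel):.2f} mg/L, max DO {max(diel):.2f} mg/L")
```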
Procedia PDF Downloads 256
833 An Energy Integration Study While Utilizing Heat of Flue Gas: Sponge Iron Process
Authors: Venkata Ramanaiah, Shabina Khanam
Abstract:
Enormous potential for saving energy is available in coal-based sponge iron plants, as these are associated with a high percentage of energy wastage per unit of sponge iron production. In the present paper, an energy integration option is proposed for a coal-based sponge iron plant of 100 tonnes per day production capacity operated in India using the SL/RN (Stelco-Lurgi/Republic Steel-National Lead) process. Its important equipment consists of the rotary kiln, rotary cooler, dust settling chamber, after burning chamber, evaporating cooler, electrostatic precipitator (ESP), wet scraper and chimney. Principles of process integration are used in the proposed option. It accounts for preheating the kiln inlet streams, kiln feed and slinger coal, up to 170°C using the waste gas exiting the ESP. Further, the kiln outlet stream is cooled from 1020°C to 110°C using kiln air. The working areas in the plant where energy is being lost and can be conserved are identified. Detailed material and energy balances are carried out around the sponge iron plant, and a modified model is developed to find the coal requirement of the proposed option, based on hot utility, heat of reactions, kiln feed and air preheating, radiation losses, dolomite decomposition, the heat required to vaporize the coal volatiles, etc. As coal is used both as a utility and as a process stream, an iterative approach is used in the solution methodology to compute coal consumption. Further, the water consumption, operating cost, capital investment, waste gas generation, profit, and payback period of the modification are computed. Along with these, operational aspects of the proposed design are also discussed. To recover and integrate the waste heat available in the plant, three gas-solid heat exchangers and four insulated ducts, each with one FD fan, are additionally installed. Thus, the proposed option requires a total capital investment of $0.84 million. Preheating of the kiln feed, slinger coal and kiln air streams reduces coal consumption by 24.63%, which in turn reduces waste gas generation by 25.2% in comparison to the existing process. Moreover, a 96% reduction in water consumption is also observed, which is an added advantage of the modification. Consequently, the total profit is found to be $2.06 million/year with a payback period of only 4.97 months. The energy efficiency factor (EEF), which is the percentage of the maximum energy that can be saved through the design, is found to be 56.7%. Results of the proposed option are also compared with the literature and found to be in good agreement.
Keywords: coal consumption, energy conservation, process integration, sponge iron plant
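As a quick check of the economics quoted above, the sketch below reproduces the simple payback and coal-saving arithmetic from the figures in the abstract; the formulas are the standard definitions, assumed rather than taken from the authors' model.

```python
# Simple payback and saving arithmetic using the figures quoted in the abstract.
capital_investment = 0.84e6        # USD, additional equipment for heat integration
annual_profit = 2.06e6             # USD per year from the proposed modification

payback_months = capital_investment / annual_profit * 12
print(f"simple payback ~ {payback_months:.1f} months")   # ~4.9 months, close to the reported 4.97

saving_fraction = 0.2463           # 24.63% reduction in coal consumption
print(f"coal requirement relative to the existing process: {1.0 - saving_fraction:.4f}")
```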
Procedia PDF Downloads 142
832 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has been used for a long time as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to address this problem, carbon and nitrogen oxides need to be reduced through lean burning, modified combustors and fuel dilution. A numerical investigation has been carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends, and the effects on flame temperature, shape, velocity, and concentrations of radicals and emissions were observed. It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and geometry of the tube led to an increase in the temperature and CO2 emissions; the highest temperatures were obtained under lean conditions (equivalence ratio 0.5-0.9). The addition of hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow, for varying equivalence ratios. The production of NO is reduced because combustion occurs in a leaner state, which helps in solving environmental problems.
Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames
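As an illustration of how the inlet mixtures varied above can be specified, the sketch below computes mole fractions for a hydrocarbon/hydrogen/air blend at a chosen equivalence ratio from global stoichiometry; the fuels and blend shown are assumptions, not the exact cases simulated.

```python
# Inlet mixture composition for a fuel blend + air at a given equivalence ratio.
O2_PER_MOLE_FUEL = {"CH4": 2.0, "C3H8": 5.0, "H2": 0.5}   # global stoichiometry

def mole_fractions(fuel_blend, phi):
    """fuel_blend: {species: mole fraction within the fuel}; phi: equivalence ratio."""
    o2_stoich = sum(x * O2_PER_MOLE_FUEL[f] for f, x in fuel_blend.items())
    o2 = o2_stoich / phi                 # lean (phi < 1) means more oxidizer
    n2 = o2 * 0.79 / 0.21                # nitrogen accompanying the air
    total = 1.0 + o2 + n2
    comp = {f: x / total for f, x in fuel_blend.items()}
    comp.update({"O2": o2 / total, "N2": n2 / total})
    return comp

# 90% methane + 10% hydrogen (by volume of fuel) at a lean equivalence ratio of 0.7
print(mole_fractions({"CH4": 0.9, "H2": 0.1}, phi=0.7))
```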
Procedia PDF Downloads 113
831 Instruction Program for Human Factors in Maintenance, Addressed to the People Working in Colombian Air Force Aeronautical Maintenance Area to Strengthen Operational Safety
Authors: Rafael Andres Rincon Barrera
Abstract:
Safety in global aviation plays a preponderant role in organizations that seek to avoid accidents in an attempt to preserve their most precious assets (the people and the machines). Human factors-based programs have been shown to be effective in managing human-generated risks. The importance of training on human factors in maintenance has not gone unnoticed by the Colombian Air Force (COLAF). This research, which has a mixed quantitative, qualitative and descriptive approach, deals with the absence of a structured instruction program in human factors in aeronautical maintenance at the COLAF, which would serve as a tool to improve operational safety in its military air units. The research shows the trends and evolution of human factors programs in aeronautical maintenance through the analysis of a data matrix with 33 sources taken from different databases concerning the incorporation of these types of programs in the aeronautical industry over the last 20 years, as well as the improvements in operational safety reported after their implementation. Likewise, it compiles the various normative guides in force from world aeronautical authorities for training in these programs, establishing a matrix of methodologies that may be applicable to develop a training program in human factors in maintenance. Subsequently, it illustrates the design, validation, and application of an instrument for measuring knowledge of human factors in maintenance at the COLAF, which includes topics on human factors (HF), the safety management system (SMS), and aeronautical maintenance regulations at the COLAF. With the information obtained, the statistical analysis shows the knowledge areas to be strengthened among the staff in preparation for the instruction program. Data triangulation based on the applicable methods and on the weakest aspects found among maintenance personnel yields a cross-matching of variables by color coding, thus indicating the contents of a training program for human factors in aeronautical maintenance, adjusted to the competencies expected to be developed with the staff in a curricular format established by the COLAF. Among the most important findings are: that the different authors dealing with human factors in maintenance agree that there is no standard model for its instruction and implementation, but that it must be adapted to the needs of the organization; that safety culture increased in the companies which incorporated programs on human factors in maintenance; that, from the data obtained with the knowledge measurement instrument, the level of knowledge of human factors in maintenance is MEDIUM-LOW, with a score of 61.79%; and, finally, that there is an opportunity to improve operational safety at the COLAF through the implementation of the training program on human factors in maintenance for the technicians working in this area.
Keywords: Colombian air force, human factors, safety culture, safety management system, triangulation
Procedia PDF Downloads 134
830 Khiaban (the Street) as an Ancient Percept of the Iranian Urban Landscape: An Aesthetic Reading of Lalehzar Street, the First Modern Khiaban in Iran
Authors: Mohammad Atashinbar
Abstract:
Lalehzar was one of the main streets of central Tehran in the late Qajar and first Pahlavi periods (1880-1940) and a center of attention for the government. It was a natural promenade during the last decade of the reign of Nasser al-Din Shah (1880-1895). However, the street lost its prosperous status under the second Pahlavi and evolved from a modern cultural street into a commercial corridor. Lalehzar's decline was the result of the migration of the upper class from the inner city to the northern part of Tehran and the consequent transfer of amenities and luxury goods with them. It seems that during Lalehzar's six decades of prosperity, this khiâbân received an aesthetic look that made it enjoyable and appreciated by the people of Tehran. Various post-revolutionary urban management measures have been taken to revive Lalehzar and improve the quality of its urban life. Since the beginning of the Safavid era, the khiâbân has been accompanied by the concept of urban space, and its characteristics are explained by reference to the main axis of the Persian Garden, with rows of trees, streams, and a line of flowers on both sides. The construction of a street inside the city as an urban space benefits from a mental concept of a spiritual and exciting space, especially in the forms common in the Persian Garden. Before that, the khiâbân was a religious and mythical concept, and we can even say that the mastery of this concept led to its appearance in the garden. In Tehran, Lalehzar Street is a gateway to modernity. The aesthetic changes in Lalehzar Street, inspired by Nasser al-Din Shah's journey to Europe around 1870, coincided with the changes in architectural and urban landscape movements around the world between 1880 and 1940. The Shah was impressed by modernist urbanism and, in particular, by the Champs-Élysées in Paris. A tree-lined promenade with the hallmarks of the Persian Garden fitted Nasser al-Din Shah's mental image of beauty: in that image, the main axis of the Persian Garden has the characteristics of a promenade. Therefore, the origins of the aesthetic of Lalehzar Street lie in the aesthetics of the khiâbân. Accepting that the Champs-Élysées served as a model for Lalehzar, it seems that the Shah wanted to associate the Champs-Élysées with Lalehzar and highlight its landscape aspects by building this street. On the premise that percepts have their own aesthetic, this proposal seeks to analyze the aesthetic evolution of the khiâbân as a percept towards the street as a component of the urban landscape in Lalehzar. The research reviews the aesthetic aspects of Lalehzar between 1880 and 1940 using iconographic analysis, based on the available historical data, to find the leading aesthetic principles of this street. The aesthetic view of Lalehzar as an artwork is one of the main achievements of this study.
Keywords: Lalehzar, aesthetics, percept, Tehran, street
Procedia PDF Downloads 149
829 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification
Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi
Abstract:
Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection has focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound to and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, using a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features is eliminated at each step. The proposed algorithm uses the clustering algorithm bound to and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets, Outex, NewBarktex, Parquet, Stex and USPtex, demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix
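As an illustration of the three-step strategy described above, the sketch below filters features by class correlation, groups the survivors into correlation-based clusters, and selects sequentially while discarding cluster-mates; the thresholds and correlation measures are simplifying assumptions, not the separability measure used by the authors.

```python
# Sketch of clustering-guided sequential feature selection (illustrative measures).
import numpy as np

def select_features(X, y, relevance_thr=0.1, redundancy_thr=0.8, k=5):
    n_features = X.shape[1]
    # Step 1: keep only features sufficiently correlated with the class labels
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    kept = np.where(relevance >= relevance_thr)[0]

    # Step 2: cluster kept features by pairwise absolute correlation
    corr = np.abs(np.corrcoef(X[:, kept], rowvar=False))
    cluster_of = {}
    for i, fi in enumerate(kept):
        for j, fj in enumerate(kept):
            if j < i and corr[i, j] >= redundancy_thr:
                cluster_of[fi] = cluster_of[fj]      # join an existing cluster
                break
        else:
            cluster_of[fi] = fi                      # start a new cluster

    # Step 3: sequential selection, dropping cluster-mates of each picked feature
    order = kept[np.argsort(-relevance[kept])]
    selected, blocked = [], set()
    for f in order:
        if cluster_of[f] in blocked:
            continue
        selected.append(int(f))
        blocked.add(cluster_of[f])
        if len(selected) == k:
            break
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20)); y = (X[:, 0] + X[:, 3] > 0).astype(float)
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=200)      # redundant copy of feature 0
print(select_features(X, y))
```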
Procedia PDF Downloads 134
828 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged the object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
Keywords: classification, data mining, Instagram, remote monitoring, sharks
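As an illustration of the transfer-learning species classifier described above, the sketch below attaches a new classification head to a pretrained ImageNet backbone; the backbone choice, image size and training call are assumptions, not the authors' Shark Detector code.

```python
# Transfer-learning sketch: frozen ImageNet backbone + new species classification head.
import tensorflow as tf

NUM_SPECIES = 45
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False                      # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# train_ds would be a tf.data.Dataset of (image, species_label) pairs, e.g. built with
# tf.keras.utils.image_dataset_from_directory("shark_images", image_size=(224, 224))
# model.fit(train_ds, epochs=10)
```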
Procedia PDF Downloads 120
827 Improving Screening and Treatment of Binge Eating Disorders in Pediatric Weight Management Clinic through a Quality Improvement Framework
Authors: Cristina Fernandez, Felix Amparano, John Tumberger, Stephani Stancil, Sarah Hampl, Brooke Sweeney, Amy R. Beck, Helena H Laroche, Jared Tucker, Eileen Chaves, Sara Gould, Matthew Lindquist, Lora Edwards, Renee Arensberg, Meredith Dreyer, Jazmine Cedeno, Alleen Cummins, Jennifer Lisondra, Katie Cox, Kelsey Dean, Rachel Perera, Nicholas A. Clark
Abstract:
Background: Adolescents with obesity are at higher risk of disordered eating than the general population. Detection of eating disorders (ED) is difficult, and screening questionnaires may aid in their early detection. Our team's prior efforts focused on increasing ED screening rates to ≥90% using a validated 10-question adolescent binge eating disorder screening questionnaire (ADO-BED). This aim was achieved. We then aimed to improve treatment plan initiation for patients ≥12 years of age who screen positive for BED within our weight management clinic (WMC) from 33% to 70% within 12 months. Methods: Our WMC is within a tertiary-care, free-standing children's hospital. A3, an improvement framework, was used, and a multidisciplinary team (physicians, nurses, registered dietitians, psychologists, and exercise physiologists) was created. The outcome measure was documentation of treatment plan initiation for those who screen positive (goal 70%). The process measure was the ADO-BED screening rate of WMC patients (goal ≥90%). Plan-Do-Study-Act (PDSA) cycle 1 included provider education on the current literature and treatment plan initiation based upon ADO-BED responses. PDSA 2 involved increasing documentation of treatment plans and retraining providers on the process. The pre-defined treatment plans were: 1) repeat screen in 3-6 months, 2) resources provided only, or 3) comprehensive multidisciplinary weight management team evaluation. Run charts monitored impact over time. Results: Within 9 months, 166 patients were seen in the WMC. The process measure showed sustained performance above goal (mean 98%). The outcome measure showed special cause improvement from a mean of 33% to 100% (n=31). Of the treatment plans provided, 45% were Plan 1, 4% Plan 2, and 46% Plan 3. Conclusion: Through a multidisciplinary improvement team approach, we maintained sustained ADO-BED screening performance and achieved our project aim prior to our 12-month timeline. Our efforts may serve as a model for other multidisciplinary WMCs. Next steps may include expanding the project scope to other WM programs.
Keywords: obesity, pediatrics, clinic, eating disorder
Procedia PDF Downloads 58
826 Efficacy and Safety of Computerized Cognitive Training Combined with SSRIs for Treating Cognitive Impairment Among Patients with Late-Life Depression: A 12-Week, Randomized Controlled Study
Authors: Xiao Wang, Qinge Zhang
Abstract:
Background: This randomized, open-label study examined the therapeutic effects of computerized cognitive training (CCT) combined with selective serotonin reuptake inhibitors (SSRIs) on cognitive impairment among patients with late-life depression (LLD). Method: Study data were collected from May 5, 2021, to April 21, 2023. Outpatients who met the diagnostic criteria for major depressive disorder according to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (i.e., a total score on the 17-item Hamilton Depression Rating Scale (HAMD-17) ≥ 18 and a total score on the Montreal Cognitive Assessment scale (MOCA) < 26) were randomly assigned to receive up to 12 weeks of CCT and SSRIs treatment (n=57) or SSRIs and control treatment (n=61). The primary outcome was the change in Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) scores from baseline to week 12 between the two groups. The secondary outcomes included changes in the HAMD-17 score, Hamilton Anxiety Scale (HAMA) score and Neuropsychiatric Inventory (NPI) score. Mixed model repeated measures (MMRM) analysis was performed on the modified intention-to-treat (mITT) and completer populations. Results: The full analysis set (FAS) included 118 patients (CCT and SSRIs group, n=57; SSRIs and control group, n=61). Over the 12-week study period, the reduction in the ADAS-Cog total score was significant (P < 0.001) in both groups, and MMRM analysis revealed a significantly greater reduction in ADAS-Cog total scores (i.e., greater cognitive improvement) from baseline to post-treatment in the CCT and SSRIs group than in the SSRIs and control group [F(1,115) = 13.65, least-squares mean difference [95% CI]: −2.77 [−3.73, −1.81], p < 0.001]. There were significantly greater improvements in depressive symptoms (measured by the HAMD-17) in the CCT and SSRIs group than in the control group [MMRM, estimated mean difference between groups −3.59 [−5.02, −2.15], p < 0.001]. The least-squares mean changes in the HAMA and NPI scores between baseline and week 8 were greater in the CCT and SSRIs group than in the control group (all P < 0.05). There was no significant difference between groups in response and remission rates using the last-observation-carried-forward (LOCF) method (all P > 0.05). The most frequent adverse events (AEs) in both groups were dry mouth, somnolence, and constipation, and there was no significant difference in the incidence of adverse events between the two groups. Conclusions: CCT combined with SSRIs was efficacious and well tolerated in LLD patients with cognitive impairment.
Keywords: late-life depression, cognitive function, computerized cognitive training, SSRIs
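As an illustration of an MMRM-style analysis like the one described above, the sketch below fits a mixed model with a treatment-by-visit interaction on synthetic data; the data are invented, and a full MMRM would typically use an unstructured within-patient covariance rather than the simple random intercept shown here.

```python
# Mixed-model sketch for repeated ADAS-Cog measurements (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for pid in range(118):
    group = "CCT+SSRI" if pid < 57 else "SSRI"
    subj_effect = rng.normal(scale=2.0)                     # between-patient variability
    for week in (0, 4, 8, 12):
        drop = (0.25 if group == "CCT+SSRI" else 0.12) * week   # assumed improvement slopes
        rows.append(dict(subject=pid, group=group, week=week,
                         adas_cog=20 + subj_effect - drop + rng.normal()))
df = pd.DataFrame(rows)

# Fixed effects: week, group and their interaction; random intercept per patient
model = smf.mixedlm("adas_cog ~ week * C(group)", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```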
Procedia PDF Downloads 50
825 Insect Cell-Based Models: Australian Sheep Blowfly Lucilia Cuprina Embryo Primary Cell Line Establishment and Transfection
Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody
Abstract:
Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is, therefore, critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for the sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi, and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to be capable of affording dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes showing knockdown. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, aiming to provide an effective insect cell model for testing RNAi efficacy in preliminary assessments and screening. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, providing promise for its future use in enhancing animal protection.
Keywords: Lucilia cuprina, primary cell line establishment, RNA interference, insect cell transfection
Procedia PDF Downloads 72
824 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties
Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi
Abstract:
Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, there are only a limited number of comprehensive studies aimed at assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is the curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, this study aimed to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising the formulation of functional foods and more descriptive nutritional information for potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to simulated in vitro digestion, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for the development of predictive models capable of estimating bioaccessibility based on specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. In fact, macronutrient content emerged as a very informative explanatory variable for bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of 0.97 R²) for the majority of cross-validated test formulations (LOOCV of 0.92 R²). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids. The study lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling
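As an illustration of the leave-one-out validation reported above, the sketch below computes a LOOCV R² for a simple regression from formulation composition on synthetic data; the model, features and data are assumptions, not the authors' Bayesian hierarchical model.

```python
# Leave-one-out cross-validation sketch for a bioaccessibility regression model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 40
X = rng.uniform(size=(n, 3))          # e.g. assumed fat, protein, carbohydrate fractions
bioaccessibility = 0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.05, size=n)

# Each formulation is predicted by a model trained on all the others
pred = cross_val_predict(LinearRegression(), X, bioaccessibility, cv=LeaveOneOut())
print("LOOCV R2:", round(r2_score(bioaccessibility, pred), 3))
```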
Procedia PDF Downloads 66823 Exploring the Entrepreneur-Function in Uncertainty: Towards a Revised Definition
Authors: Johan Esbach
Abstract:
The entrepreneur has traditionally been defined through various historical lenses, emphasising individual traits, risk-taking, speculation, innovation, and firm creation. However, these definitions often fail to address the dynamic nature of the modern entrepreneurial function, which responds to unpredictable uncertainties and transitions to routine management as certainty is achieved. This paper proposes a revised definition, positioning the entrepreneur as a dynamic function rather than a human construct: one that emerges to address specific uncertainties in economic systems but fades once uncertainty is resolved. By examining historical definitions and their limitations, including the works of Cantillon, Say, Schumpeter, and Knight, this paper identifies a gap in the literature and develops a generalised definition of the entrepreneur. The revised definition challenges conventional thought by shifting focus from static attributes such as alertness, traits, and firm creation to a dynamic role that includes reliability, adaptation, scalability, and adaptability. The methodology employs a mixed approach, combining theoretical analysis and case study examination to explore the dynamic nature of the entrepreneurial function in relation to uncertainty. The selection of case studies includes companies like Airbnb, Uber, Netflix, and Tesla, as these firms demonstrate a clear transition from entrepreneurial uncertainty to routine certainty. The data from the case studies are then analysed qualitatively, focusing on the patterns of the entrepreneurial function across the selected companies. These results are then validated using quantitative analysis derived from an independent survey. The primary finding of the paper will validate the entrepreneur as a dynamic function rather than a static, human-centric role. In considering the transition from uncertainty to certainty in companies like Airbnb, Uber, Netflix, and Tesla, the study shows that the entrepreneurial function emerges explicitly to address market, technological, or social uncertainties. Once these uncertainties are resolved and certainty in the operating environment is established, the need for the entrepreneurial function ceases, giving way to routine management and business operations. The paper emphasises the need for a definitive model that responds to the temporal and contextualised nature of the entrepreneur. In adopting the revised definition, the entrepreneur is positioned to play a crucial role in the reduction of uncertainties within economic systems. Once the uncertainties are addressed, certainty is manifested in new combinations or new firms. Finally, the paper outlines policy implications for fostering environments that enable the entrepreneurial function and for transition theory. Keywords: dynamic function, uncertainty, revised definition, transition
Procedia PDF Downloads 19822 Acceptance and Commitment Therapy for Social Anxiety Disorder in Adolescence: A Manualized Online Approach
Authors: Francisca Alves, Diana Figueiredo, Paula Vagos, Luiza Lima, Maria do Céu Salvador, Daniel Rijo
Abstract:
In recent years, Acceptance and Commitment Therapy (ACT) has been shown to be effective in the treatment of numerous anxiety disorders, including social anxiety disorder (SAD). However, limited evidence exists on its therapeutic gains for adolescents with SAD. The current work presents a weekly 10-session manualized online ACT approach to adolescent SAD, being the first study to do so in a clinical sample of adolescents. The intervention, ACT@TeenSAD, addresses the six proposed processes of psychological inflexibility (i.e., experiential avoidance, cognitive fusion, lack of values clarity, unworkable action, dominance of the conceptualized past and future, attachment to the conceptualized self) in social situations relevant to adolescents (e.g., doing a presentation). It is organized into four modules. The first module explores the role of psychological (in)flexibility in SAD (sessions 1 and 2), addressing psychoeducation (i.e., functioning of the mind) according to ACT, the development of an individualized model, and creative hopelessness. The second module focuses on the foundations of psychological flexibility (sessions 3, 4, and 5), specifically on the development and practice of strategies to promote clarification of values, contact with the present moment, the observing self, defusion, and acceptance. The third module encompasses psychological flexibility in action (sessions 6, 7, 8, and 9), encouraging committed action based on values in social situations relevant to the adolescents. The fourth module focuses on the revision of gains and relapse prevention (session 10). The intervention further includes two booster sessions after therapy has ended (3- and 6-month follow-ups) that aim to review the continued practice of learned abilities and to plan for their future application to potentially anxious social events. As part of an ongoing clinical trial, the intervention will be assessed on its feasibility with adolescents diagnosed with SAD and on its therapeutic efficacy based on a longitudinal design including pretreatment, posttreatment, and 3- and 6-month follow-ups. If promising, the findings may support the online delivery of ACT interventions for SAD, contributing to increased treatment availability for adolescents. The availability of an effective therapeutic approach will be helpful not only for adolescents who face obstacles (e.g., distance) when attending face-to-face sessions but particularly for adolescents with SAD, who are usually more reluctant to seek specialized treatment in public or private health facilities. Keywords: acceptance and commitment therapy, social anxiety disorder, adolescence, manualized online approach
Procedia PDF Downloads 157821 Investigation on Correlation of Earthquake Intensity Parameters with Seismic Response of Reinforced Concrete Structures
Authors: Semra Sirin Kiris
Abstract:
Nonlinear dynamic analysis is permitted for structures without any restrictions. The important issue is the selection of the design earthquake for the analyses, since quite different responses may be obtained using ground motion records from the same general area, even when they result from the same earthquake. In seismic design codes, the method requires scaling earthquake records to a specified hazard level based on the site response spectrum. Many studies have indicated that this selection requirement can cause a large scatter in response, and that ground motion characteristics obtained in a different manner may demonstrate better correlation with peak seismic response. For this reason, the influence of eleven different ground motion parameters on the peak displacement of reinforced concrete systems is examined in this paper. From 7020 nonlinear time history analyses of single-degree-of-freedom systems, the most effective earthquake parameters are given for the ranges of initial periods and strength ratios of the structures. In this study, a hysteresis model for reinforced concrete called Q-hyst is used, which does not take into account strength and stiffness degradation. The post-yielding to elastic stiffness ratio is taken as 0.15. The initial period T ranges from 0.1s to 0.9s with a 0.1s interval, and three different strength ratios are used for the structures. The 260 earthquake records selected all have magnitudes greater than M=6. The earthquake parameters related to the energy content, duration, or peak values of the ground motion records are PGA (Peak Ground Acceleration), PGV (Peak Ground Velocity), PGD (Peak Ground Displacement), MIV (Maximum Incremental Velocity), EPA (Effective Peak Acceleration), EPV (Effective Peak Velocity), teff (Effective Duration), A95 (Arias Intensity-based Parameter), SPGA (Significant Peak Ground Acceleration), ID (Damage Factor), and Sa (Spectral Acceleration). Observing the correlation coefficients between the ground motion parameters and the peak displacement of the structures, different earthquake parameters play a role in peak displacement demand depending on the ranges formed by the initial period and the strength ratio of the reinforced concrete systems. The influence of Sa tends to decrease for high values of the strength ratio and T=0.3s-0.6s. The ID and PGD are not evaluated as measures of earthquake effect, since a high correlation with displacement demand is not observed. The influence of A95 is high for T=0.1s but low for higher values of T and strength ratio. PGA, EPA, and SPGA show the highest correlation for T=0.1s, but their effectiveness decreases with high T. Considering the whole range of structural parameters, MIV is the most effective parameter. Keywords: earthquake parameters, earthquake resistant design, nonlinear analysis, reinforced concrete
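As an illustration of the correlation analysis described above, the sketch below computes the correlation coefficient between each candidate intensity measure and the peak SDOF displacement. The random data, the column names, and the use of pandas are illustrative assumptions standing in for the results of the 7020 nonlinear time history analyses; in the full study this step would be repeated for each period and strength-ratio bin.

```python
import numpy as np
import pandas as pd

# Hypothetical results table: one row per ground motion record, columns are a few
# intensity measures plus the peak displacement from a nonlinear SDOF analysis.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "PGA": rng.random(260), "PGV": rng.random(260),
    "MIV": rng.random(260), "A95": rng.random(260),
    "Sa_T": rng.random(260),
    "peak_disp": rng.random(260),
})

# Pearson correlation of each candidate parameter with the peak displacement demand
corr = df.drop(columns="peak_disp").corrwith(df["peak_disp"])
print(corr.sort_values(ascending=False))
```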
Procedia PDF Downloads 150820 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, quantum deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed and compared on colored images to determine how much better they are than classical models. Only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of their processing speed and accuracy when used to perform classification on colored classes. Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
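A minimal sketch of the hybrid encode-rotate-measure pattern described above is shown below, using PennyLane (which the abstract names as the simulator). The specific circuit, namely angle encoding of a small pixel patch, an entangling layer, and Pauli-Z expectation values returned to the classical side, is an illustrative assumption and not the authors' actual QCNN architecture.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_feature_map(pixels):
    # Encode a small patch of the greyscaled image as rotation angles
    qml.AngleEmbedding(pixels, wires=range(n_qubits), rotation="Y")
    # Entangling layer standing in for the quantum "convolution" step
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    qml.RY(0.1, wires=0)  # example rotation; trainable weights omitted for brevity
    # Measurements are read out on the classical side, as in the hybrid workflow
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

patch = np.array([0.1, 0.5, 0.9, 0.3])   # hypothetical 2x2 pixel patch scaled to [0, pi]
features = quantum_feature_map(patch)
print(features)
```

The extracted expectation values would then feed a classical classifier head, which is the sense in which the approach is "hybrid".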
Procedia PDF Downloads 128819 Medication Side Effects: Implications on the Mental Health and Adherence Behaviour of Patients with Hypertension
Authors: Irene Kretchy, Frances Owusu-Daaku, Samuel Danquah
Abstract:
Hypertension is the leading risk factor for cardiovascular diseases and a major cause of death and disability worldwide. This study examined whether psychosocial variables influenced patients’ perception and experience of side effects of their medicines, how they coped with these experiences, and the impact on mental health and adherence to conventional hypertension therapies. Methods: A hospital-based mixed methods study, using quantitative and qualitative approaches, was conducted on hypertensive patients. Participants were asked about side effects, medication adherence, common psychological symptoms, and coping mechanisms with the aid of standard questionnaires. Information from the quantitative phase was analyzed with the Statistical Package for the Social Sciences (SPSS) version 20. The interviews from the qualitative study were audio-taped with a digital audio recorder, manually transcribed, and analyzed using thematic content analysis. The themes originated from the participant interviews a posteriori. Results: The experiences of side effects – such as palpitations, frequent urination, recurrent bouts of hunger, erectile dysfunction, dizziness, cough, and physical exhaustion – were categorized as no/low (39.75%), moderate (53.0%), and high (7.25%). Significant relationships between depression (χ² = 24.21, p < 0.0001), anxiety (χ² = 42.33, p < 0.0001), stress (χ² = 39.73, p < 0.0001) and side effects were observed. The adjusted results from a logistic regression model for this association are reported – depression [OR = 1.9 (1.03 – 3.57), p = 0.04], anxiety [OR = 1.5 (1.22 – 1.77), p < 0.001], and stress [OR = 1.3 (1.02 – 1.71), p = 0.04]. Side effects significantly increased the probability of individuals being non-adherent [OR = 4.84 (95% CI 1.07 – 1.85), p = 0.04], with social factors, media influences, and the attitudes of primary caregivers further explaining this relationship. The personal adoption of medication-modifying strategies, espousing the use of complementary and alternative treatments, and interventions made by clinicians were the main forms of coping with side effects. Conclusions: Results from this study show that, contrary to a biomedical approach, the experience of side effects has biological, social, and psychological interrelations. The results offer further support for a multi-disciplinary approach to healthcare in which all forms of expertise are incorporated into health provision and patient care. Additionally, medication side effects should be considered a possible cause of non-adherence among hypertensive patients; addressing this problem from a biopsychosocial perspective in any intervention may improve adherence and, in turn, blood pressure control. Keywords: biopsychosocial, hypertension, medication adherence, psychological disorders
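For readers interested in how adjusted odds ratios of this kind are typically obtained, the following is a minimal sketch of a logistic regression that exponentiates coefficients into ORs with 95% confidence intervals. The synthetic data, variable names, and the choice of statsmodels are assumptions for illustration; the sketch does not reproduce the study's actual dataset or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-level data (1 = symptom/experience present)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "depression": rng.binomial(1, 0.3, 400),
    "anxiety": rng.binomial(1, 0.4, 400),
    "stress": rng.binomial(1, 0.35, 400),
    "side_effects": rng.binomial(1, 0.6, 400),  # moderate/high side-effect experience
})

# Logistic regression of side-effect experience on psychological symptoms
X = sm.add_constant(df[["depression", "anxiety", "stress"]])
model = sm.Logit(df["side_effects"], X).fit()

# Adjusted odds ratios with 95% confidence intervals
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios)
```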
Procedia PDF Downloads 371818 Increasing the Dialogue in Workplaces Enhances the Age-Friendly Organisational Culture and Helps Employees Face Work-Related Dilemmas
Authors: Heli Makkonen, Eini Hyppönen
Abstract:
The ageing of employees, the availability of the workforce, and employees’ engagement in work are today’s challenges in the field of health care and social services, and particularly in the care of older people. It is therefore important to enhance both the attractiveness of work in the field of older people’s care and the retention of employees in the field, and also to pay attention to the length of careers. The length of careers can be affected, for example, by developing an age-friendly organisational culture. Changing the organisational culture in a workplace is, however, a slow process which requires engagement from employees and enhanced dialogue between them. This article presents an example of age-friendly organisational culture in an older people’s care unit and presents the results of developing this organisational culture to meet the identified development challenges. In this research-based development process, cycles used in action research were applied. Three workshops were arranged for employees in a service home for older people. The workshops served as interventions, and the employees and their manager were given several consecutive assignments to be completed between the workshops. In addition to the workshops, the employees benchmarked two other service homes. In the workshops, data was collected by observing and documenting the conversations. After that, thematic analysis was used to identify the factors connected to an age-friendly organisational culture. By analysing the data and comparing it to previous studies, we recognised some dilemmas that were hindering or enhancing the attractiveness of work and the retention of employees in this nursing home. After each intervention, the process was reflected upon and evaluated, and the next steps were planned. The areas of development identified in the study were related to, for example, the flexibility of work, holistic ergonomics, the physical environment at the workplace, and the workplace culture. Some of the areas of development were taken over by the work community and carried out in cooperation with, for example, occupational health care. We encouraged the work community, and the employees provided us with information about their progress. In this research project, the focus was on the development of the workplace culture and, in particular, on the development of the culture of interaction. The workshops revealed employees’ attitudes and strong opinions, which can be a challenge from the point of view of the attractiveness of work and the retention of employees in the field. On the other hand, the data revealed that the work community has an interest in developing the dialogue within it. Enhancing the dialogue gave the employees the opportunity and resources to face even challenging dilemmas related to the attractiveness of work and the retention of employees in the field. Psychological safety was also enhanced at the same time. The results of this study are part of a broader study that aims to build a model for extending older employees’ careers. Keywords: age-friendliness, attractiveness of work, dialogue, older people, organisational culture, workplace culture
Procedia PDF Downloads 76817 Examining the Effects of National Disaster on the Performance of Hospitality Industry in Korea
Authors: Kim Sang Hyuck, Y. Park Sung
Abstract:
The occurrence of national disasters leads to a decrease in both international and domestic tourism demand, adversely affecting the hospitality industry. Effective and efficient risk management regarding national disasters is increasingly required of hospitality industry practitioners and tourism policymakers. To establish an effective and efficient risk management strategy for national disasters, the most essential prerequisite is the correct estimation of their effects, in terms of the size and duration of the damage that national disasters cause to the hospitality industry. More specifically, national disasters are of two types: natural disasters and social disasters. In addition, the hospitality industry consists of several types of business, such as hotels, restaurants, travel agencies, etc. For these reasons, it is important to consider how each type of national disaster influences the performance of each type of hospitality business differently. Therefore, the purpose of this study is to examine the effects of national disasters on the hospitality industry in Korea by type of national disaster as well as by type of hospitality business. Monthly data were collected from Jan. 2000 to Dec. 2016. The indexes of industrial production for each hospitality industry in Korea were used as proxy variables for the performance of each hospitality industry. Two national disaster variables (natural disaster and social disaster) were treated as dummy variables. In addition, the exchange rate, the industrial production index, and the consumer price index were used as control variables in the research model. Impulse response analysis was used to examine the size and duration of the damage caused by each type of national disaster to each type of hospitality industry. The results of this study show that natural disasters and social disasters influenced each type of hospitality industry differently. More specifically, the performance of the airline industry is negatively influenced by natural disasters three months after their occurrence. However, the negative impacts of social disasters on the airline industry were not significant over the time periods examined. For the hotel industry, natural and social disasters negatively influence performance five and six months later, respectively. The negative impact of natural disasters on the restaurant industry occurred five months later, and that of social disasters at both three and six months later. Finally, natural and social disasters negatively influence the performance of travel agencies three and four months later, respectively. In conclusion, the different types of national disasters influence the performance of each type of hospitality industry in Korea differently. These results provide important information for establishing effective and efficient risk management strategies for national disasters. Keywords: impulse response analysis, Korea, national disaster, performance of hospitality industry
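To illustrate the idea of tracing how a disaster dummy propagates into hospitality performance over subsequent months, the sketch below uses a simplified local-projection style regression rather than the authors' exact impulse response model. The synthetic monthly series, the variable names, and the use of statsmodels are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly data, Jan 2000 - Dec 2016
idx = pd.date_range("2000-01", "2016-12", freq="MS")
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "hotel_ipi": rng.random(len(idx)) + 100,       # industrial production index, hotels
    "natural": rng.binomial(1, 0.05, len(idx)),    # natural-disaster dummy
    "social": rng.binomial(1, 0.03, len(idx)),     # social-disaster dummy
    "fx": rng.random(len(idx)) + 1000,             # exchange rate (control)
    "cpi": rng.random(len(idx)) + 90,              # consumer price index (control)
}, index=idx)

# Local-projection style responses: regress the h-month-ahead (log) index on
# today's disaster dummies and controls, for horizons h = 0..6.
responses = {}
for h in range(7):
    y = np.log(df["hotel_ipi"]).shift(-h)
    X = sm.add_constant(df[["natural", "social", "fx", "cpi"]])
    res = sm.OLS(y, X, missing="drop").fit()
    responses[h] = res.params[["natural", "social"]]

# Rows = horizon in months, columns = estimated response to each disaster type
print(pd.DataFrame(responses).T)
```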
Procedia PDF Downloads 183816 Currently Use Pesticides: Fate, Availability, and Effects in Soils
Authors: Lucie Bielská, Lucia Škulcová, Martina Hvězdová, Jakub Hofman, Zdeněk Šimek
Abstract:
The currently used pesticides represent a broad group of chemicals with various physicochemical and environmental properties, whose input has reached 2×10⁶ tons/year and is expected to increase even further. Of that amount, only 1% directly interacts with the target organism, while the rest represents a potential risk to the environment and human health. Despite being authorized and approved for field applications, the effects of pesticides in the environment can differ from the model scenarios due to various pesticide-soil interactions and the resulting modified fate and behavior. As such, direct monitoring of pesticide residues and evaluation of their impact on soil biota, the aquatic environment, food contamination, and human health should be performed to prevent environmental and economic damage. The present project focuses on fluvisols, as they are intensively used in agriculture but face several environmental stressors. Fluvisols develop in the vicinity of rivers through the periodic settling of alluvial sediments and periodic interruptions to pedogenesis by flooding. As a result, fluvisols exhibit very high yields per unit area and are intensively used and loaded with pesticides. Their regular contact with surface water during floods raises serious concerns about surface water contamination. In order to monitor pesticide residues and assess their environmental and biological impact within this project, 70 fluvisols were sampled across the Czech Republic and analyzed for the total and bioaccessible amounts of 40 different pesticides. For that purpose, methodologies for pesticide extraction and analysis with the liquid chromatography-mass spectrometry technique were developed and optimized. To assess the biological risks, both earthworm bioaccumulation tests and various types of passive sampling techniques (XAD resin, Chemcatcher, and silicone rubber) were optimized and applied. These data on chemical analysis and bioavailability were combined with the results of soil analysis, including the measurement of basic physicochemical soil properties as well as a detailed characterization of soil organic matter with the advanced method of diffuse reflectance infrared spectrometry. The results provide unique data on the residual levels of pesticides in the Czech Republic and on the factors responsible for increased pesticide residue levels, which should be included in the modeling of pesticide fate and effects. Keywords: currently used pesticides, fluvisols, bioavailability, QuEChERS, liquid chromatography-mass spectrometry, soil properties, DRIFT analysis, pesticides
Procedia PDF Downloads 462815 Returns to Communities of the Social Entrepreneurship and Environmental Design (SEED) Integration Results in Architectural Training
Authors: P. Kavuma, J. Mukasa, M. Lusunku
Abstract:
Background and Problem: Widespread poverty in Africa, together with the negative impacts of climate change, are two great global challenges that call for everyone’s involvement, including architects. This places serious demands on architects to have additional skills in both Social Entrepreneurship and Environmental Design (SEED). Regrettably, while architectural training in most African universities, including those in Uganda, lacks a comprehensive implementation of SEED in the curricula, regulatory bodies have not contributed towards the effective integration of SEED in professional practice. In response to these challenges, Nkumba University (NU), under Architect Kavuma Paul and supported by the Uganda Chambers of Architects, initiated the integration of SEED into the undergraduate architectural curricula to cultivate SEED know-how and examples of best practices. Main activities: Initiated in 2007 and going beyond the traditional architectural degree curriculum, the NU Architecture department offers SEED courses, including ones that provoke a passion for creating desirable positive changes in communities. Learning outcomes are assessed theoretically and practically through field projects. The first set of SEED graduates came out in 2012. As part of the NU post-graduation and alumni survey, in October 2014 the pioneer SEED graduates were contacted through automated reminder emails followed by individual, repeated personal follow-ups via email and phone. Out of the 36 graduates who responded to the survey, 24 have formed four private consortium agencies of 5-7 graduates, all of which have pioneered home-grown Ugandan architectural social projects that include fish farming in shipping containers, solar-powered mobile homes in shipping containers, solar-powered retail kiosks in rural and fishing communities, and floating homes in flood-prone areas. Primary outcomes include business self-reliance in creating the social change the architects desired in the communities. Examples of the SEED project returns to communities reported by the graduates include employment creation via fabrication, retail business, and marketing; improved diets; safety of life and property; and decent shelter in remote mining and oil exploration areas. Negative outcomes, though not yet evaluated, include the disposal of used-up materials. Conclusion: The integration of SEED in architectural training has established a baseline benchmark and a replicable model based on best practice projects. Keywords: architectural training, entrepreneurship, environment, integration
Procedia PDF Downloads 402814 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground the LLM's responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts via vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to provide market insights and data visualizations with high accuracy and extensive coverage while abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization. Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru
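The following is a minimal sketch of the planning-and-execution plus code-generation pattern described above, applied to a tabular real-estate dataset. The call_llm helper is a deliberately unimplemented placeholder for whichever LLM client a deployment uses, and the prompts, function names, and execution strategy are illustrative assumptions rather than the authors' production pipeline, which would additionally require sandboxing and validation of the generated code.

```python
import pandas as pd

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call (client and model are deployment-specific)."""
    raise NotImplementedError("plug in your LLM client here")

def plan_and_execute(question: str, df: pd.DataFrame) -> str:
    # 1. Planning agent: break the analytical question into ordered sub-tasks.
    plan = call_llm(
        f"Split this real-estate analytics question into numbered pandas sub-tasks.\n"
        f"Columns: {list(df.columns)}\nQuestion: {question}"
    )

    # 2. Code-generation agent: turn each sub-task into executable pandas code
    #    that stores its output in a variable named `result`.
    namespace = {"df": df, "pd": pd}
    for step in [s for s in plan.splitlines() if s.strip()]:
        code = call_llm(f"Write pandas code for: {step}. Assign the answer to `result`.")
        exec(code, namespace)  # the answer comes from executed code, not free-text generation

    # 3. Final response grounded in the computed result rather than model memory.
    return call_llm(
        f"Question: {question}\nComputed result: {namespace.get('result')}\n"
        f"Write a short answer for a real-estate agent."
    )
```

Grounding the final answer in the output of executed code, rather than in generated text alone, is what reduces the hallucination risk discussed above.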
Procedia PDF Downloads 86813 Use of Satellite Altimetry and Moderate Resolution Imaging Technology of Flood Extent to Support Seasonal Outlooks of Nuisance Flood Risk along United States Coastlines and Managed Areas
Authors: Varis Ransibrahmanakul, Doug Pirhalla, Scott Sheridan, Cameron Lee
Abstract:
U.S. coastal areas and ecosystems are facing multiple sea level rise threats and effects: heavy rain events, cyclones, and changing wind and weather patterns all influence coastal flooding, sedimentation, and erosion along critical barrier islands and can strongly impact habitat resiliency and water quality in protected habitats. These impacts are increasing over time and have accelerated the need for new flood-risk tracking techniques, models, and tools to support enhanced preparedness for coastal management and mitigation. To address this issue, the NOAA National Ocean Service (NOS) evaluated new metrics from AVISO/Copernicus satellite altimetry and MODIS IR flood extents to isolate nodes of atmospheric variability indicative of elevated sea level and nuisance flood events. Using de-trended time series of cross-shelf sea surface heights (SSH), we identified specific Self-Organizing Map (SOM) nodes and transitions having the strongest regional association with oceanic spatial patterns (e.g., heightened downwelling-favorable wind stress and enhanced southward coastal transport) indicative of elevated coastal sea levels. Results show the impacts of the inverted barometer effect as well as the effects of surface wind forcing and Ekman-induced transport along broad expanses of the U.S. eastern coastline. Higher sea levels and corresponding localized flooding are associated with either pattern indicative of enhanced onshore flow, deepening cyclones, or local-scale winds, generally coupled with increased local to regional precipitation. These findings will support the integration of satellite products and will inform seasonal outlook model development supported through NOAA's Climate Program Office and the NOS Center for Operational Oceanographic Products and Services (CO-OPS). Overall, the results will prioritize ecological areas and coastal lab facilities at risk based on the number of nuisance floods projected and will inform coastal management of flood risk around low-lying areas subject to bank erosion. Keywords: AVISO satellite altimetry SSHA, MODIS IR flood map, nuisance flood, remote sensing of flood
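As an illustration of the SOM classification step described above, the sketch below trains a small self-organizing map on de-trended SSH anomaly fields and assigns each day to its best-matching node. The MiniSom library, the 4×3 map size, the synthetic data, and the variable names are assumptions made for illustration and are not the configuration used by the authors.

```python
import numpy as np
from minisom import MiniSom

# Hypothetical input: de-trended cross-shelf SSH anomalies, one row per day,
# one column per grid point along the cross-shelf transect (values in metres).
rng = np.random.default_rng(3)
ssha = rng.normal(scale=0.05, size=(3650, 50))

som = MiniSom(x=4, y=3, input_len=ssha.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(ssha)
som.train_random(ssha, num_iteration=10000)

# Assign each day to its best-matching node; flood-prone patterns can then be
# identified by comparing node membership with observed nuisance-flood dates.
nodes = np.array([som.winner(day) for day in ssha])
unique, counts = np.unique(nodes, axis=0, return_counts=True)
print(dict(zip(map(tuple, unique), counts)))
```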
Procedia PDF Downloads 139