Search results for: absorptive capabilities
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1206

216 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method

Authors: Mohamad R. Moshtagh, Ahmad Bagheri

Abstract:

Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios include: (1) both gears being healthy, (2) one healthy gear and one faulty gear, and (3) introducing an imbalanced condition to a healthy gear. Vibration data was acquired using a Hentek 1008 device and stored in a CSV file. Python code implemented in the Spyder environment was used for data preprocessing and analysis. Wiener features were extracted using the Wiener feature selection method. These features were then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults in both the training and validation datasets. The comparative analysis of the methods revealed the superior performance of the Wiener-CNN approach. The Wiener-CNN method achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios in the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy for the two-class training dataset and 100% for the validation dataset. For the three-class scenario, the Wiener-MLP method demonstrated 100% accuracy in the training dataset and 95.3% accuracy in the validation dataset. The Wiener-KNN method yielded 96.3% accuracy for the two-class training dataset and 94.5% for the validation dataset. In the three-class scenario, it achieved 85.3% accuracy in the training dataset and 77.2% in the validation dataset. The Wiener-Random Forest method achieved 100% accuracy for the two-class training dataset and 85% for the validation dataset, while in the three-class training dataset, it attained 100% accuracy and 90.8% accuracy for the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in accurately identifying and classifying fault conditions in rotating machinery. The proposed fault detection system utilizes vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
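
For readers who want to experiment with the kind of pipeline described above, the following is a minimal sketch, not the authors' implementation: it assumes the vibration windows are stored one per row in a CSV file (hypothetical file name and layout), uses SciPy's Wiener filter as a stand-in for the Wiener feature-extraction step, and trains a small 1-D CNN with Keras.

```python
# Sketch only: Wiener-filtered vibration windows fed to a small 1-D CNN classifier.
# Assumed (hypothetical) layout: "vibration.csv" holds one fixed-length window per
# row, with the class label (healthy / faulty / imbalanced) in the last column.
import numpy as np
import pandas as pd
import tensorflow as tf
from scipy.signal import wiener
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

data = pd.read_csv("vibration.csv")
X = data.iloc[:, :-1].to_numpy(dtype=np.float32)
y = LabelEncoder().fit_transform(data.iloc[:, -1])

# Wiener filtering of each window, standing in for the paper's Wiener feature step.
X = np.stack([wiener(row, mysize=11) for row in X])[..., np.newaxis]

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 9, activation="relu", input_shape=X.shape[1:]),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 9, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(len(np.unique(y)), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, validation_data=(X_va, y_va), epochs=20, batch_size=32)
```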

Keywords: fault detection, gearbox, machine learning, wiener method

Procedia PDF Downloads 80
215 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, false positives and negatives tuning, and automated feedback. The initial approach using natural language processing techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite predicted specific vulnerabilities such as OS-Command Injection, Cryptographic, and Cross-Site Scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-Command Injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
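
As an illustration of the path-context idea described above, the sketch below shows a Code2Vec-style classifier in PyTorch. It is not the authors' model: the AST path extraction is assumed to have been done elsewhere, and each function is represented as a padded bag of (source-token, path, target-token) id triples with a mask for padding.

```python
# Sketch: code2vec-style vulnerability classifier over pre-extracted AST path-contexts.
import torch
import torch.nn as nn

class PathContextClassifier(nn.Module):
    def __init__(self, n_tokens, n_paths, dim=128, n_classes=2):
        super().__init__()
        self.tok_emb = nn.Embedding(n_tokens, dim)
        self.path_emb = nn.Embedding(n_paths, dim)
        self.combine = nn.Linear(3 * dim, dim)
        self.attn = nn.Linear(dim, 1)          # soft attention over path-contexts
        self.out = nn.Linear(dim, n_classes)   # vulnerable / not vulnerable logits

    def forward(self, src, path, dst, mask):
        # src/path/dst: (batch, n_contexts) integer ids; mask: 1 = real context, 0 = padding
        ctx = torch.cat([self.tok_emb(src), self.path_emb(path), self.tok_emb(dst)], dim=-1)
        ctx = torch.tanh(self.combine(ctx))                       # (batch, n_contexts, dim)
        scores = self.attn(ctx).squeeze(-1).masked_fill(mask == 0, -1e9)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        code_vec = (weights * ctx).sum(dim=1)                     # attention-pooled code vector
        return self.out(code_vec)
```

Training such a classifier with a standard cross-entropy loss on labelled snippets (e.g. from Devign or the Juliet Test Suite) follows the usual PyTorch loop and is omitted here.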

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 106
214 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they remain most often calibrated for a given genotype and therefore do not take into account genotype by environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax and metaprogramming capabilities with high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, where the surface areas of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor do the observation times need to be the same for all individuals. This makes it possible to discard data from image analysis when it is not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana’s growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
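
The hybrid Gibbs-Metropolis scheme described above can be illustrated with the simplified sketch below (in Python rather than the authors' Julia implementation). A logistic curve stands in for the organ-scale Greenlab model, the observation noise is multiplicative normal as in the paper, and a flat prior on the population mean together with an inverse-gamma prior on the population variances is assumed so that the Gibbs updates stay in closed form.

```python
# Minimal sketch of a hybrid Gibbs-Metropolis sampler for a hierarchical growth model.
import numpy as np

rng = np.random.default_rng(0)

def growth(theta, t):
    # Placeholder nonlinear growth model (logistic leaf-area curve), NOT Greenlab.
    return theta[0] / (1.0 + np.exp(-theta[1] * (t - theta[2])))

def log_lik(theta, t, y, sigma_obs):
    mu = growth(theta, t)
    if np.any(mu <= 0):
        return -np.inf
    # Multiplicative normal noise: y = mu * (1 + eps), eps ~ N(0, sigma_obs^2)
    return (-0.5 * np.sum(((y - mu) / (sigma_obs * mu)) ** 2)
            - np.sum(np.log(sigma_obs * mu)))

def sampler(times, obs, n_iter=5000, sigma_obs=0.05, step=0.05):
    n_plants, n_par = len(obs), 3
    theta = np.tile([10.0, 0.5, 10.0], (n_plants, 1))     # individual parameters
    mu_pop, var_pop = theta[0].copy(), np.ones(n_par)     # population mean / variance
    for _ in range(n_iter):
        # Metropolis step for each individual's parameters (nonlinear model).
        for i in range(n_plants):
            prop = theta[i] + step * rng.standard_normal(n_par)
            logr = (log_lik(prop, times[i], obs[i], sigma_obs)
                    - log_lik(theta[i], times[i], obs[i], sigma_obs)
                    - 0.5 * np.sum((prop - mu_pop) ** 2 / var_pop)
                    + 0.5 * np.sum((theta[i] - mu_pop) ** 2 / var_pop))
            if np.log(rng.random()) < logr:
                theta[i] = prop
        # Gibbs steps for the population parameters (conjugate updates).
        mu_pop = rng.normal(theta.mean(axis=0), np.sqrt(var_pop / n_plants))
        ss = np.sum((theta - mu_pop) ** 2, axis=0)
        var_pop = 1.0 / rng.gamma(1.0 + n_plants / 2.0, 1.0 / (1.0 + ss / 2.0))
    return theta, mu_pop, var_pop
```

Because `times[i]` and `obs[i]` are per-plant arrays, individuals observed at different or irregular times are handled naturally, in line with the flexibility noted above.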

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 303
213 A Professional Learning Model for Schools Based on School-University Research Partnering That Is Underpinned and Structured by a Micro-Credentialing Regime

Authors: David Lynch, Jake Madden

Abstract:

There exists a body of literature that reports on the many benefits of partnerships between universities and schools, especially in terms of teaching improvement and school reform. This is because such partnerships can build significant teaching capital by deepening and expanding the skillsets and mindsets needed to create the connections that support ongoing and embedded teacher professional development and career goals. At the same time, this literature is critical of such initiatives when the partnership outcomes are short-term or one-sided, misaligned to fundamental problems, and not expressly focused on building the desired teaching capabilities. In response to this situation, research conducted by Professor David Lynch and his TeachLab research team has begun to shed light on the strengths and limitations of school/university partnerships, via the identification of key conceptual elements that appear to act as critical partnership success factors. These elements are theorised as an inter-play between professional knowledge acquisition, readiness, talent management and organisational structure. However, knowledge of how these elements are established, and how they manifest within the school and its teaching workforce as an overall system, remains incomplete. Research designed to more clearly delineate these elements in relation to their impact on school/university partnerships is thus required. It is within this context that this paper reports on the development and testing of a Professional Learning (PL) model for schools and their teachers that incorporates school-university research partnering within a systematic, whole-of-school PL strategy that is underpinned and structured by a micro-credentialing (MC) regime. MC involves earning a narrowly focused certificate (a micro-credential) in a specific topic area (e.g., 'How to Differentiate Instruction for English as a Second Language Students'), embedded in the teacher’s day-to-day teaching work. The use of MC is viewed as important to the efficacy and sustainability of teacher PL because it (1) provides an evidence-based framework for teacher learning, (2) has the ability to promote teacher social capital, and (3) engenders lifelong learning by keeping professional skills current in a manner embedded in, and seamless with, day-to-day work. The associated research is centred on a primary school in Australia (P-6) that acted as an arena to co-develop, test/investigate and report on outcomes for teacher PL that uses MC to support a whole-of-school partnership with a university.

Keywords: teaching improvement, teacher professional learning, talent management, education partnerships, school-university research

Procedia PDF Downloads 81
212 Hypersonic Propulsion Requirements for Sustained Hypersonic Flight for Air Transportation

Authors: James Rate, Apostolos Pesiridis

Abstract:

In this paper, the propulsion requirements needed to achieve sustained hypersonic flight for commercial air transportation are evaluated. In addition, a design methodology is developed and used to determine the propulsive capabilities of both ramjet and scramjet engines. Twelve configurations are proposed for hypersonic flight using varying combinations of turbojet, turbofan, ramjet and scramjet engines. The optimal configuration was determined based on how well each of the configurations met the projected requirements for hypersonic commercial transport. The configurations were separated into four sub-configurations, each comprising three unique derivations. The first sub-configuration comprised four afterburning turbojets and either one or two ramjets idealised for Mach 5 cruise. The number of ramjets required was dependent on the thrust required to accelerate the vehicle from a speed where the turbojets cut out to Mach 5 cruise. The second comprised four afterburning turbojets and either one or two scramjets, similar to the first configuration. The third used four turbojets, one scramjet and one ramjet to aid acceleration from Mach 3 to Mach 5. The fourth configuration was the same as the third, but instead of turbojets, it implemented turbofan engines for the preliminary acceleration of the vehicle. From calculations which determined the fuel consumption at incremental Mach numbers, this paper found that the ideal solution would require four turbojet engines and two scramjet engines. The ideal mission profile was determined as being an 8000 km sortie based on an average of popular long-haul flights with strong business ties, which included Los Angeles to Tokyo, London to New York and Dubai to Beijing. This paper deemed that these routes would benefit from hypersonic transport links based on the previously mentioned factors. This paper has found that this configuration would be sufficient for the 8000 km flight to be completed in approximately two and a half hours and would consume less fuel than Concorde in doing so. However, this propulsion configuration still results in a greater fuel cost than a conventional passenger aircraft. In this regard, this investigation contributes towards the specification of the engine requirements throughout a mission profile for a hypersonic passenger vehicle. A number of assumptions had to be made for this theoretical approach, but the authors believe that this investigation lays the groundwork for appropriately framing the propulsion requirements for sustained hypersonic flight for commercial air transportation. Despite these assumptions, it serves as a crucial step in the development of the propulsion systems required for hypersonic commercial air transportation and provides a methodology and a focus for their further development.
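
A back-of-the-envelope check of the flight-time claim above is sketched below, under assumptions that are not from the paper: a speed of sound of roughly 295 m/s at cruise altitude and a fixed 45-minute allowance for take-off, acceleration, deceleration and landing.

```python
# Rough check of the ~2.5 h claim for an 8000 km sortie at Mach 5 cruise.
MACH_CRUISE = 5.0
A_SOUND = 295.0          # m/s, approximate speed of sound at 20+ km altitude (assumption)
RANGE_KM = 8000.0
OVERHEAD_MIN = 45.0      # placeholder allowance for climb, acceleration, descent, landing

v_cruise_kmh = MACH_CRUISE * A_SOUND * 3.6
t_cruise_h = RANGE_KM / v_cruise_kmh
t_total_h = t_cruise_h + OVERHEAD_MIN / 60.0
print(f"cruise speed ~ {v_cruise_kmh:.0f} km/h, "
      f"cruise time ~ {t_cruise_h:.2f} h, total ~ {t_total_h:.2f} h")
# -> roughly 5300 km/h, 1.5 h of cruise, ~2.3 h total, consistent with the claim.
```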

Keywords: hypersonic, ramjet, propulsion, scramjet, turbojet, turbofan

Procedia PDF Downloads 320
211 Actionable Personalised Learning Strategies to Improve a Growth-Mindset in an Educational Setting Using Artificial Intelligence

Authors: Garry Gorman, Nigel McKelvey, James Connolly

Abstract:

This study will evaluate a growth mindset intervention with Junior Cycle Coding and Senior Cycle Computer Science students in Ireland, where gamification will be used to incentivise growth mindset behaviour. An artificial intelligence (AI) driven personalised learning system will be developed to present computer programming learning tasks in a manner that is best suited to the individuals’ own learning preferences while incentivising and rewarding growth mindset behaviour of persistence, mastery response to challenge, and challenge seeking. This research endeavours to measure mindset with before and after surveys (conducted nationally) and by recording growth mindset behaviour whilst playing a digital game. This study will harness the capabilities of AI and aims to determine how a personalised learning (PL) experience can impact the mindset of a broad range of students. The focus of this study will be to determine how personalising the learning experience influences female and disadvantaged students' sense of belonging in the computer science classroom when tasks are presented in a manner that is best suited to the individual. Whole Brain Learning will underpin this research and will be used as a framework to guide the research in identifying key areas such as thinking and learning styles, cognitive potential, motivators and fears, and emotional intelligence. This research will be conducted in multiple school types over one academic year. Digital games will be played multiple times over this period, and the data gathered will be used to inform the AI algorithm. The three data sets are described as follows: (i) Before and after survey data to determine the grit scores and mindsets of the participants, (ii) The Growth Mind-Set data from the game, which will measure multiple growth mindset behaviours, such as persistence, response to challenge and use of strategy, (iii) The AI data to guide PL. This study will highlight the effectiveness of an AI-driven personalised learning experience. The data will position AI within the Irish educational landscape, with a specific focus on the teaching of CS. These findings will benefit coding and computer science teachers by providing a clear pedagogy for the effective delivery of personalised learning strategies for computer science education. This pedagogy will help prevent students from developing a fixed mindset while helping pupils to exhibit persistence of effort, use of strategy, and a mastery response to challenges.

Keywords: computer science education, artificial intelligence, growth mindset, pedagogy

Procedia PDF Downloads 87
210 Development of a Stable RNAi-Based Biological Control for Sheep Blowfly Using Bentonite Polymer Technology

Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody

Abstract:

Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucillia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is therefore critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging gut pH environment of the blowfly larvae, prolonging its exposure time to and uptake by target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to be capable of affording dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larvae feeding assays, with knockdown observed for three genes. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, aiming to provide an effective insect cell model for testing RNAi efficacy in preliminary assessments and screening. The results of this study establish that the dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, providing promise for its future use in enhancing animal protection.

Keywords: flystrike, RNA interference, bentonite polymer technology, Lucillia cuprina

Procedia PDF Downloads 92
209 Bioleaching of Metals Contained in Spent Catalysts by Acidithiobacillus thiooxidans DSM 26636

Authors: Andrea M. Rivas-Castillo, Marlenne Gómez-Ramirez, Isela Rodríguez-Pozos, Norma G. Rojas-Avelizapa

Abstract:

Spent catalysts are considered hazardous residues of major concern, mainly due to the simultaneous presence of several metals in elevated concentrations. Although hydrometallurgical, pyrometallurgical and chelating agent methods are available to remove and recover some metals contained in spent catalysts, these procedures generate potentially hazardous wastes and the emission of harmful gases. Thus, biotechnological treatments are currently gaining importance to avoid the negative impacts of chemical technologies. To this end, diverse microorganisms have been used to assess the removal of metals from spent catalysts, comprising bacteria, archaea and fungi, whose resistance and metal uptake capabilities differ depending on the microorganism tested. Acidophilic sulfur-oxidizing bacteria have been used to investigate the biotreatment and extraction of valuable metals from spent catalysts, namely Acidithiobacillus thiooxidans and Acidithiobacillus ferroxidans, as they present the ability to produce leaching agents such as sulfuric acid and sulfur oxidation intermediates. In the present work, the ability of A. thiooxidans DSM 26636 for the bioleaching of metals contained in five different spent catalysts was assessed by growing the culture in modified Starkey mineral medium (with elemental sulfur at 1%, w/v) and 1% (w/v) pulp density of each residue for up to 21 days at 30 °C and 150 rpm. Sulfur-oxidizing activity was periodically evaluated by determining sulfate concentration in the supernatants according to the NMX-k-436-1977 method. The production of sulfuric acid was assessed in the supernatants as well, by a titration procedure using NaOH 0.5 M with bromothymol blue as acid-base indicator, and by measuring pH using a digital potentiometer. On the other hand, Inductively Coupled Plasma - Optical Emission Spectrometry was used to analyze metal removal from the five different spent catalysts by A. thiooxidans DSM 26636. Results obtained show that, as could be expected, sulfuric acid production is directly related to the decrease in pH, and also to the highest metal removal efficiencies. It was observed that Al and Fe are recurrently removed from refinery spent catalysts regardless of their origin and previous usage, although these removals may vary from 9.5 ± 2.2 to 439 ± 3.9 mg/kg for Al, and from 7.13 ± 0.31 to 368.4 ± 47.8 mg/kg for Fe, depending on the spent catalyst tested. In addition, bioleaching of metals like Mg, Ni, and Si was also obtained from automotive spent catalysts, with removals of up to 66 ± 2.2, 6.2 ± 0.07, and 100 ± 2.4, respectively. Hence, the data presented here exhibit the potential of A. thiooxidans DSM 26636 for the simultaneous bioleaching of metals contained in spent catalysts of diverse provenance.
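
The sulfuric acid titration described above reduces to a simple stoichiometric calculation; the sketch below illustrates it with hypothetical volumes (the sample volume and burette reading are not taken from the study).

```python
# Illustrative calculation of sulfuric acid concentration from the NaOH titration
# (H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O). Volumes below are hypothetical examples.
def h2so4_molarity(v_naoh_ml, c_naoh=0.5, v_sample_ml=10.0):
    """Return the H2SO4 concentration (mol/L) in the supernatant sample."""
    mol_naoh = c_naoh * v_naoh_ml / 1000.0
    mol_h2so4 = mol_naoh / 2.0            # 2 mol NaOH neutralise 1 mol H2SO4
    return mol_h2so4 / (v_sample_ml / 1000.0)

# Example: 8.4 mL of 0.5 M NaOH for a 10 mL supernatant sample -> 0.21 M H2SO4
print(h2so4_molarity(8.4))
```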

Keywords: bioleaching, metal removal, spent catalysts, Acidithiobacillus thiooxidans

Procedia PDF Downloads 140
208 Achieving Product Robustness through Variation Simulation: An Industrial Case Study

Authors: Narendra Akhadkar, Philippe Delcambre

Abstract:

In power protection and control products, assembly process variations due to the individual parts manufactured from single or multi-cavity tooling are a major problem. The dimensional and geometrical variations on the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effect in the joints, and tolerance stack-up. All these variations adversely affect the quality of the product, functionality, cost, and time-to-market. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In the tolerance analysis, the effect of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled products is studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is: when a fault occurs in the electrical network, the product must respond quickly to react and break the circuit to clear the fault. Usually, the response time is in milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we have investigated two important functional characteristics that are associated with the robust performance of the product. It is demonstrated that the experimental data obtained at the Schneider Electric Laboratory prove the very good prediction capabilities of the variation simulation performed using CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand the critical parts in the product that need to be manufactured with good, capable tolerances. In contrast, some parts are not critical for the functional characteristics (conditions) of the product and may allow some reduction of the manufacturing cost while still ensuring robust performance. Capable tolerancing is one of the most important aspects of product and manufacturing process design. In the case of a miniature circuit breaker (MCB), the product's quality and its robustness are mainly impacted by two aspects: (1) allocation of design tolerances between the components of a mechanical assembly and (2) manufacturing tolerances in the intermediate machining steps of component fabrication.
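
The principle behind variation simulation can be illustrated with a generic Monte Carlo stack-up. The sketch below is not the CETOL analysis performed in the study; the dimensions, tolerances and clearance requirement are hypothetical and serve only to show how part-level variation propagates to a functional characteristic.

```python
# Generic Monte Carlo variation simulation of a 1-D tolerance stack-up (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Nominal dimensions (mm), +/- tolerances and chain directions (all hypothetical).
nominals = np.array([12.00, 4.50, 4.50, 2.80])
tols     = np.array([0.05, 0.03, 0.03, 0.02])
signs    = np.array([+1, -1, -1, -1])           # housing minus stacked components

# Assume each dimension is normally distributed with the tolerance at +/-3 sigma.
dims = rng.normal(nominals, tols / 3.0, size=(N, len(nominals)))
gap = dims @ signs                               # resulting functional clearance

spec_lo, spec_hi = 0.05, 0.35                    # required clearance band (mm), assumed
yield_rate = np.mean((gap >= spec_lo) & (gap <= spec_hi))
print(f"mean gap = {gap.mean():.3f} mm, sigma = {gap.std():.4f} mm, "
      f"predicted yield = {100 * yield_rate:.2f}%")
```

Sensitivity of the yield to each tolerance (tightening or relaxing one at a time) is what identifies the critical versus non-critical parts discussed above.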

Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation

Procedia PDF Downloads 164
207 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices

Authors: S. Srinivasan, E. Cretu

Abstract:

The information flow (e.g. block-diagram or signal flow graph) paradigm for the design and simulation of Microelectromechanical (MEMS)-based systems makes it easy to model MEMS devices using causal transfer functions and to interface them with electronic subsystems for fast system-level explorations of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g. gap-varying MEMS capacitor structures) depend not only on the component, but also on the specific excitation mode (e.g. voltage or charge-actuation). In contrast, the energy flow modeling paradigm in terms of generalized across-through variables offers an acausal perspective, separating clearly the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work in order to develop a MEMS library containing parameterized fundamental building blocks (area and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities and can be used for both small and large signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which present the advantage of a seamless interface with Simulink blocks, for creating hybrid information/energy flow system models. Test bench simulations of the library models compare favorably with both analytical results and with more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronic integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
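
As an example of the kind of closed-form reference value such library models can be cross-checked against, the snippet below evaluates the textbook pull-in voltage of an ideal spring-suspended parallel-plate (gap-varying) capacitor. The stiffness, gap and electrode area used here are illustrative only and are not taken from the devices modelled in the paper.

```python
# Analytical pull-in voltage of an ideal parallel-plate electrostatic actuator.
import math

EPS0 = 8.854e-12          # F/m, vacuum permittivity

def pull_in_voltage(k, gap, area):
    """V_pi = sqrt(8 k g0^3 / (27 eps0 A)) for a spring-suspended plate."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# Example (hypothetical): k = 2 N/m, initial gap 2 um, electrode area 200 um x 200 um
print(f"{pull_in_voltage(2.0, 2e-6, 200e-6 * 200e-6):.2f} V")   # ~3.7 V
```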

Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape

Procedia PDF Downloads 136
206 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing by discussing various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative transitions to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocities. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 51
205 Building the Professional Readiness of Graduates from Day One: An Empirical Approach to Curriculum Continuous Improvement

Authors: Fiona Wahr, Sitalakshmi Venkatraman

Abstract:

Industry employers require new graduates to bring with them a range of knowledge, skills and abilities which means these new employees can immediately make valuable work contributions. These will be a combination of discipline and professional knowledge, skills and abilities which give graduates the technical capabilities to solve practical problems whilst interacting with a range of stakeholders. Underpinning the development of these discipline and professional knowledge, skills and abilities are “enabling” knowledge, skills and abilities which assist students to engage in learning. These academic and learning skills are essential both as common starting points for students entering the course and as the foundation for the fully developed graduate knowledge, skills and abilities. This paper reports on a project created to introduce and strengthen these enabling skills into the first semester of a Bachelor of Information Technology degree in an Australian polytechnic. The project uses an action research approach in the context of ongoing continuous improvement for the course to enhance the overall learning experience, learning sequencing, graduate outcomes, and most importantly, in the first semester, student engagement and retention. The focus of this project is implementing the new curriculum in first-semester subjects of the course with the aim of developing the “enabling” learning skills, such as literacy-, research- and numeracy-based knowledge, skills and abilities (KSAs). The approach used for the introduction and embedding of these KSAs (as both enablers of learning and to underpin graduate attribute development) is presented. Building on previous publications which reported different aspects of this longitudinal study, this paper recaps the rationale for the curriculum redevelopment and then presents the quantitative findings on entering students’ reading literacy and numeracy knowledge and skills, as well as their perceived research ability. The paper presents the methodology and findings for this stage of the research. Overall, the cohort exhibits mixed KSA levels in these areas, with a relatively low aggregated score. In addition, the paper describes the considerations for adjusting the design and delivery of the new subjects with a targeted learning experience, in response to the feedback gained through continuous monitoring. Such a strategy is aimed at accommodating the changing learning needs of the students and serves to support them towards achieving the enabling learning goals starting from day one of their higher education studies.

Keywords: enabling skills, student retention, embedded learning support, continuous improvement

Procedia PDF Downloads 248
204 Integrated Passive Cooling Systems for Tropical Residential Buildings: A Review through the Lens of Latent Heat Assessment

Authors: O. Eso, M. Mohammadi, J. Darkwa, J. Calautit

Abstract:

Residential buildings are responsible for 22% of the global end-use energy demand and 17% of global CO₂ emissions. Tropical climates particularly present higher latent heat gains, leading to more cooling loads. However, the cooling processes are all based on conventional mechanical air conditioning systems which are energy and carbon intensive technologies. Passive cooling systems have in the past been considered as alternative technologies for minimizing energy consumption in buildings. Nevertheless, replacing mechanical cooling systems with passive ones will require a careful assessment of the passive cooling system heat transfer to determine if suitable to outperform their conventional counterparts. This is because internal heat gains, indoor-outdoor heat transfer, and heat transfer through envelope affects the performance of passive cooling systems. While many studies have investigated sensible heat transfer in passive cooling systems, not many studies have focused on their latent heat transfer capabilities. Furthermore, combining heat prevention, heat modulation and heat dissipation to passively cool indoor spaces in the tropical climates is critical to achieve thermal comfort. Since passive cooling systems use only one of these three approaches at a time, integrating more than one passive cooling system for effective indoor latent heat removal while still saving energy is studied. This study is a systematic review of recently published peer review journals on integrated passive cooling systems for tropical residential buildings. The missing links in the experimental and numerical studies with regards to latent heat reduction interventions are presented. Energy simulation studies of integrated passive cooling systems in tropical residential buildings are also discussed. The review has shown that comfortable indoor environment is attainable when two or more passive cooling systems are integrated in tropical residential buildings. Improvement occurs in the heat transfer rate and cooling performance of the passive cooling systems when thermal energy storage systems like phase change materials are included. Integrating passive cooling systems in tropical residential buildings can reduce energy consumption by 6-87% while achieving up to 17.55% reduction in indoor heat flux. The review has highlighted a lack of numerical studies regarding passive cooling system performance in tropical savannah climates. In addition, detailed studies are required to establish suitable latent heat transfer rate in passive cooling ventilation devices under this climate category. This should be considered in subsequent studies. The conclusions and outcomes of this study will help researchers understand the overall energy performance of integrated passive cooling systems in tropical climates and help them identify and design suitable climate specific options for residential buildings.

Keywords: energy savings, latent heat, passive cooling systems, residential buildings, tropical residential buildings

Procedia PDF Downloads 149
203 Adaptative Metabolism of Lactic Acid Bacteria during Brewers' Spent Grain Fermentation

Authors: M. Acin-Albiac, P. Filannino, R. Coda, Carlo G. Rizzello, M. Gobbetti, R. Di Cagno

Abstract:

Smart management of large amounts of agro-food by-products has become an area of major environmental and economic importance worldwide. Brewers' spent grain (BSG), the most abundant by-product generated in the beer-brewing process, represents an example of a valuable raw material and source of health-promoting compounds. To date, the valorization of BSG as a food ingredient has been limited due to poor technological and sensory properties. Tailored bioprocessing through lactic acid bacteria (LAB) fermentation is a versatile and sustainable means for the exploitation of food industry by-products. Indigestible carbohydrates (e.g., hemicelluloses and celluloses), a high phenolic content, and, above all, lignin make BSG a hostile environment for microbial survival. Hence, the selection of tailored starters is required for successful fermentation. Our study investigated the metabolic strategies of Leuconostoc pseudomesenteroides and Lactobacillus plantarum strains to exploit BSG as a food ingredient. Two distinctive BSG samples from different breweries (Italian IT- and Finnish FL-BSG) were microbially and chemically characterized. Growth kinetics, organic acid profiles, and the evolution of phenolic profiles during the fermentation in two BSG model media were determined. The results were further complemented with gene expression analysis targeting genes involved in the degradation of cellulose and hemicellulose building blocks, and in the metabolism of anti-nutritional factors. Overall, the results were LAB genus-dependent, showing distinctive metabolic capabilities. Leuc. pseudomesenteroides DSM 20193 may degrade BSG xylans, while sucrose metabolism could be further exploited for extracellular polymeric substances (EPS) production to enhance BSG pro-technological properties. Although L. plantarum strains may follow the same metabolic strategies during BSG fermentation, the mode of action to pursue such strategies was strain-dependent. L. plantarum PU1 showed a great preference for β-galactans compared to strain WCFS1, while the preference for arabinose occurred at different metabolic phases. Phenolic compound profiling highlighted a novel metabolic route for lignin metabolism. These findings will improve understanding of how lactic acid bacteria transform BSG into economically valuable food ingredients.

Keywords: brewery by-product valorization, metabolism of plant phenolics, metabolism of lactic acid bacteria, gene expression

Procedia PDF Downloads 129
202 Collaboration versus Cooperation: Grassroots Activism in Divided Cities and Communication Networks

Authors: R. Barbour

Abstract:

Peace-building organisations act as a network of information for communities. Through fieldwork, it was highlighted that grassroots organisations and activists may cooperate with each other in their actions of peace-building; however, they would not collaborate. Within two divided societies; Nicosia in Cyprus and Jerusalem in Israel, there is a distinction made by organisations and activists with regards to activities being more ‘co-operative’ than ‘collaborative’. This theme became apparent when having informal conversations and semi-structured interviews with various members of the activist communities. This idea needs further exploration as these distinctions could impact upon the efficiency of peacebuilding activities within divided societies. Civil societies within divided landscapes, both physically and socially, play an important role in conflict resolution. How organisations and activists interact with each other has the possibility to be very influential with regards to peacebuilding activities. Working together sets a positive example for divided communities. Cooperation may be considered a primary level of interaction between CSOs. Therefore, at the beginning of a working relationship, organisations cooperate over basic agendas, parallel power structures and focus, which led to the same objective. Over time, in some instances, due to varying factors such as funding, more trust and understanding within the relationship, it could be seen that processes progressed to more collaborative ways. It is evident to see that NGOs and activist groups are highly independent and focus on their own agendas before coming together over shared issues. At this time, there appears to be more collaboration in Nicosia among CSOs and activists than Jerusalem. The aims and objectives of agendas also influence how organisations work together. In recent years, Nicosia, and Cyprus in general, have perhaps changed their focus from peace-building initiatives to more environmental issues which have become new-age reconciliation topics. Civil society does not automatically indicate like-minded organisations however solidarity within social groups can create ties that bring people and resources together. In unequal societies, such as those in Nicosia and Jerusalem, it is these ties that cut across groups and are essential for social cohesion. Societies are a collection of social groups; individuals who have come together over common beliefs. These groups in turn shape the identities and determine the values and structures within societies. At many different levels and stages, social groups work together through cooperation and collaboration. These structures in turn have the capabilities to open up networks to less powerful or excluded groups, with the aim to produce social cohesion which may contribute social stability and economic welfare over any extended period.

Keywords: collaboration, cooperation, grassroots activism, networks of communication

Procedia PDF Downloads 158
201 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data

Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding

Abstract:

The primary objective of this paper is to present a simplified approach to developing a structural deterioration model using traffic speed deflectometer data for flexible pavements. Maintaining assets to meet functional performance alone is not economical or sustainable in the long term, and it would end up requiring much more investment from road agencies and extra costs for road users. Performance models have to include both structural and functional predictive capabilities in order to assess the needs, and the time frame of those needs. As such, structural modelling plays a vital role in the prediction of pavement performance. The structural condition is important for the prediction of the remaining life and overall health of a road network and is also a major influence on the valuation of road pavement. Therefore, the structural deterioration model is a critical input into a pavement management system for predicting pavement rehabilitation needs accurately. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement whilst moving at traffic speeds. The device’s high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this model utilizes time-series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Then, regression analyses were undertaken to establish a correlation equation of structural deterioration as a function of rutting, pavement age, seal age and equivalent standard axle (ESA). This study developed a simple structural deterioration model which will enable available TSD structural data to be incorporated into a pavement management system for developing network-level pavement investment strategies. Therefore, the available funding can be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in structural data usage in network-level investment analysis and provide a simple methodology to use structural data effectively in the investment decision-making process for road agencies managing aging road assets.
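
A minimal sketch of the regression step described above is given below. The CSV file and column names are placeholders for an agency's own TSD and condition data, and ordinary least squares is used purely for illustration.

```python
# Sketch: fitting TSD maximum deflection (D0) as a function of rutting, pavement age,
# seal age and cumulative ESA. File name and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("tsd_network_data.csv")           # hypothetical network-level extract

X = df[["rutting_mm", "pavement_age_yr", "seal_age_yr", "cumulative_esa"]]
X = sm.add_constant(X)                              # intercept term
y = df["d0_max_deflection_um"]

model = sm.OLS(y, X).fit()
print(model.summary())                              # coefficients of the deterioration equation

# The fitted equation can then predict D0 (and hence structural condition)
# for future years from forecast rutting, age and traffic loading.
```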

Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)

Procedia PDF Downloads 151
200 Formation of Human Resources in the Light of Sustainable Development and the Achievement of Full Employment

Authors: Kaddour Fellague Mohammed

Abstract:

The world has seen significant developments in recent years that have affected various aspects of life and influenced different types of institutions. A new world has been born, a world of globalization dominated by the scientific revolution and tremendous technological developments, which has contributed to the re-formation of human resources in contemporary organizations, created new organizational patterns, and at the same time strongly raised new values and ideas. Organizations have become more flexible and faster in responding to consumers and environmental conditions; they have overcome the constraints of time and place through communication, human interaction and the use of advanced information technology, largely adopted automated mechanisms for running their operations, focused on performance, and relied on strategic thinking and approaches in order to achieve high degrees of superiority and excellence in their strategic goals. This new reality has created an increasing need for a new type of human resource: one that aims to renew itself and aspires to be a strategic player in managing the organization and drafting its various strategies, that thinks globally and acts locally to accommodate local variables in international markets, and that is able to work across different cultures. Human resources management is among the most important management functions because it focuses on the human element, which is the most valuable resource of the organization and the most influential in productivity. The management and development of human resources is a cornerstone in the majority of organizations, aiming to strengthen organizational capacity and enable companies to attract and develop the competencies needed to keep up with current and future challenges. Human resources can contribute strongly to achieving the objectives and profit of the organization, and can even contribute to the creation of new jobs, thereby alleviating unemployment and helping to achieve full employment. In short, human resources management means the optimal use of the human element, both available and expected, since efficiency and success in reaching organizational goals depend on the competence, capabilities, experience and enthusiasm of this human element. For this reason, management scientists have developed principles and foundations that help to make the most of each individual in the organization through human resources management. These foundations start with planning and selection and continue through training, incentives and evaluation; they are not separate from each other but are integrated as a systemic whole, in order to reach efficient human resources management and an efficient organization overall, in the context of sustainable development.

Keywords: configuration, training, development, human resources, operating

Procedia PDF Downloads 432
199 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations

Authors: Priyanka Bharti

Abstract:

Decades ago, designs were based on common sense and tradition, but with advances in visualization technology and research, we are now able to comprehend the cognitive abilities involved in decoding visual information. However, many areas of visual communication still need intensive research to adequately explain the underlying processes. Visuals are a mode of information representation through images, symbols and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension, and analysis of a situation. They enhance problem-solving capabilities by enabling the processing of more data without overloading the decision maker. Research suggests that visuals offer an improved learning environment by a factor of up to 400 compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, which enables the user to process the information faster (visuals are processed 60,000 times faster in the brain than text). Appropriate information, visualization, and its presentation are known to aid and intensify the decision-making process for users. However, most literature discusses the role of visual aids in comprehension and decision making during normal conditions alone. Unlike in emergencies, in a normal situation (e.g., day-to-day life) users are neither exposed to stringent time constraints nor faced with the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, potentially fatal real-life situation which may have serious ramifications for both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario with little or no preparation, yet still take swift and appropriate decisions to save lives or possessions. However, the resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, the theory of naturalistic decision making by experts has been studied in far more depth than the decision making of ordinary users. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension and appropriate decisions during an emergency situation.

Keywords: cognition, visual, decision making, graphics, recognition

Procedia PDF Downloads 268
198 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

Technology adoption models are extensively used in the literature to explore drivers and inhibitors affecting the adoption of Computer Assisted Audit Techniques and Tools (CAATTs). Further studies from recent years suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries. This is a result of the unique characteristics of the auditing process, which are expressed in the audit risk elements and the risk-based auditing approach, as encoded in the auditing standards. Since these audit risk factors are not part of the existing models that are used to explain technology adoption, these models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill the gap in the literature, which exists as a result of using generic technology adoption models. Followed by a pretest and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study aims to reveal determinants related to audit risk factors that influence the adoption of CAATTs in audits and proposes a new modeling approach for the successful adoption of CAATTs. The findings emphasize several important aspects: (1) while large CPA firms developed their own inner guidelines to assess the audit risk components, other CPA firms do not follow a formal and validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools. On the other hand, small and mid-sized CPA firms incorporate standard CAATTs and still need to catch up to better understand what CAATTs can offer and how they can contribute to the quality of the audit; (3) the top management of mid-sized and small CPA firms should be more proactive and updated about CAATTs capabilities and contributions to audits; and (4) All CPA firms consider professionalism as a major challenge that must be constantly managed to ensure an optimal CAATTs operation. The study extends the existing knowledge of CAATTs adoption by looking at it from a risk-based auditing approach. It suggests a new model for CAATTs adoption by incorporating influencing audit risk factors that auditors should examine when considering CAATTs adoption. Since the model can be used in various audited scenarios and supports strategic, risk-based decisions, it maximizes the great potential of CAATTs on the quality of the audits. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers and regulators. Moreover, it may motivate audit standard-setters to issue updated guidelines regarding CAATTs adoption in audits.

Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models

Procedia PDF Downloads 67
197 A Temporary Shelter Proposal for Displaced People

Authors: İrem Yetkin, Feray Maden, Seda Tosun, Yenal Akgün, Özgür Kilit, Koray Korkmaz, Gökhan Kiper, Mustafa Gündüzalp

Abstract:

Forced migration, whether caused by conflicts or other factors, frequently places individuals in vulnerable situations, necessitating immediate access to shelter. To promptly address the immediate needs of affected individuals, temporary shelters are often established. These shelters are characterized by their adaptable and functional nature, encompassing lightweight and sustainable structural systems, rapid assembly capabilities, modularity, and transportability. The shelter design is contingent upon demand, resulting in distinct phases for different structural forms. A multi-phased shelter approach covers emergency response, temporary shelter, and permanent reconstruction. Emergency shelters play a critical role in providing immediate life-saving aid, while temporary and transitional shelters, which are also called “t-shelters,” offer longer-term living environments during the recovery and rebuilding phases. Among these, temporary shelters are more extensively covered in the literature due to their diverse habitation functions. The roles of emergency shelters and temporary shelters are inherently separate, addressing distinct aspects of sheltering processes. Given their prolonged usage, temporary shelters are built for greater durability compared to emergency shelters. Nonetheless, inadequacies in temporary shelters can lead to challenges in ensuring habitability. Issues like non-expandable structures unsuitable for accommodating large families, the use of short-term shelters that worsen conditions, non-waterproof materials providing insufficient protection against bad weather conditions, and complex installation systems contribute to these problems. Given the aforementioned problems, there is a need to develop adaptive shelters that feature lightweight components for ease of transport, can be assembled rapidly, and use durable materials to withstand adverse weather conditions. In this study, first, the state of the art on temporary shelters is presented. Then, an adaptive temporary shelter composed of foldable plates is proposed, which can be easily assembled and transported. The proposed shelter is discussed in terms of its movement capacity, transportability, and flexibility. This study makes a valuable contribution to the literature since it not only offers a systematic analysis of temporary shelters utilizing kinetic systems but also presents a practical solution that meets the necessary design requirements.

Keywords: deployable structures, foldable plates, forced migration, temporary shelters

Procedia PDF Downloads 72
196 Market Segmentation of Cruise Ship Passengers: Implications for Marketing of Local Products and Services at Destination Points

Authors: Gunnar Oskarsson, Irena Georgsdottir

Abstract:

Tourism has been growing remarkably fast in recent years, including the cruise industry, which is gaining increasing popularity among various groups of travelers. It is a challenging task for companies serving cruise ship passengers with local products and services at the point of destination to reach them in due time with information about their offerings, as well as to learn how to adapt their offerings and messages to the type of customers arriving on each particular occasion. Although some research has been conducted in this sphere, there is still limited knowledge about many specifics within this sector of the tourist industry. The objective of this research is to examine one of these, with the main goal of studying the segmentation of cruise passengers and learning about marketing practices directed towards them. A qualitative research method, based on in-depth interviews, was used, as this provides an opportunity to gain insight into the participants' perspectives. Interviews were conducted with 10 respondents from different companies in the tourist industry in Iceland, who interact with cruise passengers on a regular basis in their work environment. The main objective was to gain an understanding of what distinguishes different customer groups, or segments, in this industry, and of the marketing approaches directed towards them. The main findings reveal that participants note the strongest differences between cruise passengers of different nationalities, passengers coming on different ships (size and type), and passengers arriving at different times of the year. A marked difference was noticed between nationalities across four main segments, American, British, other European, and Asian customers, although some of these segments could be divided into further sub-segments. Other important differentiating factors were the size and type of ship, the quality or number of stars of the ship, and the time of year of travel. Companies serving cruise ship passengers, as well as the customers themselves, could benefit if the offerings of services were designed specifically for particular segments within the industry. Concerning marketing towards cruise passengers, the results indicate that it is carried out almost exclusively through the Internet, using a reliable website and search engine optimization, as well as by word-of-mouth. This research can assist practitioners by offering a deeper understanding of the approaches that may be effective in marketing local products and services to cruise ship passengers, based on their segmentation, and by identifying effective ways to reach them. The research, furthermore, provides a valuable contribution to marketing knowledge for the benefit of an increasingly important segment of a fast-growing tourist industry.

Keywords: capabilities, global integration, internationalisation, SMEs

Procedia PDF Downloads 401
195 Clubhouse: A Minor Rebellion against the Algorithmic Tyranny of the Majority

Authors: Vahid Asadzadeh, Amin Ataee

Abstract:

Since the advent of social media, there has been a wave of optimism among researchers and civic activists about the influence of virtual networks on the democratization process, which has gradually waned. One of the lesser-known concerns is how to increase the possibility of hearing the voices of different minorities. According to the theory of media logic, the media, using their technological capabilities, act as a structure through which events and ideas are interpreted. Social media, through the use of machine learning and algorithms, has formed a kind of structure in which the voices of minorities and less popular topics are lost amid the commotion of the trends. In fact, the recommender systems and algorithms used in social media are designed to help promote trends and make popular content more popular, while content that belongs to minorities is constantly marginalized. As social networks gradually play a more active role in politics, the possibility of freely participating in the reproduction and reinterpretation of structures in general, and political structures in particular (as Laclau and Mouffe had in mind), can be considered a criterion of democracy in action. The point is that the media logic of virtual networks is shaped by the rule, and even the tyranny, of the majority, and this logic does not make it possible to design a self-founding and self-revolutionary model of democracy. In other words, today's social networks, though seemingly full of variety, are governed by a logic of homogeneity and do not allow for the multiplicity found in immanent radical democracies (influenced by Gilles Deleuze). However, with the emergence and increasing popularity of Clubhouse as a new social medium, there seems to be a shift in the social media space: the diminishing role of algorithms and recommender systems as content delivery interfaces. This has meant that, in the Clubhouse, the voices of minorities are better heard, and the diversity of political tendencies manifests itself better. The purpose of this article is to show, first, how social networks serve the elimination of minorities in general, and second, to argue that the media logic of social networks must adapt to new interpretations of democracy that give more space to minorities and human rights. Finally, this article will show how the Clubhouse serves these new interpretations of democracy, at least in a minimal way. To achieve these goals, the article uses a descriptive-analytical method: first, the relation between media logic and postmodern democracy is examined; then, the political economy of popularity in social media and its conflict with democracy is discussed; finally, it is explored how the Clubhouse provides a new horizon for the concepts embodied in radical democracy, a horizon that more effectively serves the rights of minorities and human rights in general.

Keywords: algorithmic tyranny, Clubhouse, minority rights, radical democracy, social media

Procedia PDF Downloads 145
194 A Perspective of Digital Formation in the Solar Community as a Prototype for Finding Sustainable Algorithmic Conditions on Earth

Authors: Kunihisa Kakumoto

Abstract:

"Purpose": Global environmental issues are now being raised in a global dimension. By using algorithms to predict sprawl phenomena that exceed the limits of nature, we can expect to protect our social life within those limits. The sustainable state of the planet consists in maintaining a balance between the capacity of nature and the demands of our social life. The amount of water on earth is finite; sustainability is therefore highly dependent on water capacity. A certain amount of water is stored in forests through planting and green space, so the amount of water can be considered in relation to the green area. CO2 is also absorbed by green plants. "Possible measurements and methods": The concept of the solar community has been introduced in technical papers at many international conferences. The solar community concept is based on data collected from one solar model house. This algorithmic study simulates the amount of water stored by lush green vegetation. In addition, we calculated and compared the CO2 emissions of the solar community with the CO2 reduction achieved by greening. Based on these trial calculation results, we simulate the sustainable state of the earth as an algorithmic trial calculation. We also consider composing such solar community groups using digital technology as a control technology. "Conclusion": We consider the solar community a prototype for finding sustainable conditions for the planet. The role of water is very important, as the supply capacity of water is limited; however, the circulation of social life is not constructed according to the mechanisms of nature. The simulation trial calculation is explained using the total water supply volume as an example. In this process, the algorithmic calculation considers the total capacity of the water supply together with the population and the number of inhabitants the area can support. Green, vegetated land is very important to retain enough water, and green vegetation is also very important to maintain the CO2 balance. A simulation trial calculation is possible from the relationship between the CO2 emissions of the solar community and the CO2 reduction due to greening. To find this overall balance and the sustainable conditions, the algorithmic simulation takes into account lush vegetation and the total water supply. The search for sustainable conditions is carried out by simulating an algorithmic model of the solar community as a prototype; in this prototype example, balance is achieved. The activities of our social life must take place within the permissible limits of natural mechanisms. We also aim for a more ideal balance by utilizing auxiliary digital control technology such as AI.
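
To make the kind of balance check described above concrete, here is a minimal sketch in Python; every figure and name in it is a placeholder assumption for illustration, not data or code from the solar community study.

```python
# Minimal sketch of the water and CO2 balance check described above.
# All figures are placeholder assumptions, not data from the study.
def supportable_population(total_water_supply_m3, per_capita_use_m3):
    """How many inhabitants the available water can sustain per year."""
    return total_water_supply_m3 / per_capita_use_m3

def co2_balance(emissions_t, green_area_ha, absorption_t_per_ha):
    """Net CO2 after subtracting what the green area can absorb."""
    return emissions_t - green_area_ha * absorption_t_per_ha

water_supply = 1_500_000      # m^3 per year (assumed)
per_capita = 120              # m^3 per person per year (assumed)
emissions = 8_000             # t CO2 per year for the community (assumed)
green_area = 600              # ha of planted area (assumed)
absorption = 10               # t CO2 per ha per year (assumed)

print("supportable population:", supportable_population(water_supply, per_capita))
print("net CO2 (t/yr):", co2_balance(emissions, green_area, absorption))
```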

Keywords: solar community, sustainability, prototype, algorithmic simulation

Procedia PDF Downloads 61
193 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model employs implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
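
For readers unfamiliar with FNO-style models, the following minimal sketch illustrates the two ideas named in the abstract: a spectral (Fourier) convolution layer and an implicit, weight-shared recurrence that deepens the network without adding parameters. It is a toy 1-D NumPy illustration under assumed names and shapes, not the authors' IU-FNO implementation (which also incorporates a U-Net branch for small-scale structures).

```python
# Toy sketch (not the authors' code): a 1-D Fourier layer with an implicit
# (weight-shared) recurrence, the core idea behind FNO-style models.
import numpy as np

def fourier_layer(u, weights, k_max):
    """Spectral convolution: FFT, keep k_max low-frequency modes,
    multiply by learned complex weights, inverse FFT."""
    u_hat = np.fft.rfft(u)                      # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:k_max] = u_hat[:k_max] * weights   # act only on retained modes
    return np.fft.irfft(out_hat, n=u.size)      # back to physical space

def implicit_fno_step(u, weights, k_max, n_inner=4):
    """Implicit recurrence: the same Fourier layer (shared weights) is
    applied repeatedly, deepening the network without extra parameters."""
    v = u
    for _ in range(n_inner):
        v = v + np.tanh(fourier_layer(v, weights, k_max))  # residual update
    return v

# Toy usage on a periodic 1-D field (illustrative only).
rng = np.random.default_rng(0)
n, k_max = 128, 12
u0 = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))
w = rng.normal(size=k_max) + 1j * rng.normal(size=k_max)   # "learned" weights
u1 = implicit_fno_step(u0, 0.1 * w, k_max)
print(u1.shape)   # (128,)
```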

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 74
192 Economic Policy to Promote Small and Medium-Sized Enterprises in Georgia in the Post-Pandemic Period

Authors: Gulnaz Erkomaishvili

Abstract:

Introduction: The paper assesses the impact of the COVID-19 pandemic on the activities of small and medium-sized enterprises in Georgia, identifies their problems, and analyzes the state's economic policy measures. During the pandemic, entrepreneurs named the imposition of restrictions, access to financial resources, a shortage of qualified personnel, high tax rates, and unhealthy competition in the market, among others, as the main challenges. The Georgian government has had to take special measures to mitigate the impact of the crisis caused by the pandemic; for example, in 2020 it mobilized more than 1.6 billion GEL for various events to support entrepreneurs. A small and medium-sized entrepreneurship development strategy is presented based on the research; corresponding conclusions are drawn, and recommendations are developed. Objectives: The object of the research is small and medium-sized enterprises and the economic-political decisions aimed at their promotion. Methodology: The paper uses general and specific methods, in particular analysis, synthesis, induction, deduction, scientific abstraction, comparative and statistical methods, as well as expert evaluation. In-depth interviews with experts were conducted to determine quantitative and qualitative indicators; publications of the National Statistics Office of Georgia were used to determine the regularity between analytical and statistical estimates. Theoretical and applied research by international organizations and academic economists was also used. Contributions: The COVID-19 pandemic has had a significant impact on small and medium-sized enterprises, for which the lockdown was a major challenge. Total sales volume decreased; at the same time, the innovative capabilities of enterprises and the volume of sales through remote channels increased. As for the assessment of state support measures by small and medium-sized entrepreneurs, despite the existence of support programs, a large number of entrepreneurs still do not evaluate the measures taken by the state positively. Among the desirable state measures that would improve the activities of small and medium-sized entrepreneurs, those who assessed the state's activity negatively or largely negatively named: tax incentives or exemption from certain taxes at the initial stage; periodic trainings and the organization of digital technology and marketing courses to improve the qualification of employees; logic and adequacy of the criteria when awarding grants and funding; facilitating the finding of investors; and less bureaucracy.

Keywords: small and medium enterprises, small and medium entrepreneurship, economic policy for small and medium entrepreneurship development, government regulations in Georgia, COVID-19 pandemic

Procedia PDF Downloads 155
191 Unlocking Synergy: Exploring the Impact of Integrating Knowledge Management and Competitive Intelligence for Synergistic Advantage for Efficient, Inclusive and Optimum Organizational Performance

Authors: Godian Asami Mabindah

Abstract:

The convergence of knowledge management (KM) and competitive intelligence (CI) has gained significant attention in recent years as organizations seek to enhance their competitive advantage in an increasingly complex and dynamic business environment. This research study aims to explore and understand the synergistic relationship between KM and CI and its impact on organizational performance. By investigating how the integration of KM and CI practices can contribute to decision-making, innovation, and competitive advantage, this study seeks to unlock the potential benefits and challenges associated with this integration. The research employs a mixed-methods approach to gather comprehensive data. A quantitative analysis is conducted using survey data collected from a diverse sample of organizations across different industries. The survey measures the extent of integration between KM and CI practices and examines the perceived benefits and challenges associated with this integration. Additionally, qualitative interviews are conducted with key organizational stakeholders to gain deeper insights into their experiences, perspectives, and best practices regarding the synergistic relationship. The findings of this study are expected to reveal several significant outcomes. Firstly, it is anticipated that organizations that effectively integrate KM and CI practices will outperform those that treat them as independent functions. The study aims to highlight the positive impact of this integration on decision-making, innovation, organizational learning, and competitive advantage. Furthermore, the research aims to identify critical success factors and enablers for achieving constructive interaction between KM and CI, such as leadership support, culture, technology infrastructure, and knowledge-sharing mechanisms. The implications of this research are far-reaching. Organizations can leverage the findings to develop strategies and practices that facilitate the integration of KM and CI, leading to enhanced competitive intelligence capabilities and improved knowledge management processes. Additionally, the research contributes to the academic literature by providing a comprehensive understanding of the synergistic relationship between KM and CI and proposing a conceptual framework that can guide future research in this area. By exploring the synergies between KM and CI, this study seeks to help organizations harness their collective power to gain a competitive edge in today's dynamic business landscape. The research provides practical insights and guidelines for organizations to effectively integrate KM and CI practices, leading to improved decision-making, innovation, and overall organizational performance.

Keywords: competitive intelligence, knowledge management, organizational performance, inclusivity, optimum performance

Procedia PDF Downloads 91
190 Generative Behaviors and Psychological Well-Being in Mexican Elders

Authors: Ana L. Gonzalez-Celis, Edgardo Ruiz-Carrillo, Karina Reyes-Jarquin, Margarita Chavez-Becerra

Abstract:

In recent decades, aging has been viewed from a more positive perspective, in which it is not only about losses and damage but also about being at a stage where one can enjoy life and live with well-being and quality of life. The challenge in feeling better is to find the resources that seniors have. For that reason, research on psychological well-being has addressed affect and life satisfaction (hedonic well-being), while a more recent tradition focuses on the development of capabilities and personal growth, considering both as the main indicators of quality of life. A resource that can be used in later age is generativity, which refers to the ability of older people to develop and grow through activities that contribute to the improvement of the context in which they live and participate. In this way, generative interest is understood as a favourable attitude that contributes to the common benefit while strengthening and enriching social institutions, ensuring continuity between generations and social development. On the other hand, generative behavior, as distinct from generative interest, is the expression of that attitude, reflected in activities that make a social contribution and a benefit for generations to come. Hence, the purpose of the research was to test whether there is an association between generative behaviour type and psychological well-being and its dimensions. For this reason, 188 Mexican adults from 60 to 94 years old (M = 69.78), 67% women and 33% men, completed two instruments: Ryff's Well-Being Scales, measuring psychological well-being with 39 items across two dimensions (hedonic and eudaimonic well-being), and the Loyola Generative Behaviors Scale, grouped into five categories: knowledge transmitted to the next generation, things to be remembered, creativity, being productive, contribution to the community, and responsibility for other people. In addition, a socio-demographic data sheet and self-reported health status were collected. The results indicated that psychological well-being and its dimensions were significantly associated with the presence of generative behavior, with the level of well-being being higher when certain generative behaviors were more frequent: the behavior associated with greater psychological well-being (M = 81.04, SD = 8.18) was "things to be remembered", the behavior associated with greater hedonic well-being (M = 73.39, SD = 12.19) was "responsibility for other people", and the behavior associated with greater eudaimonic well-being (M = 84.61, SD = 6.63) was "things to be remembered". The most important findings highlight the importance of generative behaviors in adulthood, providing empirical evidence that generativity in the last stage of life is associated with well-being. However, since differences in well-being were found across types of generative behaviors, it is proposed that generativity is not an isolated construct but needs other contextualized and related constructs that can operate simultaneously at different levels, taking into account the relationship between the environment and the individual and encompassing both the social and the psychological dimension.
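
As an illustration of the kind of association test described (not the authors' actual analysis), one might compare well-being scores across predominant generative behavior groups with a one-way ANOVA; the group names and data below are simulated placeholders.

```python
# Hypothetical sketch: testing whether psychological well-being differs by
# predominant generative behavior group. Data are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
groups = {
    "things_to_be_remembered": rng.normal(81, 8, 40),
    "responsibility_for_others": rng.normal(76, 9, 40),
    "contribution_to_community": rng.normal(74, 10, 40),
}

f_stat, p_value = stats.f_oneway(*groups.values())   # one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
for name, scores in groups.items():
    print(f"{name}: M = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")
```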

Keywords: eudaimonic well-being, generativity, hedonic well-being, Mexican elders, psychological well-being

Procedia PDF Downloads 273
189 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today's businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. The network trust architecture has evolved from trust-untrust to Zero Trust, in which essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, despite the inclusion of advanced security controls, remains exposed to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of many major attacks. With the explosion of cybersecurity threats such as viruses, worms, rootkits, malware, and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become both crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol suite, routing protocols, and security protocols. It thereby forms the basis for the detection of attack classes and applies signature-based matching for known cyberattacks and data mining based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. The unsupervised learning algorithm applied to network audit data trails results in the detection of unknown intrusions. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show that our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
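
To illustrate the two-pronged idea the abstract describes, the sketch below combines signature matching for known attacks with a very small association-rule pass over audit records for surfacing candidate rules about unknown attack patterns. The signatures, attribute names, and thresholds are illustrative assumptions, not the proposed system's actual rules or data.

```python
# Illustrative sketch (assumed names/thresholds, not the proposed system):
# signature matching for known attacks plus simple association-rule mining
# over audit records to surface candidate rules for unknown attacks.
from itertools import combinations
from collections import Counter

KNOWN_SIGNATURES = {"syn_flood": {"tcp", "flag:SYN", "rate:high"},
                    "port_scan": {"tcp", "many_ports", "single_src"}}

def match_signatures(event):
    """Return known attack labels whose signature is contained in the event."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig <= event]

def mine_pair_rules(audit_trail, min_support=0.3, min_confidence=0.8):
    """Tiny Apriori-style pass: frequent attribute pairs -> rules A => B."""
    n = len(audit_trail)
    item_counts = Counter(item for ev in audit_trail for item in ev)
    pair_counts = Counter(pair for ev in audit_trail
                          for pair in combinations(sorted(ev), 2))
    rules = []
    for (a, b), c in pair_counts.items():
        if c / n < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_counts[lhs]
            if conf >= min_confidence:
                rules.append((lhs, rhs, conf))
    return rules

# Toy audit trail: each record is a set of observed attributes.
trail = [{"tcp", "flag:SYN", "rate:high"},
         {"tcp", "flag:SYN", "rate:high", "single_src"},
         {"udp", "dns", "rate:low"}]
print(match_signatures(trail[0]))   # ['syn_flood']
print(mine_pair_rules(trail))       # e.g. ('flag:SYN', 'rate:high', 1.0), ...
```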

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 227
188 Insect Cell-Based Models: Australian Sheep Blowfly Lucilia cuprina Embryo Primary Cell Line Establishment and Transfection

Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody

Abstract:

Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is, therefore, critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for the sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi, and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to be capable of affording dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes resulting in knockdown. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, aiming to provide an effective insect cell model for testing RNAi efficacy in preliminary assessments and screening. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, providing promise for its future use in enhancing animal protection.

Keywords: lucilia cuprina, primary cell line establishment, RNA interference, insect cell transfection

Procedia PDF Downloads 73
187 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare

Authors: Piret Pernik

Abstract:

Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper examines the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks, and draws conclusions on the possible strategic impact of the widespread use of dual-use UAVs on battlefield outcomes in modern armed conflicts. The article contributes to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, many of which are dual-use, widely available, and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. They thus function as force multipliers enabling kinetic and electronic warfare attacks and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs offer considerable opportunities for commanders, for example because they can be operated without GPS navigation, which makes them less vulnerable and less dependent on satellite communications. They can be, and have been, used to conduct cyberattacks, electromagnetic interference, and kinetic attacks; however, they are highly vulnerable to such attacks themselves. So far, strategic studies, the literature, and expert commentary have overlooked the cybersecurity and electronic interference dimension of the use of dual-use UAVs, and studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. It is expected that the proliferation of dual-use commercial UAVs in armed and hybrid conflicts will continue and accelerate in the future. It is therefore important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper offers a unique analysis of small UAVs from the point of view of both the opportunities and the risks for commanders and other actors in armed conflict.

Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts

Procedia PDF Downloads 102