Search results for: competitive intelligence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2813

83 An Explorative Analysis of Effective Project Management of Research and Research-Related Projects within a Recently Formed Multi-Campus Technology University

Authors: Àidan Higgins

Abstract:

Higher education will be crucial in the coming decades in helping to make Ireland a nation known for innovation, competitive enterprise, and ongoing academic success, as well as a desirable location to live and work, with a high quality of life, vibrant culture, and inclusive social structures. Higher education institutions will actively connect with each student community, society, and business; they will help students develop a sense of place and identity in Ireland and provide the tools they need to contribute significantly to the global community. Higher education will also serve as a catalyst for novel ideas through research, many of which will become the foundation for long-lasting, inventive businesses in the future. The 2030 National Strategy on Education focuses on change and on developing the education system, with particular attention to how research is carried out; this emphasis is central to knowledge transfer and to a consistent research framework that exploits opportunities and draws on the necessary expertise. The newly formed Technological Universities (TUs) in Ireland are based on a government initiative to create a new type of higher education institution that focuses on applied and industry-focused research and education. The basis of the TU is to bring together two or more existing institutes of technology to create a larger and more comprehensive institution that offers a wider range of programs and services to students and industry partners. The TU model aims to promote collaboration between academia, industry, and community organizations to foster innovation, research, and economic development. It also aims to enhance the student experience by providing a more seamless pathway from undergraduate to postgraduate studies, as well as greater opportunities for work placements and engagement with industry partners. Additionally, the TUs are designed to place greater emphasis on applied research, technology transfer, and entrepreneurship, with the goal of fostering innovation and contributing to economic growth. A project is a collection of organised tasks carried out precisely to produce a singular output (product or service) within a given time frame, and project management is the set of activities that facilitates the successful implementation of a project. The significant differences between research and development projects are the (lack of) precise requirements and (the inability to) plan an outcome from the beginning of the project. The evaluation criteria for a research project must consider these and other "particularities" of such work; for instance, proving that something cannot be done may be a successful outcome. This study intends to explore how a newly established multi-campus technological university manages research projects effectively. It will identify the potential and difficulties of managing research projects, the tools, resources and processes available in a multi-campus technological university context, and the methods and approaches employed to deal with these difficulties. Key stakeholders such as project managers, academics, and administrators will be surveyed as part of the study, which will also involve an explorative investigation of current literature and data. The findings of this study will contribute significantly to creating best practices for project management in this setting and offer insightful information about the efficient management of research projects within a multi-campus technological university.

Keywords: project management, research and research-related projects, multi-campus technology university, processes

Procedia PDF Downloads 59
82 Pre-Industrial Local Architecture According to Natural Properties

Authors: Selin Küçük

Abstract:

Pre-industrial architecture is the integration of natural and subsequent properties through intelligence and experience. Since different settlements were relatively industrialized or non-industrialized at any given time, the term 'pre-industrial' does not refer to a definite period. Natural properties, which are the existing conditions and materials of the natural local environment, are climate, geomorphology and local materials. Subsequent properties, which are all anthropological comparatives, are the culture of societies, the requirements of people, and the construction techniques that people use. Yet, after industrialization, technology took technique's place, cultural effects were manipulated, requirements changed, and local/natural properties all but disappeared from architecture. Technology is universal and global and spreads easily; conversely, technique is time- and experience-dependent and should have a considerable cultural background. This research is about construction techniques according to the natural properties of a region and the classification of these techniques. Understanding local architecture is only possible by investigating its background, which is hard to reach. Architectural techniques change over time, both positively and negatively. The archaeological layers of a region sometimes give more accurate information about the transformation of architecture; however, the natural properties of a region are the most helpful elements for perceiving construction techniques. Many international sources from different cultures address local architecture, mentioning natural properties separately; unfortunately, no literature deals with this subject systematically. This research aims to develop a clear perspective on the existence of local architecture by categorizing archetypes according to natural properties. The ultimate goal of this research is to generate a clear classification of local architecture, independent of subsequent (anthropological) properties, across the world, in the manner of a handbook. Since local architecture is the most sustainable architecture with regard to its economic, ecological and sociological properties, there should be extensive information about construction techniques to learn from. Constructing the same buildings all over the world is one of the main criticisms of the modern architectural system; while this criticism goes on, identical buildings without identity continue to multiply. In the post-industrial era, technology has widely taken technique's place, cultural effects are manipulated, requirements have changed, and natural local properties have almost disappeared from architecture. This study does not urge architects to use local techniques; rather, it traces the progress of pre-industrial architectural evolution, which is healthier, cheaper and more natural. Migration from rural areas to developing/developed cities should be prohibited so that culture and construction techniques can be preserved. Since big cities have a psychological, sensational and sociological impact on people, rural settlers can be convinced not to migrate by providing new buildings designed according to natural properties and by maintaining their settlements. Improving rural conditions would remove the economic and sociological gulf between cities and rural areas. The expected result is that, if there is no deformation (the adaptation of other traditional buildings brought by immigration) or assimilation in a climatic region, very similar solutions should be found in the same climatic regions of the world, even where there is no relationship (trade, communication, etc.) among them.

Keywords: climate zones, geomorphology, local architecture, local materials

Procedia PDF Downloads 428
81 Railway Composite Flooring Design: Numerical Simulation and Experimental Studies

Authors: O. Lopez, F. Pedro, A. Tadeu, J. Antonio, A. Coelho

Abstract:

The future of the railway industry lies in the innovation of lighter, more efficient and more sustainable trains. Weight optimization in railway vehicles reduces power consumption and CO₂ emissions while increasing engine efficiency and the maximum speed reached; additionally, it reduces wear of wheels and rails, increases the space available for passengers, etc. Among the various systems that make up railway interiors, the flooring system is one with the greatest impact on passenger safety and comfort, as well as on the weight of the interior systems. Due to their high weight-saving potential, relatively high mechanical resistance, good acoustic and thermal performance, ease of modular design, cost-effectiveness and long life, new sustainable composite materials and panels provide the latest innovations for competitive solutions in the development of flooring systems. However, one of the main drawbacks of flooring systems is their relatively poor resistance to point loads. Point loads in railway interiors can be caused by passengers or by components fixed to the flooring system, such as seats and restraint systems, handrails, etc. They can therefore cause higher fatigue solicitations under service loads, or zones of high stress concentration under exceptional loads (higher longitudinal, transverse and vertical accelerations), thus reducing the system's useful life. To verify all the mechanical and functional requirements of a flooring system, many physical prototypes would traditionally be created during the design phase, with all the high costs associated with them. Nowadays, virtual prototyping methods based on computer-aided design (CAD) and computer-aided engineering (CAE) software allow a product to be validated before committing to physical test prototypes. The scope of this work was to use current computer tools and to integrate the processes of innovation, development, and manufacturing, reducing the time from design to finished product and optimising the development of the product for higher levels of performance and reliability. In this case, the mechanical response of several sandwich panels with different cores (polystyrene foams and composite corks) was assessed to optimise the weight and the mechanical performance of a flooring solution for railways. Sandwich panels with aluminium face sheets were tested to characterise their mechanical performance and to determine the polystyrene foam and cork properties when used as inner cores. A railway flooring solution was then fully modelled (including the elastomer pads that provide the required vibration isolation from the car body) and structural simulations were performed using FEM analysis to verify compliance with all the technical product specifications for the supply of a flooring system. Zones with high stress concentrations were studied and tested, and the influence of vibration modes on comfort level and stability is discussed. The information obtained with the computer tools was then complemented with several mechanical tests performed on selected solutions and on specific components. The results of the numerical simulations and of the experimental campaign are presented in this paper. This research work was performed as part of the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020.
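
To give a feel for the stiffness checks that such simulations address, the following is a minimal sketch based on classical sandwich-beam theory (thin-face approximation), not the FEM model used in this work; all material and geometry values are illustrative assumptions.

```python
# Minimal sandwich-beam check (classical theory, thin faces, simply supported).
# All values below are illustrative assumptions, not data from the paper.

def sandwich_deflection(P, L, b, t_f, t_c, E_f, G_c):
    """Midspan deflection under a central point load P (3-point bending)."""
    d = t_c + t_f                     # distance between face-sheet centroids [m]
    D = E_f * b * t_f * d ** 2 / 2.0  # equivalent flexural rigidity [N*m^2]
    AG = b * d * G_c                  # core shear stiffness [N]
    # Bending contribution + shear contribution (often dominant for soft cores)
    return P * L ** 3 / (48.0 * D) + P * L / (4.0 * AG)

# Example: 1 kN point load, 0.5 m span, aluminium faces, cork-like core
delta = sandwich_deflection(P=1e3, L=0.5, b=0.3, t_f=1e-3, t_c=20e-3,
                            E_f=70e9, G_c=5e6)
print(f"midspan deflection: {delta * 1e3:.2f} mm")
```

With a compliant core such as cork agglomerate, the shear term typically dominates the deflection, which is one reason point-load resistance is the critical design case noted above.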

Keywords: cork agglomerate core, mechanical performance, numerical simulation, railway flooring system

Procedia PDF Downloads 179
80 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When this theory is applied to any text of a folk song in a tonal language, one not only pieces together the exact melody, rhythm, and harmonies of that song as if it were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. As experimentation confirming the theorization, a semi-digital, semi-analog application was designed which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect the data extracted from his mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a structured song text (chorus-verse) on a computer and commands the machine to produce a melody of blues, jazz, world music, variety, etc. The software runs, offering a choice of harmonies, and the user then selects a melody.

Keywords: music, entanglement, language, science

Procedia PDF Downloads 79
79 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, causing critical methodological deficiencies in the conceptual theories of human knowledge and of knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are greatly affected by the very materialistic nature of the cognitive sciences. This nature has caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies investigated in this work are: (1) the segregation between cognitive abilities in knowledge-driven models; (2) the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and (3) the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that they are easy to apply in the structures of knowledge representation systems; rather, outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion; this redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. The findings of this work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion for determining AI's capability to achieve its ultimate objectives. Ultimately, we argue that our findings do not suggest that scientific progress has reached its peak, or that human scientific evolution has reached a point where it is impossible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge; they simply imply that, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
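
To make the second deficiency concrete: in Kleene's strong three-valued logic, an explicit 'unknown' value propagates through the connectives, something a two-valued machine representation cannot express. The sketch below is an illustrative aside, not part of the paper.

```python
# Kleene's strong three-valued logic (K3): TRUE, UNKNOWN, FALSE.
# Encoding the values as 1, 0, -1 makes AND/OR simple min/max operations.
TRUE, UNKNOWN, FALSE = 1, 0, -1

def k3_and(a, b):
    return min(a, b)   # a conjunction is only as true as its weakest operand

def k3_or(a, b):
    return max(a, b)   # a disjunction is as true as its strongest operand

def k3_not(a):
    return -a          # negation flips TRUE and FALSE, leaves UNKNOWN fixed

# "The artifact is authentic AND its provenance is documented",
# with the provenance unknown:
print(k3_and(TRUE, UNKNOWN))   # -> 0 (UNKNOWN): the conjunction stays
                               # undecided, where two-valued logic would be
                               # forced to commit to True or False
```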

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 111
78 Contribution of Research to Innovation Management in Traditional Fruit Production

Authors: Camille Aouinaït, Danilo Christen, Christoph Carlen

Abstract:

Introduction: Small and medium-sized enterprises (SMEs) are facing various challenges, such as pressure on environmental resources, the rise of downstream power, and trade liberalization. Remaining competitive by implementing innovations and engaging in collaborations could be a strategic solution. In Switzerland, the Federal Institute for Research in Agriculture (Agroscope), the federal schools of technology (EPFL and ETHZ), cantonal universities, and Universities of Applied Sciences (UAS) can provide substantial input. UAS were developed with the specific mission of matching labor market and societal needs. Research projects produce patents, publications and improved networks of scientific expertise. The study's goal is to measure the contribution of UAS and research organizations to innovation, and the impact of collaborations with partners in the non-academic environment, in Swiss traditional fruit production. Materials and methods: The European projects Traditional Food Network to improve the transfer of knowledge for innovation (TRAFOON) and Social Impact Assessment of Productive Interactions between science and society (SIAMPI) frame the present study. The former aims to fill the gap between the needs of traditional food-producing SMEs and the innovations arising from European projects; the latter developed a method to assess the impacts of scientific research. On one side, interviews with market players were performed to make an inventory of the needs of Swiss SMEs producing apricots and berries. The participative method allowed the current needs to be matched with existing innovations from past European projects. Swiss stakeholders (e.g., producers, retailers, an inter-branch organization of fruits and vegetables) rated the needs directly on a five-point Likert scale. To transfer the knowledge to SMEs, training workshops on specific topics were organized separately for apricot and berry actors. On the other side, a social network map was drawn to characterize the links between actors, with a focus on the Swiss canton of Valais and the UAS Valais Wallis; the type and frequency of interactions among actors were identified through interviews. Preliminary results: A list of 369 SME needs, grouped into 22 categories, was produced from 37 completed questionnaires. Swiss stakeholders rated 31 needs as very important. Training workshops on apricots focus on varietal innovations, storage, disease (bacterial blight), pests (Drosophila suzukii), sorting and rootstocks. Entrepreneurship was targeted through trademark discussions in berry production. The UAS Valais Wallis collaborated on a few projects with Agroscope, along with industry, at European and national levels. Political and public bodies intervene in the central area of agricultural extension, which induces close relationships between the research side and the practical side. Conclusions: The needs identified by Swiss stakeholders are becoming part of training workshops to incentivize innovation. The UAS Valais Wallis takes part in collaborative projects with the research environment and market players, bringing innovations that help SMEs in their contextual environment. A Strategic Research and Innovation Agenda will then be created in order to pursue research and address the issues faced by SMEs.

Keywords: agriculture, innovation, knowledge transfer, university and research collaboration

Procedia PDF Downloads 393
77 Trajectory Optimization for Autonomous Deep Space Missions

Authors: Anne Schattel, Mitja Echim, Christof Büskens

Abstract:

Trajectory planning for deep space missions has become a topic of great recent interest. Flying to space objects like asteroids presents two main challenges: one is to find rare earth elements, the other to gain scientific knowledge about the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical fields of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer; the resulting algorithms may be applied to other, earth-bound applications as well, such as deep-sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation using the example of an asteroid mining mission, including the cruise phase and approach, as well as the asteroid rendezvous, landing and surface exploration. To verify and test all methods, an interactive, real-time-capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e., trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs): the so-called indirect and direct methods. Indirect methods have been studied for several decades, and their use requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls; these direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by specialized NLP methods, e.g., sequential quadratic programming (SQP) or interior point (IP) methods. The movement of the spacecraft due to the gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims, such as short flight times and low energy consumption, are handled using a multi-criteria objective function. The resulting non-linear, high-dimensional optimization problems are solved with the software package WORHP ('We Optimize Really Huge Problems'), a routine combining SQP at an outer level with IP methods for the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight duration are investigated by choosing different weighting factors in the multi-criteria objective function. Varying mission trajectories are analyzed and compared, aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the case for trajectory optimization as a foundation for autonomous decision-making during deep space missions. At the same time, they show the enormous increase in possible flight maneuvers gained by being able to consider different and opposing mission objectives.
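
As a schematic illustration of the direct approach described above (not the KaNaRiA/WORHP implementation), the sketch below fully discretizes a toy one-dimensional OCP (steering a double integrator to a target with minimal control energy) into an NLP and solves it with an SQP-type method; the dynamics, horizon and grid are illustrative assumptions.

```python
# Full-discretization transcription of a toy OCP into an NLP, solved by SQP.
# Illustrative 1-D double integrator, not a spacecraft model.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0          # grid intervals and time horizon
h = T / N               # step size

def unpack(z):
    # decision vector: states x (N+1), velocities v (N+1), controls u (N)
    return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

def objective(z):       # minimize control energy ~ integral of u^2
    _, _, u = unpack(z)
    return h * np.sum(u ** 2)

def dynamics(z):        # Euler-discretized ODE constraints: x' = v, v' = u
    x, v, u = unpack(z)
    return np.concatenate([x[1:] - x[:-1] - h * v[:-1],
                           v[1:] - v[:-1] - h * u])

def boundary(z):        # start at rest at 0, arrive at rest at 1
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

res = minimize(objective, np.zeros(3 * N + 2), method="SLSQP",
               constraints=[{"type": "eq", "fun": dynamics},
                            {"type": "eq", "fun": boundary}])
print(res.success, f"control energy: {res.fun:.3f}")
```

In a mission setting, the decision vector grows to full 3-D position, velocity and thrust histories, the dynamics include gravitational terms, and a weighted multi-criteria objective trades flight time against energy, which is where large-scale solvers such as WORHP come in.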

Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning

Procedia PDF Downloads 411
76 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models, which relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions, constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records is available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of how the database was developed, ground motion relations are developed using regression analysis. Developing a ground motion relation is a challenging process that inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and applicability of the model, there is continuing interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set exactly to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows the input variables to be ranked in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important when only a small number of recordings is available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, in which the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered include magnitude, rupture distance (Rrup), and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models of increasing complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
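
A minimal sketch of this ranking step is shown below, using a cross-validated LASSO on synthetic data that stands in for the NGA records; the predictor names follow the paper, while the data-generating model and values are purely illustrative.

```python
# LASSO-based ranking of candidate ground-motion predictors (synthetic data).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 600
mag = rng.uniform(4.0, 8.0, n)        # moment magnitude
rrup = rng.uniform(1.0, 200.0, n)     # rupture distance [km]
vs30 = rng.uniform(150.0, 1500.0, n)  # site shear-wave velocity [m/s]

# Toy target: log spectral acceleration rising with magnitude,
# attenuating with distance, decreasing at stiffer sites
y = 1.2 * mag - 1.8 * np.log(rrup) - 0.5 * np.log(vs30) + rng.normal(0, 0.3, n)

names = ["magnitude", "ln(Rrup)", "ln(Vs30)"]
X = StandardScaler().fit_transform(
    np.column_stack([mag, np.log(rrup), np.log(vs30)]))

lasso = LassoCV(cv=5).fit(X, y)  # penalty chosen by cross-validation
for name, coef in sorted(zip(names, lasso.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:10s} {coef:+.3f}")  # standardized coefficients rank predictors
```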

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 328
75 Elevated Systemic Oxidative-Nitrosative Stress and Cerebrovascular Function in Professional Rugby Union Players: The Link to Impaired Cognition

Authors: Tom S. Owens, Tom A. Calverley, Benjamin S. Stacey, Christopher J. Marley, George Rose, Lewis Fall, Gareth L. Jones, Priscilla Williams, John P. R. Williams, Martin Steggall, Damian M. Bailey

Abstract:

Introduction and aims: Sports-related concussion (SRC) represents a significant and growing public health concern in rugby union, yet it remains one of the least understood injuries facing the health community today. Alongside increasing SRC incidence rates, there is concern that prior recurrent concussion may contribute to long-term neurologic sequelae in later life. This may be due to an accelerated decline in cerebral perfusion, a major risk factor for neurocognitive decline and neurodegeneration, though the underlying mechanisms remain to be established. The present study hypothesised that recurrent concussion in current professional rugby union players would result in elevated systemic oxidative-nitrosative stress, reflected by a free radical-mediated reduction in nitric oxide (NO) bioavailability, and in impaired cerebrovascular and cognitive function. Methodology: A longitudinal study design was adopted across the 2017-2018 rugby union season. Ethical approval was obtained from the University of South Wales Ethics Committee. Data collection is ongoing; the current report therefore documents results from the pre-season and the first half of the in-season data collection. Participants were initially divided into two subgroups: 23 professional rugby union players (aged 26 ± 5 years) and 22 non-concussed controls (aged 27 ± 8 years). Pre-season measurements were performed for cerebrovascular function (Doppler ultrasound of middle cerebral artery velocity (MCAv) in response to hypocapnia/normocapnia/hypercapnia), cephalic venous concentrations of the ascorbate radical (A•-, electron paramagnetic resonance spectroscopy) and NO (ozone-based chemiluminescence), and cognition (neuropsychometric tests). Notational analysis was performed to assess contact in the rugby group throughout each competitive game. Results: 1001 tackles and 62 injuries, including three concussions, were observed across the first half of the season. However, no associations were apparent between the number of tackles and any injury type (P > 0.05). The rugby group expressed greater oxidative stress, as indicated by increased A•- (P < 0.05 vs. control), and a corresponding decrease in NO bioavailability (P < 0.05 vs. control). The rugby group performed worse in the Rey Auditory Verbal Learning Test B (RAVLT-B; learning and memory) and in the Grooved Pegboard test using both the dominant and non-dominant hands (visuomotor coordination; P < 0.05 vs. control). There were no between-group differences in cerebral perfusion at baseline (MCAv: 54 ± 13 vs. 59 ± 12, P > 0.05). Likewise, no between-group differences in CVRCO₂Hypo (2.58 ± 1.01 vs. 2.58 ± 0.75, P > 0.05) or CVRCO₂Hyper (2.69 ± 1.07 vs. 3.35 ± 1.28, P > 0.05) were observed. Conclusion: The present study identified that rugby union players are characterized by impaired cognitive function alongside elevated systemic oxidative-nitrosative stress; however, this appears to be independent of any functional impairment in cerebrovascular function. Given the potential long-term trajectory towards accelerated cognitive decline in populations exposed to SRC, prophylaxis to increase NO bioavailability warrants consideration.

Keywords: cognition, concussion, mild traumatic brain injury, rugby

Procedia PDF Downloads 176
74 Change of Education Business in the Age of 5G

Authors: Heikki Ruohomaa, Vesa Salminen

Abstract:

Regions face huge competition to attract companies, businesses, inhabitants, students, etc., and in this way to improve the living and business environment, which is rapidly changing due to digitalization. From industry's point of view, on the other hand, the availability of a skilled labor force and an innovative environment are crucial factors. In this context, qualified staff are needed who can exploit the opportunities of digitalization and respond to future skills needs. The World Manufacturing Forum stated in its 2019 report that within the next five years, 40% of workers will have to change their core competencies. Through digital transformation, new technologies like cloud, mobile, big data, 5G infrastructure, platform technology, data analysis, and social networks with increasing intelligence and automation allow enterprises to capitalize on new opportunities and optimize existing operations to achieve significant business improvement. Digitalization will be an important part of citizens' everyday life and present in the working day of the average citizen and employee in the future. For that reason, the education system and education programs at all levels, from diaper age to doctorate, have been directed to fulfill this ecosystem strategy. Goal: The Fourth Industrial Revolution will bring unprecedented change to societies, education organizations and business environments. This article aims to identify how education, its content, the way education is delivered, and the education business as a whole are changing. Most important is how we should respond to this inevitable co-evolution. Methodology: The study aims to verify how the learning process is boosted by new digital content, new learning software and tools, and customer-oriented learning environments. Changes to education programs and individual education modules can be supported by applied research projects: they can be used to make proofs of concept of new technology and of new ways to teach and train, and, through the experience gathered, to change education content, ways of educating, and finally the education business as a whole. Major findings: Applied research projects can run proof-of-concept phases in real-environment field labs to test technology opportunities and new tools for training purposes. Customer-oriented applied research projects are also excellent opportunities for students to complete assignments using new knowledge and content, and for teachers to test new tools and create new ways to educate. New content and problem-based learning are used in future education modules. This article introduces case-study experiences of customer-oriented digital transformation projects and shows how the knowledge gathered on new digital content and new ways to educate has influenced education. The case study draws on the experiences of research projects, customer-oriented field labs/learning environments, and education programs at Häme University of Applied Sciences.

Keywords: education process, digitalization content, digital tools for education, learning environments, transdisciplinary co-operation

Procedia PDF Downloads 175
73 Bioinspired Green Synthesis of Magnetite Nanoparticles Using Room-Temperature Co-Precipitation: A Study of the Effect of Amine Additives on Particle Morphology in Fluidic Systems

Authors: Laura Norfolk, Georgina Zimbitas, Jan Sefcik, Sarah Staniland

Abstract:

Magnetite nanoparticles (MNPs) have been an area of increasing research interest due to their extensive applications in industry, such as carbon capture, water purification and, crucially, the biomedical industry. The use of MNPs in the biomedical industry is rising, with studies of their use as magnetic resonance imaging contrast agents, drug delivery systems, and hyperthermic cancer treatments becoming prevalent in the nanomaterials research community. Particles used for biomedical purposes must meet stringent criteria: they must have consistent shape and size from particle to particle. Variation in particle morphology can drastically alter the effective surface area of the material, making it difficult to dose correctly particles that are not homogeneous. Particles of defined shape, such as octahedra and cubes, have been shown to outperform irregularly shaped particles in some applications, leading to the need to synthesize particles of defined shape. In nature, highly homogeneous MNPs are found within magnetotactic bacteria, unique bacteria capable of producing magnetite nanoparticles internally under ambient conditions. Biomineralisation proteins control the properties of these MNPs, enhancing their homogeneity. One of these proteins, Mms6, has been successfully isolated and used in vitro as an additive in room-temperature co-precipitation (RTCP) reactions to produce particles of defined, monodisperse size and morphology. When considering future industrial scale-up, it is crucial to consider the cost and feasibility of an additive, as an additive that is not readily available or easily synthesized at a competitive price will not be sustainable. As such, the additives selected for this research are inspired by the functional groups of biomineralisation proteins but are cost-effective, environmentally friendly, and compatible with scale-up. Diethylenetriamine (DETA), triethylenetetramine (TETA), tetraethylenepentamine (TEPA) and pentaethylenehexamine (PEHA) have been successfully used in RTCP to modulate the properties of the particles synthesized, leading to the formation of octahedral nanoparticles with no use of organic solvents, heating or toxic precursors. By extending this principle to fluidic systems, ongoing research will reveal whether the amine additives can also exert morphological control in an environment suited to higher particle yields. Two fluidic systems have been employed: a peristaltic turbulent-flow mixing system suitable for the rapid production of MNPs, and a macrofluidic system for the synthesis of tailored nanomaterials under a laminar flow regime. In initial results, the presence of the amine additives in the turbulent-flow system appears to offer morphological control similar to that observed under RTCP conditions, with higher proportions of octahedral particles formed. This is a proof of concept which may pave the way to the green synthesis of tailored MNPs on an industrial scale. Mms6 and the amine additives have also been used in the macrofluidic system, with Mms6 allowing magnetite to be synthesized at unfavourable ferric ratios but no longer influencing particle size. This suggests that this synthetic technique, while still benefiting from the addition of additives, may not allow them to fully influence the particles formed, due to the faster timescale of the reaction. The amine additives have been tested at various concentrations, the results of which will be discussed in this paper.
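
For background, the co-precipitation route referred to above is conventionally written with the standard 2:1 ferric-to-ferrous stoichiometry (a textbook relation, not a detail reported in the abstract), which is why deviations toward 'unfavourable ferric ratios' matter:

```latex
% Standard magnetite co-precipitation stoichiometry (2:1 Fe(III):Fe(II))
\begin{equation*}
2\,\mathrm{Fe}^{3+} + \mathrm{Fe}^{2+} + 8\,\mathrm{OH}^{-}
\;\longrightarrow\; \mathrm{Fe_3O_4} + 4\,\mathrm{H_2O}
\end{equation*}
```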

Keywords: bioinspired, green synthesis, fluidic, magnetite, morphological control, scale-up

Procedia PDF Downloads 112
72 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction; on the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. This study therefore aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence it. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA; in addition, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, and gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the clinic's current approach of experience-based appointment duration estimation resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance, with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). This research also identified the critical variables affecting consultation duration: patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
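
A minimal sketch of the kind of model comparison described above is given below, on synthetic data; the feature names mirror those identified in the study, but the data-generating process, values and resulting errors are purely illustrative.

```python
# Comparing ML regressors for consultation-duration prediction by MAPE.
# Synthetic stand-in data; feature names mirror the abstract.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "new_patient": rng.integers(0, 2, n),        # new vs. established
    "doctor_experience": rng.uniform(1, 30, n),  # years in practice
    "appointment_day": rng.integers(0, 5, n),    # Mon..Fri
})
# Toy consultation duration in minutes
y = (15 + 10 * X["new_patient"] - 0.2 * X["doctor_experience"]
     + rng.normal(0, 3, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (GradientBoostingRegressor(), RandomForestRegressor()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mape = 100 * mean_absolute_percentage_error(y_te, pred)
    print(f"{type(model).__name__}: MAPE = {mape:.2f}%")
```

In deployment, such a model would be fit on the EMR-derived features and its MAPE compared against the clinic's experience-based estimates, as in the comparison reported above.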

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 120
71 Navigating AI in Higher Education: Exploring Graduate Students’ Perspectives on Teacher-Provided AI Guidelines

Authors: Mamunur Rashid, Jialin Yan

Abstract:

Recent years have witnessed a rapid evolution and integration of artificial intelligence (AI) in various fields, prominently influencing the education industry. Acknowledging this transformative wave, AI tools like ChatGPT and Grammarly have undeniably introduced new perspectives and skills, enriching the educational experiences of higher education students. The prevalence of AI use in higher education is also attracting the attention of a growing number of researchers across various dimensions, and university departments, offices, and professors have designed and released policies and guidelines on using AI effectively. Against this background, the study explores and analyzes graduate students' perspectives on the AI guidelines set by teachers. A mixed-methods design is adopted, employing in-depth interviews and focus groups to collect and investigate students' perspectives. Relevant materials, such as syllabi and course instructions, are also analyzed through documentary analysis to support the study, and surveys are used for data collection and students' background statistics. The integration of interviews and surveys provides a comprehensive array of student perspectives across academic disciplines. The study is anchored in the theoretical framework of self-determination theory (SDT), which explains students' perspectives on AI guidelines through three core needs: autonomy, competence, and relatedness. This framework is instrumental in understanding how AI guidelines influence students' intrinsic motivation and sense of empowerment in their learning environments. The qualitative analysis reveals a sense of confusion and uncertainty among students regarding the appropriate application and ethical considerations of AI tools, indicating potential challenges in meeting their needs for competence and autonomy. The quantitative data further elucidate these findings, highlighting a significant communication gap between students and educators in the formulation and implementation of AI guidelines. The critical findings of this study concern two aspects. First, the majority of graduate students are uncertain and confused about the AI guidelines given by teachers. Second, the design and effectiveness of course materials, such as syllabi and instructions, need to adapt to AI policies; some of the existing guidelines provided by teachers lack consideration of students' perspectives, leading to a misalignment with students' needs for autonomy, competence, and relatedness. More emphasis and effort need to be dedicated to training both teachers and students on AI policies and ethical considerations. To conclude, this study explores and reflects on graduate students' perspectives on teacher-provided AI guidelines, calling for additional training and strategies so that these guidelines can be better disseminated for effective integration and adoption. Although AI guidelines provided by teachers may be helpful and provide new insights for students, educational institutions should take a stronger anchoring role in fostering a motivating, empowering, and student-centered learning environment. The study also provides relevant recommendations, including guidance for students on the ethical use of AI and AI policy training for teachers in higher education.

Keywords: higher education policy, graduate students’ perspectives, higher education teacher, AI guidelines, AI in education

Procedia PDF Downloads 73
70 Digital Skepticism in a Legal Philosophical Approach

Authors: Dr. Bendes Ákos

Abstract:

Digital skepticism, a critical stance towards digital technology and its pervasive influence on society, presents significant challenges when analyzed from a legal philosophical perspective. This abstract aims to explore the intersection of digital skepticism and legal philosophy, emphasizing the implications for justice, rights, and the rule of law in the digital age. Digital skepticism arises from concerns about privacy, security, and the ethical implications of digital technology. It questions the extent to which digital advancements enhance or undermine fundamental human values. Legal philosophy, which interrogates the foundations and purposes of law, provides a framework for examining these concerns critically. One key area where digital skepticism and legal philosophy intersect is in the realm of privacy. Digital technologies, particularly data collection and surveillance mechanisms, pose substantial threats to individual privacy. Legal philosophers must grapple with questions about the limits of state power and the protection of personal autonomy. They must consider how traditional legal principles, such as the right to privacy, can be adapted or reinterpreted in light of new technological realities. Security is another critical concern. Digital skepticism highlights vulnerabilities in cybersecurity and the potential for malicious activities, such as hacking and cybercrime, to disrupt legal systems and societal order. Legal philosophy must address how laws can evolve to protect against these new forms of threats while balancing security with civil liberties. Ethics plays a central role in this discourse. Digital technologies raise ethical dilemmas, such as the development and use of artificial intelligence and machine learning algorithms that may perpetuate biases or make decisions without human oversight. Legal philosophers must evaluate the moral responsibilities of those who design and implement these technologies and consider the implications for justice and fairness. Furthermore, digital skepticism prompts a reevaluation of the concept of the rule of law. In an increasingly digital world, maintaining transparency, accountability, and fairness becomes more complex. Legal philosophers must explore how legal frameworks can ensure that digital technologies serve the public good and do not entrench power imbalances or erode democratic principles. Finally, the intersection of digital skepticism and legal philosophy has practical implications for policy-making. Legal scholars and practitioners must work collaboratively to develop regulations and guidelines that address the challenges posed by digital technology. This includes crafting laws that protect individual rights, ensure security, and promote ethical standards in technology development and deployment. In conclusion, digital skepticism provides a crucial lens for examining the impact of digital technology on law and society. A legal philosophical approach offers valuable insights into how legal systems can adapt to protect fundamental values in the digital age. By addressing privacy, security, ethics, and the rule of law, legal philosophers can help shape a future where digital advancements enhance, rather than undermine, justice and human dignity.

Keywords: legal philosophy, privacy, security, ethics, digital skepticism

Procedia PDF Downloads 43
69 The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities

Authors: Evgeniya V. Gavrilova, Sofya S. Belova

Abstract:

The present study aimed to clarify the relationship between general intellectual abilities and efficiency in free recall and rhymed word generation tasks after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies; in this context, it seems crucial to examine the efficiency of processing incidentally presented information in cognitive tasks and its relation to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words; each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was presented in each pair, so the words' semantics was processed intentionally. The city names were considered focal stimuli, whereas the common nouns were considered peripheral stimuli. In addition, each pair of words could be rhymed or non-rhymed, but this phonemic characteristic of the stimuli was processed incidentally. Participants were then asked to produce as many rhymes as they could to new words; the stimuli presented earlier could be used as well. After that, participants had to retrieve all the words presented earlier. Finally, verbal and non-verbal abilities were measured with a number of psychometric tests. In the free recall task, the intentionally processed focal stimuli had an advantage in recall compared to the peripheral stimuli; in addition, all rhymed stimuli were recalled more effectively than non-rhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral stimuli rather than focal ones; furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task. Thus, incidentally processed information had a supplemental influence on processing efficiency in the free recall task as well as in the word generation task. Different patterns of correlations between intellectual abilities and the efficiency of processing the different stimuli emerged in the two tasks. Non-verbal reasoning ability correlated positively with free recall of peripheral rhymed stimuli but was not related to performance on the rhymed word generation task. Verbal reasoning ability correlated positively with free recall of focal stimuli; in the rhymed word generation task, verbal intelligence correlated negatively with the generation of focal stimuli and positively with the generation of all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks; incidental information processing thus appears to be crucial for subsequent cognitive performance. Second, incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively by participants with high verbal reasoning ability in subsequent cognitive tasks, implying that general intellectual abilities could benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the 'Grant of the President of RF for young PhD scientists' (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities.

Keywords: focal and peripheral stimuli, general intellectual abilities, incidental information processing

Procedia PDF Downloads 229
68 The Challenges of Citizen Engagement in Urban Transformation: Key Learnings from Three European Cities

Authors: Idoia Landa Oregi, Itsaso Gonzalez Ochoantesana, Olatz Nicolas Buxens, Carlo Ferretti

Abstract:

The impact of citizens on urban transformations has become increasingly important in the pursuit of creating citizen-centered cities. Citizens at the forefront of the urban transformation process are key to establishing resilient, sustainable, and inclusive cities that cater to the needs of all residents; therefore, collecting data and information directly from citizens is crucial for the sustainable development of cities. Within this context, public participation becomes a pillar for acquiring the necessary information from citizens. Public participation in urban transformation processes establishes a more responsive, equitable, and resilient urban environment and cultivates a sense of shared responsibility and collective progress in building cities that truly serve the well-being of all residents. However, the implementation of public participation practices often overlooks strategies to effectively engage citizens in the process, resulting in unsuccessful participatory outcomes. This research therefore focuses on identifying and analyzing the critical aspects of citizen engagement during the same participatory urban transformation process in three different European contexts: Ermua (Spain), Elva (Estonia) and Matera (Italy). The participatory neighborhood regeneration process is divided into three main stages, intended to turn social districts into inclusive and smart neighborhoods: (i) the strategic level, (ii) the design level, and (iii) the implementation level. In the initial stage, the focus is on diagnosing the neighborhood and creating a shared vision with the community. The second stage centers on collaboratively designing various action plans to foster inclusivity and intelligence while pushing local economic development within the district. Finally, the third stage ensures the proper co-implementation of the designed actions in the neighborhood. To date, the presented results critically analyze the key aspects of engagement in the first stage of the methodology, the strategic plan, in the three above-mentioned contexts. It is a multifaceted study that incorporates three case studies to shed light on the various perspectives and strategies adopted by each city. The results indicate that, despite the various cultural contexts, all the cities face similar barriers when seeking to enhance engagement. Accordingly, the study identifies specific challenges within the participatory approach across the three cities, such as the existence of discontented citizens, communication gaps, inconsistent participation, and administrative resistance. Consequently, the key learnings of the process indicate that a collaborative sphere needs to be cultivated, educating both citizens and administrations in the aspects of co-governance and giving these practices appropriate space and their own communication channels. This study is part of the DROP project, funded by the European Union, which aims to develop a citizen-centered urban renewal methodology to transform social districts into smart and inclusive neighborhoods.

Keywords: citizen-centred cities, engagement, public participation, urban transformation

Procedia PDF Downloads 67
67 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, which represents an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments drawn from the engineer's past experience; the prior model is then updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two kinds of problems: 1. The results of the updating depend on the engineer's previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that attracts particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases are represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case is then composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modelling of variables because: 1. Engineers already draw an estimation of material properties from the experience collected during the assessment of similar structures, or from similar cases collected in the literature or in databases; 2. Material tests carried out on structures can be easily collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and help target the best interventions and further tests on the structure; CBR is a technique which may help to achieve this.
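To make the updating step concrete, the following minimal sketch (an illustration under stated assumptions, not the paper's implementation) applies a conjugate Normal-Normal Bayesian update to a material strength parameter; the prior values, test results and measurement scatter are all invented for the example.

```python
import numpy as np

def update_normal_prior(mu0, sigma0, data, sigma_meas):
    """Conjugate Normal-Normal update with known measurement std."""
    prec0 = 1.0 / sigma0**2                 # prior precision
    prec_d = len(data) / sigma_meas**2      # data precision
    mu_post = (prec0 * mu0 + prec_d * np.mean(data)) / (prec0 + prec_d)
    return mu_post, np.sqrt(1.0 / (prec0 + prec_d))

# Prior from design-stage information and engineering judgment (illustrative):
mu_post, sigma_post = update_normal_prior(
    mu0=30.0, sigma0=5.0,          # prior mean/std for concrete strength, MPa
    data=[27.5, 29.1, 31.2],       # core test results, MPa
    sigma_meas=2.0)                # assumed testing scatter, MPa
print(f"posterior: N({mu_post:.2f}, {sigma_post:.2f}^2) MPa")
```

In a CBR setting, the stored posterior of the most similar past case could serve directly as the prior in such an update.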

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 336
66 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points of discussion. However, the challenge is the reliability of the models employed to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to the social consequences described above, and considering that the industrial sector is critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of using a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the deep learning method for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
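Since the abstract names GMDH as the core regression machinery, the following minimal sketch illustrates the general GMDH idea on synthetic data: quadratic candidate models are fitted on every pair of inputs, ranked on a validation split (the external criterion), and the best outputs feed the next layer. It is a sketch of the technique, not the authors' model; all data and layer sizes are invented.

```python
import numpy as np
from itertools import combinations

def design(x1, x2):
    """Quadratic polynomial features for one input pair."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1*x1, x2*x2, x1*x2])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # 4 hypothetical risk factors
y = X[:, 0]*X[:, 1] + 0.5*X[:, 2] + 0.1*rng.normal(size=200)
tr, va = slice(0, 150), slice(150, 200)        # train / validation split

layer, keep = X, 3
for depth in range(2):                         # two self-organizing layers
    candidates = []
    for i, j in combinations(range(layer.shape[1]), 2):
        coef, *_ = np.linalg.lstsq(design(layer[tr, i], layer[tr, j]),
                                   y[tr], rcond=None)
        mse = np.mean((design(layer[va, i], layer[va, j]) @ coef - y[va])**2)
        candidates.append((mse, i, j, coef))
    candidates.sort(key=lambda c: c[0])        # external (validation) criterion
    best = candidates[:keep]
    layer = np.column_stack([design(layer[:, i], layer[:, j]) @ coef
                             for _, i, j, coef in best])
    print(f"layer {depth + 1}: best validation MSE = {best[0][0]:.4f}")
```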

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 290
65 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient of all, while configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀, and the validation of the models with producer data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. The statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and the Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE values as low as 0.01 mm/day and 0.03 mm/day, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yielded values between 0.02 and 0.04 for the producer data. The results further suggest that the developed technique can be applied to other locations by using site-specific data to improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and only two optimizers, validation with data from a single producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate the models with data from multiple producers and regions, and investigate the models' response to different seasonal and climatic conditions.
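As an illustration of the kind of MLP described above, the sketch below trains a small scikit-learn MLPRegressor to map four meteorological inputs to ET₀. The synthetic data merely stand in for the INMET and on-farm station records, and the architecture and "adam" solver choice are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(42)
n = 1000
temp = rng.uniform(10, 35, n)      # air temperature, degC
rh = rng.uniform(30, 95, n)        # relative humidity, %
rad = rng.uniform(5, 30, n)        # solar radiation, MJ/m2/day
wind = rng.uniform(0.5, 6, n)      # wind speed, m/s
# Stand-in target loosely shaped like an ET0 response (illustrative only):
et0 = 0.1*rad + 0.04*temp + 0.25*wind - 0.005*rh + rng.normal(0, 0.1, n)

X = np.column_stack([temp, rh, rad, wind])
X_tr, X_te, y_tr, y_te = train_test_split(X, et0, random_state=0)
scaler = StandardScaler().fit(X_tr)

mlp = MLPRegressor(hidden_layer_sizes=(32, 16), solver="adam",
                   max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_tr), y_tr)
pred = mlp.predict(scaler.transform(X_te))
print(f"MAE = {mean_absolute_error(y_te, pred):.3f} mm/day, "
      f"R2 = {r2_score(y_te, pred):.3f}")
```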

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 45
64 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems

Authors: Ibram Khalafalla Roshdy Shokry

Abstract:

This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to guide the modelling of complex wireless communication systems; consequently, the use of such a communication model is an important method in the construction of high-performance communication. SystemC has been selected as it provides a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created via the modeling of the CSMA protocol, which can be used to achieve communication among all of the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution toward next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a vehicular communication protocol for Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on the evaluation of the actual performance of vehicular communication, with particular attention to the effects of the real environment and mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of transmission range on V2X communication. The evaluation of V2I and V2V communication takes into account the real effects of low and high mobility on transmission. Multiagent systems have received considerable attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, because it directly influences their overall performance and scalability. This work explores essential communication factors and conducts a comparative assessment of the various protocols used in multiagent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multiagent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multiagent systems and their communication protocols.
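To make the medium access idea concrete, the following toy discrete-time simulation sketches CSMA behavior among a few agents sharing one channel: each agent senses the channel, transmits when idle, and backs off randomly after a collision. It is a plain-Python illustration of the protocol's logic, not the SystemC/SoC model the authors describe; all parameters are invented.

```python
import random

random.seed(1)
N_AGENTS, SLOTS, P_NEW = 5, 1000, 0.1
backoff = [0] * N_AGENTS        # remaining backoff slots per agent
pending = [False] * N_AGENTS    # does the agent hold a frame to send?
delivered = collisions = 0

for _ in range(SLOTS):
    for a in range(N_AGENTS):   # new frames arrive at random
        if not pending[a] and random.random() < P_NEW:
            pending[a] = True
    ready = [a for a in range(N_AGENTS) if pending[a] and backoff[a] == 0]
    for a in range(N_AGENTS):   # count down earlier backoffs
        if backoff[a] > 0:
            backoff[a] -= 1
    if len(ready) == 1:         # channel sensed idle by exactly one sender
        pending[ready[0]] = False
        delivered += 1
    elif len(ready) > 1:        # simultaneous senders collide
        collisions += 1
        for a in ready:
            backoff[a] = random.randint(1, 8)   # random backoff window

print(f"delivered={delivered}, collisions={collisions}")
```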

Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA

Procedia PDF Downloads 24
63 Knowledge Based Software Model for the Management and Treatment of Malaria Patients: A Case of Kalisizo General Hospital

Authors: Mbonigaba Swale

Abstract:

Malaria is an infection or disease caused by parasites (Plasmodium falciparum, which causes severe malaria, Plasmodium vivax, Plasmodium ovale, and Plasmodium malariae), transmitted by the bites of infected female Anopheles mosquitoes to humans. In Africa, and particularly in Uganda, these vectors comprise two main types, Anopheles funestus and Anopheles gambiae (for example, Anopheles arabiensis); they feed on humans inside the house, mainly at dusk, midnight and dawn, and rest indoors, which makes them effective transmitters (vectors) of the disease. People in both urban and rural areas have consistently become prone to repetitive attacks of malaria, causing many deaths and significantly increasing the poverty levels of the rural poor. Malaria is a national problem; it causes many maternal prenatal and antenatal disorders, anemia in pregnant mothers, low birth weights in the newly born, and convulsions and epilepsy among infants. Cumulatively, it kills about one million children every year in sub-Saharan Africa. It has been estimated to account for 25-35% of all outpatient visits, 20-45% of acute hospital admissions and 15-35% of hospital deaths. Uganda is the leading victim country, and within it the Rakai and Masaka districts are the most affected. It is not clear whether these abhorrent situations, the episodes of recurrence and the failure to cure the disease are a result of poor diagnosis, prescription and dosing, the treatment habits and compliance of the patients with the drugs, or the ethical domain of the stakeholders in relation to the mainstream methodology of malaria management. The research is aimed at offering an alternative approach to manage and deal decisively with the problem by using a knowledge-based software model of Artificial Intelligence (AI) capable of performing common-sense and cognitive reasoning, taking decisions as the human brain would, to provide instantaneous expert solutions and avoid speculative simulation of the problem during differential diagnosis in the most accurate inferential manner. This system will assist physicians in many kinds of medical diagnosis, in prescribing treatments and doses, and in monitoring patient responses; based on the body weight and age group of the patient, it will be able to provide instantaneous and timely information options and alternative ways and approaches to influence decision making during case analysis. The computerized system approach, a new model in Uganda termed "Software Aided Treatment" (SAT), will try to change the moral and ethical approach and influence conduct so as to improve the skills, experience and values (social and ethical) in the administration and management of the disease and drugs (combination therapy and generics) by both the patient and the health worker.
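As a purely structural illustration of the knowledge-based approach, the sketch below encodes a tiny rule base that maps patient findings to recommendation strings, returning the first matching rule. Every rule and threshold is an invented placeholder, not medical guidance; a real SAT system would encode validated national treatment guidelines.

```python
# Toy rule base: (condition over patient facts, recommendation string).
# All bands and advice strings are illustrative placeholders, NOT medical
# guidance; a real system would encode validated clinical rules.
RULES = [
    (lambda p: p["danger_signs"],
     "classify as severe malaria: refer for parenteral treatment"),
    (lambda p: p["weight_kg"] < 15,
     "uncomplicated malaria, lowest weight band: see dosing table row 1"),
    (lambda p: p["weight_kg"] < 25,
     "uncomplicated malaria, middle weight band: see dosing table row 2"),
    (lambda p: True,
     "uncomplicated malaria, highest weight band: see dosing table row 3"),
]

def recommend(patient):
    """Return the first recommendation whose condition matches."""
    for condition, advice in RULES:
        if condition(patient):
            return advice

print(recommend({"weight_kg": 22.0, "danger_signs": False}))
```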

Keywords: knowledge based software, management, treatment, diagnosis

Procedia PDF Downloads 54
62 Radish Sprout Growth Dependency on LED Color in Plant Factory Experiment

Authors: Tatsuya Kasuga, Hidehisa Shimada, Kimio Oguchi

Abstract:

Recent rapid progress in ICT (Information and Communication Technology) has advanced the penetration of sensor networks (SNs) and their attractive applications. Agriculture is one of the fields well placed to benefit from ICT. Plant factories, which control several parameters related to plant growth in closed areas, such as air temperature, humidity, water, culture medium concentration, and artificial lighting, by using computers and AI (Artificial Intelligence), are being researched in order to obtain stable and safe production of vegetables and medicinal plants all year round, anywhere, and to attain self-sufficiency in food. By providing isolation from the natural environment, a plant factory can achieve higher productivity and safer products. However, the biggest issue with plant factories is the return on investment: profits are tenuous because of the large initial investments and running costs, i.e. electric power, incurred. At present, LED (Light Emitting Diode) lights are being adopted because they are more energy-efficient and encourage photosynthesis better than the fluorescent lamps used in the past. However, further cost reduction is essential. This paper introduces experiments that reveal which color of LED lighting best enhances the growth of cultured radish sprouts. Radish sprouts were cultivated in an experimental environment formed by a hydroponics kit with three cultivation shelves (28 samples per shelf), each with an artificial lighting rack. Seven LED arrays of different colors (white, blue, yellow-green, green, yellow, orange, and red) were compared with a fluorescent lamp as the control. Lighting duration was set to 12 hours a day. Normal water with no fertilizer was circulated. Seven days after germination, the length, weight and leaf area of each sample were measured. Electrical power consumption for all lighting arrangements was also measured. Results and discussion: As to average sample length, no clear difference was observed among colors. As regards weight, the orange LED was less effective, and the difference was significant (p < 0.05). As to leaf area, the blue, yellow and orange LEDs were significantly less effective. However, all LEDs offered higher productivity per watt consumed than the fluorescent lamp. Of the LEDs, the blue LED array attained the best results in terms of length, weight and leaf area per watt consumed. Conclusion and future work: An experiment on radish sprout cultivation under 7 different-color LED arrays showed no clear difference in sample size. However, if electrical power consumption is considered, LEDs offered about twice the growth rate of the fluorescent lamp. Among them, blue LEDs showed the best performance. Further cost reduction, e.g. low-power lighting, remains a big issue for actual system deployment. An automatic plant monitoring system with sensors is another study target.
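The weight comparison reported above (significant at p < 0.05) can be reproduced in form with a standard two-sample t-test, as in the sketch below; the measurements and lamp power draws are invented stand-ins for the 28-sample shelves, shown only to illustrate the analysis and the growth-per-watt normalization.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
blue_led = rng.normal(1.10, 0.15, 28)      # sprout weight, g (invented)
fluorescent = rng.normal(1.05, 0.15, 28)   # control shelf (invented)
p_blue, p_fluor = 9.0, 18.0                # lamp power draw, W (invented)

t, p = stats.ttest_ind(blue_led, fluorescent)
print(f"t = {t:.2f}, p = {p:.3f}")
print(f"growth per W: blue {blue_led.mean()/p_blue:.3f} g/W vs "
      f"fluorescent {fluorescent.mean()/p_fluor:.3f} g/W")
```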

Keywords: electric power consumption, LED color, LED lighting, plant factory

Procedia PDF Downloads 186
61 Assessment of Neurodevelopmental Needs in Duchenne Muscular Dystrophy

Authors: Mathula Thangarajh

Abstract:

Duchenne muscular dystrophy (DMD) is a severe form of X-linked muscular dystrophy caused by mutations in the dystrophin gene, resulting in progressive skeletal muscle weakness. Boys with DMD also have significant cognitive disabilities. The intelligence quotient of boys with DMD, compared to peers, is approximately one standard deviation below average. Detailed neuropsychological testing has demonstrated that boys with DMD have a global developmental impairment, with verbal memory and visuospatial skills most significantly affected. Furthermore, total brain volume and gray matter volume are lower in children with DMD compared to age-matched controls. These results suggest a significant structural and functional compromise of the developing brain as a result of absent dystrophin protein expression. There is also some genetic evidence to suggest that mutations in the 3' end of the DMD gene are associated with more severe neurocognitive problems. Our working hypotheses are that (i) boys with DMD do not make gains in neurodevelopmental skills compared to typically developing children and (ii) women carriers of DMD mutations may have subclinical cognitive deficits. We also hypothesize that there may be an intergenerational vulnerability of cognition, with boys of DMD-carrier mothers being more affected cognitively than boys of non-DMD-carrier mothers. The objectives of this study are: 1. to assess neurodevelopment in boys with DMD at 4 time points and perform a baseline neuroradiological assessment; 2. to assess cognition in the biological mothers of DMD participants at baseline; 3. to assess possible correlations between DMD mutations and cognitive measures. This study also explores functional brain abnormalities in people with DMD by examining how regional and global brain connectivity underlies executive function deficits in DMD. Such research can contribute to a better holistic understanding of the cognitive alterations due to DMD and could potentially allow clinicians to create better-tailored treatment plans for the DMD population. There are four study visits for each participant (baseline, 2-4 weeks, 1 year, 18 months). At each visit, the participant completes the NIH Toolbox Cognition Battery, a validated psychometric measure recommended by the NIH Common Data Elements for use in DMD. Visits 1, 3, and 4 also involve the administration of the BRIEF-2, ABAS-3, PROMIS/NeuroQoL, PedsQL Neuromuscular Module 3.0, and the Draw a Clock Test, plus an optional fMRI scan with an N-back matching task. We expect to enroll 52 children with DMD, 52 mothers of children with DMD, and 30 healthy control boys. This study began in 2020, during the height of the COVID-19 pandemic, and recruitment was subsequently delayed by travel restrictions. However, we have persevered and continued to recruit new participants. We partnered with the Muscular Dystrophy Association (MDA), which helped advertise the study to interested families; since then, families from across the country have contacted us about their interest in the study. We plan to continue to enroll a diverse population of DMD participants to contribute toward a better understanding of Duchenne muscular dystrophy.

Keywords: neurology, Duchenne muscular dystrophy, muscular dystrophy, cognition, neurodevelopment, x-linked disorder, DMD, DMD gene

Procedia PDF Downloads 98
60 Facilitating the Learning Environment as a Servant Leader: Empowering Self-Directed Student Learning

Authors: Thomas James Bell III

Abstract:

Pedagogy is one's philosophy, theory, or method of teaching. This study examines the science of learning in light of the forced reconsideration of effective pedagogy brought on by the aftermath of the 2020 coronavirus pandemic. With the aid of various technologies, online education holds both challenges and promise to enhance the learning environment if implemented to facilitate student learning. Behaviorism centers on the belief that the instructor is the sage on the classroom stage, using repetition techniques as the primary learning instrument. This approach ascribes complete control of the learning environment to the instructor and works best when students learn by answering questions with immediate feedback. Such structured learning reinforcement tends to guide students' learning without considering learners' independence and individual reasoning, and such activities may inadvertently stifle students' ability to develop critical thinking and self-expression skills. Fundamentally, liberationist pedagogy dismisses the notion that education is merely about students learning things; it is more about the way students learn. The liberationist approach democratizes the classroom by redefining the roles of teacher and student: the teacher is no longer viewed as the sage on the stage but as a guide on the side, and students are viewed as creators of knowledge rather than empty vessels to be filled with knowledge. Moreover, students are well suited to decide how best to learn and which areas need improvement. This study will explore the classroom instructor as a servant leader in the twenty-first century, allowing students to integrate technology that accommodates more individual learning styles. The researcher will examine the Professional Scrum Master (PSM I) exam pass rates of 124 students in six sections of an Agile scrum course. The students will be separated into two groups: the first group will follow a structured instructor-led course outlined by a course syllabus; the second group will consist of several small teams (ten or fewer students) of self-led and self-empowered students. The teams will conduct several scrum events, including sprint planning meetings, daily scrums, sprint reviews, and retrospective meetings, throughout the semester, with the instructor facilitating the teams' activities as needed. The methodology for this study will use a compare-means t-test to compare the mean exam pass rate of one group to that of the second group; a one-tailed test (i.e., less than or greater than) will be used, with the null hypothesis that the difference between the groups in the population is zero. The major findings will expand on the pedagogical approach that suggests pedagogy primarily exists in support of teacher-led learning, which has formed the pillars of traditional classroom teaching. In light of the fourth industrial revolution, however, there is a fusion of learning platforms across the digital, physical, and biological worlds, with disruptive technological advancements in areas such as the Internet of Things (IoT), artificial intelligence (AI), 3D printing, robotics, and others.
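The planned comparison can be sketched as follows with SciPy; the pass/fail outcomes and group sizes are invented placeholders for the PSM I results, and the one-tailed alternative mirrors the stated hypothesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
instructor_led = rng.binomial(1, 0.70, 62)   # 1 = passed PSM I (invented)
self_led_teams = rng.binomial(1, 0.80, 62)   # (invented)

# One-tailed test: H0 no difference, H1 self-led mean pass rate is greater.
t, p = stats.ttest_ind(self_led_teams, instructor_led, alternative="greater")
print(f"t = {t:.2f}, one-tailed p = {p:.3f}")
```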

Keywords: pedagogy, behaviorism, liberationism, flipping the classroom, servant leader instructor, agile scrum in education

Procedia PDF Downloads 142
59 Emerging Positive Education Interventions for Clean Sport Behavior: A Pilot Study

Authors: Zeinab Zaremohzzabieh, Syasya Firzana Azmi, Haslinda Abdullah, Soh Kim Geok, Aini Azeqa Ma'rof, Hayrol Azril Mohammed Shaffril

Abstract:

The escalating prevalence of doping in sports, casting a shadow over both high-performance and recreational settings, has emerged as a formidable concern, particularly within the realm of young athletes. Doping, characterized by the surreptitious use of prohibited substances to gain a competitive edge, underscores the pressing need for comprehensive and efficacious preventive measures. This study aims to address a crucial void in current research by unraveling the motivations that drive clean adolescent athletes to steadfastly abstain from performance-enhancing substances. In navigating this intricate landscape, the study adopts a positive psychology perspective, investigating the conditions and processes that contribute to the holistic well-being of individuals and communities. At the heart of this exploration lies the application of the PERMA model, a comprehensive positive psychology framework encapsulating positive emotion, engagement, relationships, meaning, and accomplishment. This model functions as a distinctive lens, dissecting intervention results to offer nuanced insights into the complex dynamics of clean sport behavior. The research is poised to usher in a paradigm shift from conventional anti-doping strategies, predominantly fixated on identifying deficits, toward an innovative approach firmly rooted in positive psychology. The objective of this study is to evaluate the efficacy of a positive education intervention program tailored to promote clean sport behavior among Malaysian adolescent athletes. Representing unexplored terrain within the landscape of anti-doping efforts, this initiative endeavors to shift the focus from deficiencies to strengths. The pilot study engages thirty adolescent athletes, divided into a control group of 15 and an experimental group of 15, and serves to assess the effectiveness of the prepared intervention package, providing indispensable insights that will guide the finalization of an all-encompassing intervention program for the main study. The main study adopts a two-arm randomized controlled trial methodology, actively involving adolescent athletes from diverse Malaysian high schools. This approach aims to address critical lacunae in anti-doping strategies and is specifically calibrated to resonate with the unique context of Malaysian schools. The study, cognizant of the imperative to develop preventive measures harmonizing with the cultural and educational milieu of Malaysian adolescent athletes, aspires to cultivate a culture of clean sport. In conclusion, this research aspires to contribute unprecedented insights into the efficacy of positive education interventions firmly rooted in the PERMA model. By unraveling the intricacies of clean sport behavior, particularly within the context of Malaysian adolescent athletes, the study seeks to introduce transformative preventive methods. The adoption of positive psychology as an anti-doping tool represents an innovative and promising approach, bridging a conspicuous gap in scholarly research and offering potential remedies for the sporting community. As this study unfolds, it carries the promise not only to enrich our understanding of clean sport behavior but also to pave the way for positive change within the realm of adolescent sports in Malaysia.

Keywords: positive education interventions, a pilot study, clean sport behavior, adolescent athletes, Malaysia

Procedia PDF Downloads 56
58 Comparison of the Effect of Heart Rate Variability Biofeedback and Slow Breathing Training on Promoting Autonomic Nervous Function Related Performance

Authors: Yi Jen Wang, Yu Ju Chen

Abstract:

Background: Heart rate variability (HRV) biofeedback can promote autonomic nervous function and sleep quality and reduce psychological stress. In HRV biofeedback training, it is hoped that, through the guidance of machine video or audio, the patient can breathe slowly in accordance with his or her own heart rate changes so that the heart and lungs achieve resonance, thereby promoting autonomic nervous function; it has also been pointed out that slow breathing at 6 breaths per minute can likewise guide the patient to achieve cardiopulmonary resonance. However, no research has compared the effectiveness of video- or audio-guided HRV biofeedback training and metronome-guided slow breathing in achieving cardiopulmonary resonance. Purpose: To compare the promotion of autonomic nervous function performance between HRV biofeedback and slow breathing guided by a metronome. Method: This study used an experimental design with convenience sampling; the cases were randomly divided into an HRV biofeedback training group and a slow breathing training group. The HRV biofeedback training group underwent four weeks of laboratory HRV biofeedback training and used a home training device for autonomous training, while the slow breathing training group underwent four weeks of laboratory slow breathing training guided by a mobile phone breathing-metronome app and used the app for autonomous training at home. Autonomic nervous function-related performance was measured repeatedly at enrollment and four weeks after the intervention. The chi-square test, Student's t-test and other statistical methods were used to analyze the results, with p < 0.05 as the threshold for statistical significance. Results: A total of 27 subjects were included in the analysis. After four weeks of training, the HRV biofeedback training group showed significant improvement in the HRV indexes (SDNN, RMSSD, HF, TP) and in sleep quality; although the stress index also decreased, it did not reach statistical significance. In the slow breathing training group, only sleep quality improved significantly after four weeks of training; the HRV indexes (SDNN, RMSSD, TP) all increased, and although HF and stress indexes decreased, the changes were not statistically significant. Comparing the two groups after training, the HF index improved significantly in the HRV biofeedback training group. Although sleep quality improved in both groups, the between-group difference was not statistically significant. Conclusion: HRV biofeedback training is more effective in promoting autonomic nervous function than slow breathing training, but its effects on reducing stress and promoting sleep quality need to be explored with a larger sample. The results of this study can serve as a reference for clinical or community health promotion. In the future, HRV biofeedback training could be integrated into the development of AI-enabled wearable devices, making it more convenient for people to train independently and receive effective, timely feedback.
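For readers unfamiliar with the HRV indexes named above, the sketch below shows how the standard time-domain measures SDNN and RMSSD are computed from a series of RR intervals; the interval values are invented for illustration.

```python
import numpy as np

rr_ms = np.array([812, 845, 790, 830, 860, 815, 800, 825, 840, 810])  # RR, ms

sdnn = np.std(rr_ms, ddof=1)                    # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # beat-to-beat variability
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```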

Keywords: autonomic nervous function, HRV biofeedback, heart rate variability, slow breathing

Procedia PDF Downloads 174
57 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and cause inefficient use of resources. In particular, different approaches may be required for solving the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve; such problems are called NP-hard (non-deterministic polynomial-time hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, and non-derivative mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices, and the approach can also be used in trend application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach. It is based on chaos theory, which helps the algorithm improve population diversity and convergence speed. The starting point is the Chimp Optimization Algorithm (ChOA), a recently introduced metaheuristic algorithm inspired by nature. ChOA identifies four types of chimpanzees in a group, attackers, barriers, chasers, and drivers, and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivations of chimpanzees. However, this algorithm suffers from a poor convergence rate and difficulty escaping the local optimum trap when solving high-dimensional problems. Although ChOA and some of its variants employ strategies to overcome these problems, they are observed to be insufficient. Therefore, this study describes a newly expanded variant, called Ex-ChOA, in which hybrid models are proposed for the position updates of search agents and a dynamic switching mechanism is provided for the transition phases. This flexible structure addresses the slow convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming at success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical and constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA); in addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
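The chaotic ingredient can be illustrated with a toy population optimizer in which a logistic map replaces uniform random draws in the position update, as sketched below; this shows the chaos-enhanced update idea in general, not the Ex-ChOA equations themselves. All parameters are invented.

```python
import numpy as np

def sphere(x):                       # benchmark objective
    return np.sum(x * x, axis=-1)

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (20, 2))    # 20 agents, 2 dimensions
chaos = 0.7                          # logistic-map state

for it in range(100):
    best = pop[np.argmin(sphere(pop))]
    new_pop = []
    for x in pop:
        chaos = 4.0 * chaos * (1.0 - chaos)   # logistic map in (0, 1)
        step = chaos * (best - x)             # chaotic attraction to best
        new_pop.append(x + step + 0.01 * rng.normal(size=2))
    pop = np.array(new_pop)

print("best value:", sphere(pop).min())
```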

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 76
56 Elements of Creativity and Innovation

Authors: Fadwa Al Bawardi

Abstract:

In March 2021, the Saudi Arabian Council of Ministers issued a decision to form a committee called the "Higher Committee for Research, Development and Innovation," a committee linked to the Council of Economic and Development Affairs, chaired by the Chairman of the Council of Economic and Development Affairs, and concerned with the development of the research, development and innovation sector in the Kingdom. To discuss the dimensions of this step, let us first try to answer the following questions: Is there a difference between creativity and innovation? What are the factors of creativity in the individual? Are they genetic mental factors, or are they factors that an individual acquires through learning? The methodology included surveys conducted on more than 500 individuals, males and females, between the ages of 18 and 60. The answers are as follows. "Creativity" is the creation of a new idea, while "innovation" is the development of an already existing idea in a new, successful way. They are two sides of the same coin, as the "creative idea" needs to be developed and transformed into an "innovation" in order to achieve either strategic achievements at the level of countries and institutions, enhancing organizational intelligence, or achievements at the level of individuals. For example, the beginning of smartphones was just a creative idea from IBM in 1994, but the actual successful innovation in the manufacture, development and marketing of these phones came through Apple later. Nor does creativity have to be hereditary. There are three basic factors for creativity. The first factor is the presence of a challenge or an obstacle that the individual faces and seeks, through thinking, to find solutions to overcome, even if the thinking requires a long time. The second factor is the environment surrounding the individual, which includes science, training, experience gained, the ability to use techniques, and the ability to assess whether an idea is feasible or not. To achieve this factor, individuals must be aware of their own skills, strengths, hobbies, and the areas in which they can be creative, and they must also be self-confident and courageous enough to suggest new ideas. The third factor is experience and the ability to accept risk and a lack of initial success, and then to learn from mistakes and try again tirelessly. There are some tools and techniques that help the individual reach creative and innovative ideas, such as the Mind Maps tool, through which the available information is drawn out by writing a short word for each piece of information and arranging all other relevant information through clear lines, which helps in logical thinking and a correct overall view. There is also a tool called Flow Charts: graphics that show the sequence of data and expected results according to an ordered scenario of events and workflow steps, giving clarity to the ideas, their sequence, and what is expected of them. Other useful tools include the Six Hats tool, applied by a group of people for effective planning and detailed logical thinking, and the Snowball tool; all of them greatly help in organizing and arranging mental thoughts and making the right decisions, and all of these tools and techniques are easy to learn, apply and use.
The detailed figures and results of the conducted surveys are available upon request, with charts showing the percentages based on gender, age groups, and job categories.

Keywords: innovation, creativity, factors, tools

Procedia PDF Downloads 54
55 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute issues of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and the victimization of, both individuals and groups. Every day a massive number of crimes take place, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a point of contention that can create societal disturbance. Old-style crime-solving practices are unable to meet the requirements of the current crime situation. Crime analysis is one of the most important activities of most intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism; the Data Detective program from the software company Sentient, for example, uses data mining techniques to support the police (Sentient, 2017). The goals of this work are to test several predictive modeling solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient provided a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset; in addition, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models; a comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The studies demonstrated that, in comparison to other models, Gated Recurrent Units (GRU) produced better predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime; hence, time series analysis using a GRU could be a prospective additional feature in Data Detective.
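A minimal sketch of the GRU forecasting setup the study favored is shown below: a univariate daily series is windowed and a small GRU predicts the next day's count. The synthetic series and hyperparameters are illustrative stand-ins for the 2005-2019 records, and while the study itself worked in R, PyTorch is used here purely for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
t = np.arange(730)
series = (50 + 10*np.sin(2*np.pi*t/365)
          + rng.normal(0, 2, 730)).astype(np.float32)  # synthetic daily counts

win = 14                                    # two-week input window
X = np.stack([series[i:i+win] for i in range(len(series) - win)])
y = series[win:]
X = torch.from_numpy(X).unsqueeze(-1)       # (samples, window, 1 feature)
y = torch.from_numpy(y).unsqueeze(-1)

class GRUForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)
    def forward(self, x):
        out, _ = self.gru(x)
        return self.head(out[:, -1])        # last hidden state -> next value

model = GRUForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):                    # full-batch training, for brevity
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.3f}")
```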

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 164
54 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear

Authors: Grant Bifolchi

Abstract:

As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of growing paramount concern. Ghost Gear, also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG), represents one of the greatest threats to the world's oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments, governmental and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats and more from their respective bodies of water. However, these organizations' limited ability to effectively manage files and documents about the environmental problem further complicates matters. Whether it is data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, or the application of enforcement across disparate data, all of these factors place massive strains on organizations struggling to save the planet from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, BCom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to: • Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection and disposal. • Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards. • Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress, including acknowledging receipt of a report and sharing it with all pertinent stakeholders to ensure approvals are secured. • Enable and utilize Business Intelligence (BI) and Analytics to store and analyze data to optimize organizational performance, maintain anytime visibility of report status, user accountability, scheduling and management, and foster governmental transparency. • Maintain compliance reporting through highly defined, detailed and automated reports, enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners.

Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology

Procedia PDF Downloads 94