Search results for: spiritual intelligence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1755

75 The Importance of Development Evaluation to Preterm Children in Remote Area

Authors: Chung-Yuan Wang, Min Hsu, Bo-Ya Juan, Hsiv Ching Lin, Hsveh Min Lin, Hsiu-Fang Yeh

Abstract:

The success of Taiwan's National Health Insurance (NHI) system has attracted widespread praise internationally. However, the availability of medical care in remote areas is limited: without a convenient public transportation system and a mature social welfare policy, it is difficult for residents of these areas to regain their health and prevent disability. Preterm children are at higher risk of developmental delay, and preterm children in remote areas have the same right to rehabilitation resources as those in urban areas. Therefore, the aim of this study was to demonstrate the importance of developmental screening for preterm children in a remote area and to draw the government's attention to the issue. In Pingtung, children suspected of developmental delay are advised to undergo a skilled screening evaluation in our hospital. Preterm children (under one year of age) who visited our pediatric clinic were also referred for developmental evaluation. After a physiatrist's systematic evaluation, the subjects were scheduled for a developmental evaluation covering gross motor skills, fine motor skills, speech comprehension/expression, and mental development. The evaluation was conducted by a physical therapist, an occupational therapist, a speech therapist, and a pediatric psychologist, using the Peabody Developmental Motor Scales, the Bayley Scales of Infant and Toddler Development (Bayley-III), and the Wechsler Preschool and Primary Scale of Intelligence-Revised (WPPSI-R). In 2013, 459 children received this service in our hospital. Among them, fifty-seven had a history of preterm birth (gestation under 37 weeks). Thirty-six of these preterm children (twenty-six male and ten female), who had never received a developmental evaluation, were included in this study. Nineteen subjects showed developmental delay and six showed suspected developmental delay. In gross motor skills, six subjects showed delay and eight suspected delay. In fine motor skills, five subjects showed delay and three suspected delay. In speech, sixteen subjects showed delay and six suspected delay. Through the provision of this developmental evaluation service, 72.2% of the preterm babies were found to have developmental delay or suspected delay; they need further early-intervention rehabilitation services. We helped their parents realize that developmental delays recognized at an early stage are often reversible, and not only the patients but also their families improved their health status. The number of subjects in our study was limited, and further study may be needed. Compared with 770 physical therapists (PT) and 370 occupational therapists (OT) in Taipei, there are only 108 PT and 54 OT in Pingtung, and far fewer therapists work in the field of pediatric rehabilitation. Living healthily is a human right, no matter where one lives. For children with developmental delay in remote areas, particularly preterm children, early detection and early-intervention rehabilitation services can play an important role in decreasing disability and improving quality of life. Based on this study, we suggest that the government allocate more national resources to developmental evaluation for preterm children in remote areas.

Keywords: development, early intervention, preterm children, rehabilitation

Procedia PDF Downloads 415
74 Pre-Industrial Local Architecture According to Natural Properties

Authors: Selin Küçük

Abstract:

Pre-industrial architecture is the integration of natural and subsequent properties through intelligence and experience. Since various settlements industrialized, or did not, at different times, the term 'pre-industrial' does not refer to a definite period. Natural properties, the existing conditions and materials of the local environment, are climate, geomorphology, and local materials. Subsequent properties, all of them anthropological, are the culture of societies, the requirements of people, and the construction techniques people use. After industrialization, technology took technique's place, cultural effects were manipulated, requirements changed, and local and natural properties almost disappeared from architecture. Technology is universal and global and spreads easily; technique, by contrast, depends on time and experience and requires a considerable cultural background. This research concerns construction techniques shaped by the natural properties of a region and the classification of these techniques. Understanding local architecture is only possible by investigating its background, which is hard to reach. Architectural techniques change over time, for better and for worse. The archaeological layers of a region sometimes give more accurate information about the transformation of its architecture; however, the natural properties of a region are the most helpful elements for understanding its construction techniques. Many international sources from different cultures address local architecture and mention natural properties separately; unfortunately, no literature deals with this subject systematically. This research aims to develop a clear perspective on local architecture by categorizing archetypes according to natural properties. The ultimate goal of this research is to generate a classification of local architecture worldwide, independent of subsequent (anthropological) properties, in the form of a handbook. Since local architecture is the most sustainable architecture with respect to its economic, ecological, and sociological properties, there should be extensive information about its construction techniques to learn from. Constructing the same buildings all over the world is one of the main criticisms of the modern architectural system, yet while this criticism continues, identical buildings without identity multiply. In the post-industrial era, technology has widely taken technique's place, cultural effects are manipulated, requirements have changed, and natural local properties have almost disappeared from architecture. This study does not urge architects to use local techniques; rather, it traces the progress of pre-industrial architectural evolution, which was healthier, cheaper, and more natural. If migration from rural areas to developing and developed cities were prevented, culture and construction techniques could be preserved. Since big cities exert psychological, emotional, and sociological pressures on people, rural settlers could be persuaded not to migrate by providing new buildings designed according to natural properties and by maintaining their settlements. Improving rural conditions would narrow the economic and sociological gulf between cities and the countryside. The desired result is that, where a climatic region has seen no deformation (the adaptation of other traditional building types through immigration) and no assimilation, very similar solutions should appear in the same climatic regions of the world even where there is no relationship (trade, communication, etc.) between them.

Keywords: climate zones, geomorphology, local architecture, local materials

Procedia PDF Downloads 397
73 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest of the work: the debate raises questions that are at the heart of theories of language. It is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to dialogue between the social and cognitive sciences, artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation, and artificial intelligence. When this theory is applied to any text of a folk song in a tone language, it reconstructs not only the exact melody, rhythm, and harmonies of that song, as if they were known in advance, but also the exact pronunciation of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. To confirm the theory experimentally, the author designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music-notation software to collect data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a structured song text (chorus-verse) on a computer and requests from the machine a melody in blues, jazz, world music, variety, etc. The software runs, offers a choice of harmonies, and the user then selects a melody.
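
The core mechanism described above can be suggested with a toy sketch. The tone marks and the tone-to-pitch mapping below are invented for illustration; they are not the author's application, dictionary, or theory, only a minimal picture of mapping lexical tones onto pitches so a melody follows the speech melody.

```python
# Toy illustration (ours, not the author's application): map the lexical
# tones of a tone-language text onto musical pitches. The tone labels and
# the pitch assignments are hypothetical.
TONE_TO_PITCH = {
    "H": "G4",  # high tone  -> higher pitch
    "M": "E4",  # mid tone   -> middle pitch
    "L": "C4",  # low tone   -> lower pitch
}

def text_to_melody(tone_marked_syllables):
    """Map a sequence of (syllable, tone) pairs to (syllable, note) pairs."""
    return [(syll, TONE_TO_PITCH[tone]) for syll, tone in tone_marked_syllables]

# A hypothetical tone-annotated line of a folk song.
line = [("ba", "H"), ("la", "M"), ("fon", "L"), ("do", "H")]
print(text_to_melody(line))
```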

Keywords: music, entanglement, language, science

Procedia PDF Downloads 53
72 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which in turn reflect on the entire scope of the cognitive sciences. Knowledge representation methods and tools derive from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, causing critical methodological deficiencies in the conceptual theories of human knowledge and of knowledge representation. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are greatly affected by the materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These deficiencies are not confined to applications of knowledge representation theories throughout AI; they also extend to the scientific nature of the cognitive sciences. The methodological deficiencies we investigate in this work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and theories of meaning; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that these parameters are easy to incorporate into knowledge representation systems, but outlining a deficiency caused by their absence can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. Our findings indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as evaluation criteria for determining AI's capability to achieve its ultimate objectives. Ultimately, we argue that our findings do not suggest that scientific progress has reached its peak, or that human scientific evolution has reached a point where it is no longer possible to discover facts about the human brain and how it represents knowledge; they simply imply that, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
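
To make the two-valued-logic deficiency concrete, the following is a minimal sketch (ours, not the paper's) of Kleene's strong three-valued logic, one classical response to the limitation: a third value marks propositions a knowledge base cannot yet decide, so tautologies of classical logic no longer come out automatically true.

```python
# Kleene's strong three-valued logic (K3): TRUE, FALSE, and an UNKNOWN
# value for propositions whose truth a knowledge base cannot yet decide.
# Encoding truth values numerically lets min/max implement and/or.
TRUE, UNKNOWN, FALSE = 1.0, 0.5, 0.0

def k3_not(a):
    return 1.0 - a

def k3_and(a, b):
    return min(a, b)

def k3_or(a, b):
    return max(a, b)

# Classically, "p or not p" is always TRUE; in K3 it is UNKNOWN when p is,
# so the system does not overcommit on undecided knowledge.
p = UNKNOWN
print(k3_or(p, k3_not(p)))  # 0.5, not 1.0
```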

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 82
71 Walking in a Web of Animality: An Animality Informed Ethnography for an Inclusive Coexistence With (Other) Animals

Authors: Francesco De Giorgio

Abstract:

As different groups of wild animals move from natural to more anthropic environments, the need to overcome the human-animal gap for an ethical coexistence becomes a public concern. Ethology and ethnography play fundamental roles in understanding the dynamics, perspective, and movement in our interactions with (other) animals. In this effort, the Animality perspective provides an essential ethical lens and quality guidance for ethnography. It deconstructs the human/animal distinction and creates an inclusive approach to society. It further transgresses the rigid lines of normalizing images in human cultures, in which individuals are easily marginalized as 'different'. Just as labeling an animal with species-specific behavior is easier than recognizing its subjectivity, so is judging and categorizing humans according to culture-specific expectations. A fusion of anti-speciesist ethology and the ethnography of the natural and social sciences can redress the shortcomings of current practices of multispecies ethnography, which largely remain within an exclusively normalized human perspective. Empirically, the paper is based on current research on wild urban animals and human movement in Genoa, Italy, combining systematic field observations of wild boars with ethnographic data collected over 18 months, during which the humans involved were educated in a changing perspective on coexistence. An 'animality ethnography' starts from observing our own animal movement: how much and when we move, how our movement intersects with that of the animals cohabiting with us, how we can observe and know others by moving, and our ways of walking. The research shows how (interspecies) socio-cognition implies motion and movement, and how animal journeys between nature and the city, and within cities themselves, form a web of motion that becomes the basic cultural matrix for cohabiting spaces, places, and systems. Here, the term 'cognition' does not refer just to the brain, mind, or intelligence. Cognition has a great deal to do with movement, space, motion, proprioception, and the body: the ability to be informed not only through what you see but also through being in tune with the motion of a shared dynamic; to be an informative presence rather than an active stimulus or a passive expectation, the latter of which leaves too much room for projections and interpretations. What is proposed here is an understanding of our own animal movement linked to our own animal cognition. Breaking down one's culturally prescribed way of moving in ethnographic research breaks the barrier of limited options for observing and comprehending the Other. Walking in the same way results in seeing others in the same way, studying them through only one channel of perception and producing a one-dimensional life instead of a multidimensional web. Returning to an understanding of our Animality and our animal movement, and being in tune so as to improve a socio-cognitive context of cohabitation, with both domestic and wild animals, in a forest or in a metropolis, represents the challenge of the coming years and the evolution of the next centuries: to preserve and share cultures beyond the boundaries of species.

Keywords: antispeciesist ethology, interspecies coexistence, socio-cognition, intersectionality, animality

Procedia PDF Downloads 44
70 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models, which relate a strong-motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions, constitute a critical component of seismic hazard analyses. When a sufficient number of strong-motion records are available, ground motion relations are developed through statistical analysis of the recorded data. In regions lacking sufficient recordings, a synthetic database is developed using stochastic, theoretical, or hybrid approaches. Regardless of how the database was developed, ground motion relations are developed using regression analysis. Developing a ground motion relation is a challenging process that inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to include. Because these decisions are critically important to the validity and applicability of the model, there is continuing interest in procedures that facilitate the development of ground motion models. This paper proposes using the Least Absolute Shrinkage and Selection Operator (LASSO) to select the set of predictive seismological variables used in developing a ground motion relation. The LASSO is a penalized regression technique with a built-in capability for variable selection. Like ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set exactly to zero, the LASSO sets some coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, the LASSO allows the input variables to be ranked by relative importance, facilitating the selection of the variables to include in the model. Because the risk of overfitting increases with the ratio of the number of predictors to the number of recordings, selecting a compact set of variables is important when few recordings are available. In addition, a small set of variables can improve the interpretability of the resulting model, especially when there are many candidate predictors. A practical application of the proposed approach is presented using more than 600 recordings from the Next Generation Attenuation (NGA) database, investigating the effect of a set of seismological predictors on the 5%-damped maximum-direction spectral acceleration. The candidate predictors considered include magnitude, Rrup, and Vs30. Using the LASSO, the relative importance of the candidate predictors was ranked. Regression models of increasing complexity were then constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable was compared. The bias-variance trade-off in the context of model selection is discussed.
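
The ranking procedure described above can be sketched in a few lines. The snippet below is a minimal illustration using scikit-learn's `lasso_path` on a synthetic dataset; the predictor names come from the abstract, but the data, coefficients, and entry-order ranking rule are our assumptions, not the paper's data or code.

```python
# Sketch of LASSO-based predictor ranking on synthetic strong-motion-like
# data. Predictor names follow the abstract; everything else is invented.
import numpy as np
from sklearn.linear_model import lasso_path
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 600  # roughly the number of recordings used in the study

# Hypothetical seismological predictors.
X = np.column_stack([
    rng.uniform(4.0, 7.5, n),       # magnitude
    rng.uniform(1.0, 200.0, n),     # Rrup, rupture distance (km)
    rng.uniform(150.0, 1200.0, n),  # Vs30 (m/s)
])
# Toy target standing in for log spectral acceleration, plus noise.
y = 1.2 * X[:, 0] - 0.01 * X[:, 1] - 0.001 * X[:, 2] + rng.normal(0, 0.3, n)

# Standardize so coefficient paths are comparable across predictors.
Xs = StandardScaler().fit_transform(X)
alphas, coefs, _ = lasso_path(Xs, y)  # coefs has shape (n_features, n_alphas)

# Rank predictors by the penalty level at which each coefficient first
# becomes nonzero: variables entering the path earlier matter more.
entry_alpha = [alphas[np.nonzero(coefs[j])[0][0]] if coefs[j].any() else 0.0
               for j in range(Xs.shape[1])]
names = ["Magnitude", "Rrup", "Vs30"]
ranking = [names[j] for j in np.argsort(entry_alpha)[::-1]]
print(ranking)
```

With the coefficients chosen here, magnitude carries the strongest standardized effect and enters the path first, mirroring how the method surfaces a compact predictor set before fitting models of increasing complexity.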

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 305
69 Nurse Participation for the Economical Effectiveness in Medical Organizations

Authors: Alua Masalimova, Dameli Sulubecova, Talgat Isaev, Raushan Magzumova

Abstract:

In Kazakhstan, heads of medical organizations have traditionally used nurses only to perform medical manipulations, but new economic conditions require the introduction of nursing innovations. There is an increasing need for the managers of hospital departments and of regional ambulatory clinics to ensure comfortable conditions for doctors, nurses, and aides, and to monitor marketing indicators (the needs and job satisfaction of staff, and patient satisfaction with the department). The days when nursing meant passively carrying out a physician's prescriptions are passing. We suggest a model for developing the head nurse as a manager, using the Blood Service as an example. At the scientific-production center of blood transfusion, we studied head nurses using a standard interview method, focusing on their involvement in coordinating the flow of information and promoting the competitiveness of their department. Results: the average age of the respondents was 43.1 ± 9.8 years; 100% were female; average managerial tenure in the organization was 9.3 ± 10.3 years. Only 14.2% knew which nearby facilities provide similar medical services, and none (100%) knew the cost of similar services at competing organizations; 85.7% had not studied employee satisfaction in their division; staff satisfaction with donor work had been studied in 50.0% of cases; and 28.5% of respondents were involved in attracting paid services to their division. Participation in the medical organization's management decisions was as follows: strategic planning, 14.2%; preparation of the annual analytical report, 14.2%; recruitment, 30.0%; equipment, 14.2%. In addition, 85.0% of senior nurses participated in the social and technical design of their division staff's workplaces, and 10.0% of respondents used team-building methods to strengthen the cohesion of their division's staff. Further, we studied the behavioral competencies of the senior nurses: 20.0% of respondents demonstrated customer focus, and 40.0% the ability to work in a team. Their evident personal qualities were sociability (80.0%), the ability to manage information (40.0%), the ability to make independent decisions (14.2%), creativity (28.5%), and the desire to improve their professionalism (50.0%). Thus, modern market conditions dictate that an organization operating under economic-management principles include, among the competencies of the senior nurse position, the knowledge and skills of marketing management: the ability to analyze the information collected and to present management proposals to the organization's senior medical leadership. When recruiting senior nurses, medical organizations should take into account personal qualities such as flexibility, fluency of thinking, communication skills, and the ability to work in a team, as well as leadership, ambition, and high emotional and social intelligence, which will make the medical unit competitive both within the country and abroad.

Keywords: blood service, head nurse, manager, skills

Procedia PDF Downloads 226
68 Change of Education Business in the Age of 5G

Authors: Heikki Ruohomaa, Vesa Salminen

Abstract:

Regions face huge competition to attract companies, businesses, inhabitants, students, and so on, and in this way to improve a living and business environment that is changing rapidly due to digitalization. From industry's point of view, the availability of a skilled labor force and an innovative environment are crucial factors, and qualified staff are expected to exploit the opportunities of digitalization and respond to future skill needs. The World Manufacturing Forum stated in its 2019 report that within the next five years, 40% of workers will have to change their core competencies. Through digital transformation, with new technologies such as cloud, mobile, big data, 5G infrastructure, platform technology, data analysis, and social networks with increasing intelligence and automation, enterprises can capitalize on new opportunities and optimize existing operations to achieve significant business improvement. Digitalization will be an important part of citizens' everyday lives and present in the working day of the average employee of the future. For that reason, the education system and education programs at all levels, from diaper age to doctorate, have been directed toward fulfilling this ecosystem strategy. Goal: The Fourth Industrial Revolution will bring unprecedented change to societies, educational organizations, and business environments. This article aims to identify how education, its content, the way it is delivered, and the education business as a whole are changing, and, most importantly, how we should respond to this inevitable co-evolution. Methodology: The study aims to verify how the learning process is boosted by new digital content, new learning software and tools, and customer-oriented learning environments. Changes to education programs and individual education modules can be supported by applied research projects, which can be used to build proofs of concept of new technologies and new ways to teach and train, and, through the experience gathered, to change education content, the way of educating, and finally the education business as a whole. Major findings: Applied research projects can prove concept phases in real-environment field labs, testing technology opportunities and new tools for training purposes. Customer-oriented applied research projects are also excellent opportunities for students to complete assignments using new knowledge and content, and for teachers to test new tools and create new ways to educate. New content and problem-based learning are used in future education modules. This article introduces case-study experiences from customer-oriented digital transformation projects and shows how the knowledge gathered on new digital content and new ways of educating has influenced education. The case study draws on the experiences of research projects, customer-oriented field labs and learning environments, and the education programs of Häme University of Applied Sciences.

Keywords: education process, digitalization content, digital tools for education, learning environments, transdisciplinary co-operation

Procedia PDF Downloads 150
67 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule appointments and manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration impairs the clinical scheduler's ability to estimate appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction, while appointment durations longer than required lead to doctor idle time and fewer patient visits. A good estimate of consultation duration therefore has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. This study therefore aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) from potential features that influence it. ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four outpatient clinics in central Pennsylvania, USA; publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, and gradient boosting machine) to predict the treatment time required for a patient and compares their predictive performance. The findings indicate that ML algorithms can predict provider service time with superior accuracy. While the clinic's current experience-based estimation of appointment duration resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance, with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration: patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system to manage patient scheduling effectively.
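
The kind of comparison described above can be sketched as follows: a gradient boosting model against a naive fixed-duration baseline, both scored by MAPE. The synthetic data, feature set, and effect sizes below are illustrative assumptions; they do not reproduce the study's EMR data or reported error rates.

```python
# Sketch: predict consultation duration with gradient boosting and compare
# it, by MAPE, to a naive fixed-duration baseline. All data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
new_patient = rng.integers(0, 2, n)   # patient type: new vs. established
experience = rng.uniform(1, 30, n)    # doctor's years of experience
day = rng.integers(0, 5, n)           # appointment weekday

# Toy consultation duration (minutes): longer for new patients, shorter
# for more experienced doctors, plus noise. Effect sizes are invented.
duration = 20 + 10 * new_patient - 0.3 * experience + rng.normal(0, 3, n)

X = np.column_stack([new_patient, experience, day])
X_tr, X_te, y_tr, y_te = train_test_split(X, duration, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Baseline analogous to a fixed experience-based slot length: predict the
# training-set mean duration for every appointment.
baseline = np.full_like(y_te, y_tr.mean())
print(f"baseline MAPE: {mape(y_te, baseline):.1f}%")
print(f"GBM MAPE:      {mape(y_te, pred):.1f}%")
```

Because the model can exploit patient type and experience, its MAPE falls well below the fixed-slot baseline, which is the shape of the improvement the study reports over experience-based estimation.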

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 91
66 Navigating AI in Higher Education: Exploring Graduate Students’ Perspectives on Teacher-Provided AI Guidelines

Authors: Mamunur Rashid, Jialin Yan

Abstract:

The current years have witnessed a rapid evolution and integration of artificial intelligence (AI) in various fields, prominently influencing the education industry. Acknowledging this transformative wave, AI tools like ChatGPT and Grammarly have undeniably introduced perspectives and skills, enriching the educational experiences of higher education students. The prevalence of AI utilization in higher education also drives an increasing number of researchers' attention in various dimensions. Departments, offices, and professors in universities also designed and released a set of policies and guidelines on using AI effectively. In regard to this, the study targets exploring and analyzing graduate students' perspectives regarding AI guidelines set by teachers. A mixed-methods study will be mainly conducted in this study, employing in-depth interviews and focus groups to investigate and collect students' perspectives. Relevant materials, such as syllabi and course instructions, will also be analyzed through the documentary analysis to facilitate understanding of the study. Surveys will also be used for data collection and students' background statistics. The integration of both interviews and surveys will provide a comprehensive array of student perspectives across various academic disciplines. The study is anchored in the theoretical framework of self-determination theory (SDT), which emphasizes and explains the students' perspective under the AI guidelines through three core needs: autonomy, competence, and relatedness. This framework is instrumental in understanding how AI guidelines influence students' intrinsic motivation and sense of empowerment in their learning environments. Through qualitative analysis, the study reveals a sense of confusion and uncertainty among students regarding the appropriate application and ethical considerations of AI tools, indicating potential challenges in meeting their needs for competence and autonomy. 
The quantitative data further elucidate these findings, highlighting a significant communication gap between students and educators in the formulation and implementation of AI guidelines. The critical findings of this study concern two aspects. First, the majority of graduate students are uncertain and confused about the AI guidelines given by their teachers. Second, the design and effectiveness of course materials, such as syllabi and instructions, also need to adapt to AI policies: some of the existing guidelines provided by teachers lack consideration of students' perspectives, leading to a misalignment with students' needs for autonomy, competence, and relatedness. More emphasis and effort need to be dedicated to training both teachers and students on AI policies and ethical considerations. In conclusion, this study explores and reflects on graduate students' perspectives on teacher-provided AI guidelines, calling for additional training and strategies so that these guidelines can be better disseminated, integrated, and adopted. Although AI guidelines provided by teachers may be helpful and offer new insights for students, educational institutions should take a stronger anchoring role in fostering a motivating, empowering, and student-centered learning environment. The study also provides relevant recommendations, including guidance for students on the ethical use of AI and AI-policy training for teachers in higher education.

Keywords: higher education policy, graduate students’ perspectives, higher education teacher, AI guidelines, AI in education

Procedia PDF Downloads 36
65 Digital Skepticism In A Legal Philosophical Approach

Authors: Dr. Bendes Ákos

Abstract:

Digital skepticism, a critical stance towards digital technology and its pervasive influence on society, presents significant challenges when analyzed from a legal philosophical perspective. This abstract aims to explore the intersection of digital skepticism and legal philosophy, emphasizing the implications for justice, rights, and the rule of law in the digital age. Digital skepticism arises from concerns about privacy, security, and the ethical implications of digital technology. It questions the extent to which digital advancements enhance or undermine fundamental human values. Legal philosophy, which interrogates the foundations and purposes of law, provides a framework for examining these concerns critically. One key area where digital skepticism and legal philosophy intersect is in the realm of privacy. Digital technologies, particularly data collection and surveillance mechanisms, pose substantial threats to individual privacy. Legal philosophers must grapple with questions about the limits of state power and the protection of personal autonomy. They must consider how traditional legal principles, such as the right to privacy, can be adapted or reinterpreted in light of new technological realities. Security is another critical concern. Digital skepticism highlights vulnerabilities in cybersecurity and the potential for malicious activities, such as hacking and cybercrime, to disrupt legal systems and societal order. Legal philosophy must address how laws can evolve to protect against these new forms of threats while balancing security with civil liberties. Ethics plays a central role in this discourse. Digital technologies raise ethical dilemmas, such as the development and use of artificial intelligence and machine learning algorithms that may perpetuate biases or make decisions without human oversight. 
Legal philosophers must evaluate the moral responsibilities of those who design and implement these technologies and consider the implications for justice and fairness. Furthermore, digital skepticism prompts a reevaluation of the concept of the rule of law. In an increasingly digital world, maintaining transparency, accountability, and fairness becomes more complex. Legal philosophers must explore how legal frameworks can ensure that digital technologies serve the public good and do not entrench power imbalances or erode democratic principles. Finally, the intersection of digital skepticism and legal philosophy has practical implications for policy-making. Legal scholars and practitioners must work collaboratively to develop regulations and guidelines that address the challenges posed by digital technology. This includes crafting laws that protect individual rights, ensure security, and promote ethical standards in technology development and deployment. In conclusion, digital skepticism provides a crucial lens for examining the impact of digital technology on law and society. A legal philosophical approach offers valuable insights into how legal systems can adapt to protect fundamental values in the digital age. By addressing privacy, security, ethics, and the rule of law, legal philosophers can help shape a future where digital advancements enhance, rather than undermine, justice and human dignity.

Keywords: legal philosophy, privacy, security, ethics, digital skepticism

Procedia PDF Downloads 3
64 The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities

Authors: Evgeniya V. Gavrilova, Sofya S. Belova

Abstract:

The present study aimed to clarify the relationship between general intellectual abilities and efficiency in a free recall task and a rhymed-word generation task after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies. In this context, it is crucial to examine how efficiently incidentally presented information is processed in a cognitive task, and how this relates to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words; each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was presented in each pair, so the words' semantics were processed intentionally. The city names were considered focal stimuli, whereas the common nouns were considered peripheral stimuli. In addition, each pair of words could be rhymed or not rhymed, but this phonemic characteristic of the stimuli was processed incidentally. Participants were then asked to produce as many rhymes as they could to new words; the stimuli presented earlier could be used as well. After that, participants had to retrieve all the words presented earlier. Finally, verbal and non-verbal abilities were measured with a number of psychometric tests. In the free recall task, the intentionally processed focal stimuli had an advantage in recall over the peripheral stimuli, and all rhymed stimuli were recalled more effectively than non-rhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral rather than focal stimuli; furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task. 
Thus, the incidentally processed information had a supplementary influence on the efficiency of stimulus processing in both the free recall and the word generation tasks. Different patterns of correlations between intellectual abilities and processing efficiency were revealed in the two tasks. Non-verbal reasoning ability correlated positively with free recall of peripheral rhymed stimuli but was not related to performance on the rhymed-word generation task. Verbal reasoning ability correlated positively with free recall of focal stimuli; in the rhymed-word generation task, it correlated negatively with generation from focal stimuli and positively with generation from all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks, so incidental information processing appears to be crucial for subsequent cognitive performance. Second, incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively in subsequent cognitive tasks by participants with high verbal reasoning ability. This implies that general intellectual abilities benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the Grant of the President of RF for young PhD scientists (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities.
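The correlational analyses reported above can be illustrated with a minimal Pearson product-moment computation. This is only a sketch of the statistic involved; the scores below are invented placeholders, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: non-verbal reasoning vs. recall of peripheral rhymed stimuli
reasoning = [95, 110, 102, 120, 88, 105]
recall = [4, 7, 5, 9, 3, 6]
r = pearson_r(reasoning, recall)
```

A positive r here would mirror the reported link between non-verbal reasoning and recall of peripheral rhymed stimuli.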

Keywords: focal and peripheral stimuli, general intellectual abilities, incidental information processing

Procedia PDF Downloads 211
63 The Challenges of Citizen Engagement in Urban Transformation: Key Learnings from Three European Cities

Authors: Idoia Landa Oregi, Itsaso Gonzalez Ochoantesana, Olatz Nicolas Buxens, Carlo Ferretti

Abstract:

The impact of citizens on urban transformations has become increasingly important in the pursuit of creating citizen-centered cities. Citizens at the forefront of the urban transformation process are key to establishing resilient, sustainable, and inclusive cities that cater to the needs of all residents. Therefore, collecting data and information directly from citizens is crucial for the sustainable development of cities. Within this context, public participation becomes a pillar for acquiring the necessary information from citizens. Public participation in urban transformation processes establishes a more responsive, equitable, and resilient urban environment. This approach cultivates a sense of shared responsibility and collective progress in building cities that truly serve the well-being of all residents. However, the implementation of public participation practices often overlooks strategies to effectively engage citizens, resulting in unsuccessful participatory outcomes. Therefore, this research focuses on identifying and analyzing the critical aspects of citizen engagement during the same participatory urban transformation process in three different European contexts: Ermua (Spain), Elva (Estonia), and Matera (Italy). The participatory neighborhood regeneration process is divided into three main stages, aiming to turn social districts into inclusive and smart neighborhoods: (i) the strategic level, (ii) the design level, and (iii) the implementation level. In the initial stage, the focus is on diagnosing the neighborhood and creating a shared vision with the community. The second stage centers on collaboratively designing action plans that foster inclusivity and intelligence while promoting local economic development within the district. Finally, the third stage ensures the proper co-implementation of the designed actions in the neighborhood. 
To date, the results presented critically analyze the key aspects of engagement in the first stage of the methodology, the strategic plan, in the three contexts mentioned above. This multifaceted study incorporates three case studies to shed light on the perspectives and strategies adopted by each city. The results indicate that, despite the different cultural contexts, all three cities face similar barriers when seeking to enhance engagement. Accordingly, the study identifies specific challenges within the participatory approach across the three cities, such as discontented citizens, communication gaps, inconsistent participation, and administrative resistance. Consequently, the key learnings of the process indicate that a collaborative sphere needs to be cultivated, educating both citizens and administrations in co-governance and giving these practices appropriate space and their own communication channels. This study is part of the DROP project, funded by the European Union, which aims to develop a citizen-centered urban renewal methodology to transform social districts into smart and inclusive neighborhoods.

Keywords: citizen-centred cities, engagement, public participation, urban transformation

Procedia PDF Downloads 29
62 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods require an exhaustive stochastic model of the variables involved in the assessment, and standards for modeling these variables are currently absent, which represents an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information from the design stage and on qualitative judgments drawn from the engineer's past experience; the prior model is then updated with the results of investigations carried out on the structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two kinds of problems: 1. the result of the updating depends on the engineer's previous experience; 2. the prior PDF can be updated only if the structure has been tested and quantitative data suitable for statistical treatment have been collected; performing tests is always an expensive and time-consuming operation, and if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. To solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can automate the modeling of variables and update material parameters without destructive tests. Among these, one of particular relevance to this study is Case-Based Reasoning (CBR). 
In this application, cases are represented by existing buildings where material tests have already been carried out and updated PDFs for the mechanical material parameters have been computed through a Bayesian analysis. Each case thus comprises a qualitative description of the material under assessment and the posterior PDFs that describe its properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modeling of variables because: 1. engineers already estimate material properties based on experience collected during the assessment of similar structures, or on similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also support the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR is a technique that may help achieve this.
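The Bayesian updating step that produces each case's posterior PDF can be sketched for the simplest conjugate situation: a normal prior on a material parameter's mean, updated with test results assumed normally distributed with known measurement scatter. The prior values and "test results" below are invented for illustration, not taken from any real assessment.

```python
import statistics

def update_normal_prior(prior_mean, prior_sd, samples, meas_sd):
    """Conjugate update of a normal prior on a mean, with known measurement sd.

    Returns the posterior mean and posterior standard deviation.
    """
    n = len(samples)
    xbar = statistics.fmean(samples)
    prior_prec = 1.0 / prior_sd ** 2       # precision = 1 / variance
    data_prec = n / meas_sd ** 2           # precision contributed by n tests
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * xbar) / post_prec
    return post_mean, (1.0 / post_prec) ** 0.5

# Hypothetical prior for a concrete compressive strength (MPa), from design
# documents and engineering judgment, updated with three invented core tests.
post_mean, post_sd = update_normal_prior(30.0, 5.0, [27.5, 29.0, 28.2], 3.0)
```

The posterior standard deviation is always smaller than the prior's, which is exactly the narrowing of uncertainty that a retrieved CBR case would carry.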

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 317
61 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at the community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and therefore further complicated the methods used to forecast short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting, paired with a proper HEMS, leads to higher energy savings and overall household energy efficiency. In order to study how COVID-19 has affected the accuracy of forecasting methods, the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) is evaluated for day-ahead electric load forecasting during the transition between the pre-pandemic and lockdown periods. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method in challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called "resilient LGBM", is also tested; it incorporates a concept drift detection technique for time series analysis, in order to evaluate its capability to improve the model's accuracy during extreme events such as the COVID-19 lockdowns. 
The results for the LGBM and resilient LGBM are compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models' performance is evaluated on a set of real households' hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d'Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through tools such as HEMS and artificial intelligence.
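The abstract does not name the drift detector used in the resilient LGBM; one common choice for flagging a shift in a stream of forecast errors is the Page-Hinkley test. The sketch below is an illustrative stand-in under that assumption, together with the RMSE metric mentioned above; the error stream is invented to mimic a jump at "lockdown", and the parameter values are arbitrary.

```python
class PageHinkley:
    """Page-Hinkley test: flags an upward shift in the mean of a stream
    (here, a stream of absolute forecast errors)."""
    def __init__(self, delta=0.05, threshold=5.0):
        self.delta = delta          # tolerated drift magnitude
        self.threshold = threshold  # alarm threshold (lambda)
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0
        self.cum_min = 0.0

    def update(self, error):
        self.n += 1
        self.mean += (error - self.mean) / self.n
        self.cum += error - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.threshold  # True -> drift

def rmse(actual, predicted):
    """Root mean squared error, the comparison metric used in the study."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

# Invented error stream: small pre-lockdown errors, then a jump at "lockdown".
errors = [0.1, 0.2, 0.15, 0.1, 0.2] + [2.0, 2.5, 2.2, 2.4, 2.6]
ph = PageHinkley(delta=0.05, threshold=2.0)
drift_at = next((i for i, e in enumerate(errors) if ph.update(e)), None)
```

In a resilient setup, a detected drift would typically trigger model retraining on the most recent window of data.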

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 82
60 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions related to the safety of industrial infrastructure, accidental risk values become relevant points of discussion. However, the challenge lies in the reliability of the models used to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper introduces a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional technologies in energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms face. Moreover, the adaptation of those technologies to the effects of climate change in sensitive environments is a critical concern for safety and risk management. In this respect, the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition, considering the industrial sector as critical infrastructure, given its large impact on the economy in case of failure, industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. 
However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper introduces a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the deep learning method for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
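The fuzzy side of a neuro-fuzzy system, imitating how an expert scores risk from imprecise inputs, can be sketched with a tiny Mamdani-style rule base. This is not the authors' GMDH-trained model; the membership shapes, rules, and output levels below are illustrative assumptions, with both inputs normalized to a 0-1 scale.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_risk(likelihood, severity):
    """Tiny Mamdani-style rule base: a 0-1 risk score from two 0-1 inputs.

    Rule weights and membership shapes are illustrative placeholders.
    """
    low_l, high_l = tri(likelihood, -0.5, 0.0, 0.6), tri(likelihood, 0.4, 1.0, 1.5)
    low_s, high_s = tri(severity, -0.5, 0.0, 0.6), tri(severity, 0.4, 1.0, 1.5)
    # Rules: (activation strength, risk level of the consequent singleton)
    rules = [
        (min(low_l, low_s), 0.1),    # low likelihood & low severity -> low risk
        (min(low_l, high_s), 0.5),
        (min(high_l, low_s), 0.5),
        (min(high_l, high_s), 0.9),  # high likelihood & high severity -> high risk
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0   # weighted-average defuzzification
```

In a neuro-fuzzy system, the "neuro" part (here, the GMDH regression the paper names) would tune these membership parameters and rule weights from near-miss data rather than fixing them by hand.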

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 267
59 Measuring Firms’ Patent Management: Conceptualization, Validation, and Interpretation

Authors: Mehari Teshome, Lara Agostini, Anna Nosella

Abstract:

The current knowledge-based economy extends intellectual property rights (IPRs) legal research themes into more strategic and organizational perspectives. Among the diverse types of IPRs, patents are the strongest and best-known form of legal protection and influence commercial success and market value. Indeed, from our pilot survey, we understood that firms are less likely to manage their patents and actively use them as a tool for achieving competitive advantage; rather, they invest resources and effort in patent applications. In this regard, the literature also confirms that insights into how firms manage their patents from a holistic, strategic perspective, and into how the value of patent portfolios can be optimized, are scarce. Although patent management is an important business tool and a few scales exist to measure some of its dimensions, to the best of our knowledge no systematic attempt has been made to develop a valid and comprehensive measure of it. From this theoretical and practical point of view, the aim of this article is twofold: to develop a framework for patent management encompassing all relevant dimensions, with their respective constructs and measurement items, and to validate the measure using survey data from practitioners. Methodology: We used a six-step methodological approach (i.e., specification of the construct domain, item generation, scale purification, internal consistency assessment, scale validation, and replication). Accordingly, we carried out a systematic review of 182 articles on patent management from the ISI Web of Science. For each article, we mapped the relevant constructs, their definitions and associated features, and the items used to measure them, when provided. This theoretical analysis was complemented by interviews with experts in patent management to obtain more practical feedback on how patent management is carried out in firms. 
Afterwards, we carried out a questionnaire survey to purify our scales and perform the statistical validation. Findings: The analysis allowed us to design a framework for patent management, identifying its core dimensions (i.e., generation, portfolio management, exploitation and enforcement, and intelligence) and support dimensions (i.e., strategy and organization). Moreover, we identified the relevant activities for each dimension, as well as the most suitable items to measure them. For example, the core dimension "generation" includes constructs such as state-of-the-art analysis, freedom-to-operate analysis, patent watching, securing freedom to operate, patent potential, and patent geographical scope. Originality and Contribution: This study represents a first step towards the development of sound scales to measure patent management with an overarching approach, thus laying the basis for a recognized landmark within the research area of patent management. Practical Implications: The new scale can be used to assess the sophistication of a company's patent management and compare it with other firms in the industry, evaluating their ability to manage the different activities involved in patent management. In addition, the framework resulting from this analysis can be used as a guide that supports managers in improving patent management in their firms.
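The internal consistency assessment step of scale development is typically done with Cronbach's alpha. A minimal sketch of the statistic follows; the Likert responses are invented placeholders, not the survey data described above.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale: item_scores is a list of items,
    each a list of the respondents' scores on that item."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]           # per-respondent sums
    item_var = sum(statistics.variance(item) for item in item_scores)
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - item_var / total_var)

# Invented 5-point Likert responses (4 items x 6 respondents), for illustration only.
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [3, 5, 2, 4, 1, 5],
    [4, 5, 3, 5, 2, 4],
]
alpha = cronbach_alpha(items)
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency during scale purification.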

Keywords: patent, management, scale, development, intellectual property rights (IPRs)

Procedia PDF Downloads 122
58 Knowledge Based Software Model for the Management and Treatment of Malaria Patients: A Case of Kalisizo General Hospital

Authors: Mbonigaba Swale

Abstract:

Malaria is an infection caused by parasites (Plasmodium falciparum, which causes severe malaria, Plasmodium vivax, Plasmodium ovale, and Plasmodium malariae) transmitted to humans by the bites of infected female Anopheles mosquitoes. In Africa, and particularly in Uganda, these vectors comprise two main types, Anopheles funestus and Anopheles gambiae (e.g., Anopheles arabiensis); they feed on people inside the house, mainly at dusk, midnight, and dawn, and rest indoors, which makes them effective transmitters (vectors) of the disease. People in both urban and rural areas have consistently become prone to repeated attacks of malaria, causing many deaths and significantly increasing the poverty levels of the rural poor. Malaria is a national problem: it causes many maternal prenatal and antenatal disorders, anemia in pregnant mothers, low birth weights in the newly born, and convulsions and epilepsy among infants. Cumulatively, it kills about one million children every year in sub-Saharan Africa. It has been estimated to account for 25-35% of all outpatient visits, 20-45% of acute hospital admissions, and 15-35% of hospital deaths. Uganda is the leading victim country, in which the Rakai and Masaka districts are the most affected. It is therefore not clear whether these abhorrent situations, with episodes of recurrence and failure to cure the disease, result from poor diagnosis, prescription, and dosing, from the treatment habits and compliance of the patients, or from the ethical conduct of the stakeholders within the mainstream methodology of malaria management. 
The research aims to offer an alternative approach to managing the problem by using a knowledge-based software model based on Artificial Intelligence (AI) that is capable of common-sense and cognitive reasoning, taking decisions as the human brain would in order to provide instantaneous expert solutions and avoid speculative simulation during differential diagnosis. The system will assist physicians in many kinds of medical diagnosis, in prescribing treatments and doses, and in monitoring patient responses. Based on the body weight and age group of the patient, it will be able to provide instantaneous and timely information, options, and alternative approaches to influence decision making during case analysis. This computerized approach, a new model in Uganda termed "Software Aided Treatment" (SAT), will try to change the moral and ethical approach and influence conduct so as to improve the skills, experience, and values (social and ethical) of both the patient and the health worker in the administration and management of the disease and its drugs (combination therapy and generics).
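The weight-banded dosing logic such a knowledge-based system would encode can be sketched as a simple rule table. The bands and tablet counts below are illustrative placeholders only, not clinical guidance; a real system would encode national malaria treatment guidelines and combination-therapy rules.

```python
# Hypothetical rule table -- placeholder values, NOT clinical guidance.
DOSE_RULES = [
    # (min_weight_kg, max_weight_kg, tablets_per_dose)
    (5, 15, 1),
    (15, 25, 2),
    (25, 35, 3),
    (35, float("inf"), 4),
]

def recommend_dose(weight_kg):
    """Look up the (placeholder) tablets-per-dose band for a patient's weight."""
    for lo, hi, tablets in DOSE_RULES:
        if lo <= weight_kg < hi:
            return tablets
    return None  # outside the modelled range -> refer to a clinician
```

A production system would layer further rules on top (age group, contraindications, drug availability) and log every recommendation for clinical review.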

Keywords: knowledge based software, management, treatment, diagnosis

Procedia PDF Downloads 31
57 Radish Sprout Growth Dependency on LED Color in Plant Factory Experiment

Authors: Tatsuya Kasuga, Hidehisa Shimada, Kimio Oguchi

Abstract:

Recent rapid progress in ICT (Information and Communication Technology) has advanced the penetration of sensor networks (SNs) and their attractive applications. Agriculture is one of the fields well able to benefit from ICT. Plant factories, which use computers and AI (Artificial Intelligence) to control several parameters related to plant growth in closed areas, such as air temperature, humidity, water, culture medium concentration, and artificial lighting, are being researched in order to obtain stable and safe production of vegetables and medicinal plants all year round anywhere, and to attain self-sufficiency in food. By providing isolation from the natural environment, a plant factory can achieve higher productivity and safer products. However, the biggest issue with plant factories is the return on investment: profits are tenuous because of the large initial investments and running costs, chiefly electric power. At present, LED (Light Emitting Diode) lights are being adopted because they are more energy-efficient and encourage photosynthesis better than the fluorescent lamps used in the past; however, further cost reduction is essential. This paper introduces experiments that reveal which color of LED lighting best enhances the growth of cultured radish sprouts. Radish sprouts were cultivated in an experimental environment formed by a hydroponics kit with three cultivation shelves (28 samples per shelf), each with an artificial lighting rack. Seven LED arrays of different colors (white, blue, yellow-green, green, yellow, orange, and red) were compared with a fluorescent lamp as the control. The lighting duration was set to 12 hours a day, and plain water with no fertilizer was circulated. Seven days after germination, the length, weight, and leaf area of each sample were measured. The electrical power consumption of each lighting arrangement was also measured. Results and discussion: As to average sample length, no clear difference was observed among the colors. 
As regards weight, the orange LED was less effective, and the difference was significant (p < 0.05). As to leaf area, the blue, yellow, and orange LEDs were significantly less effective. However, all LEDs offered higher productivity per watt consumed than the fluorescent lamp, and among them the blue LED array attained the best results in terms of length, weight, and leaf area per watt consumed. Conclusion and future work: An experiment on radish sprout cultivation under seven LED arrays of different colors showed no clear difference in sample size by color. However, when electrical power consumption is considered, the LEDs offered about twice the growth rate of the fluorescent lamp, with the blue LEDs showing the best performance. Further cost reduction, e.g. low-power lighting, remains a big issue for actual system deployment, and an automatic plant monitoring system with sensors is another study target.
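The productivity-per-watt comparison underlying these conclusions is a simple ratio. The figures below are invented placeholders, not the experiment's measurements, and serve only to show the form of the calculation.

```python
# Invented example figures (grams of biomass per shelf and lamp wattage),
# chosen only to illustrate the growth-per-watt comparison.
lighting = {
    "fluorescent": {"biomass_g": 30.0, "power_w": 40.0},
    "blue_led": {"biomass_g": 28.0, "power_w": 12.0},
}

def productivity_per_watt(entry):
    """Biomass produced per watt of lighting power consumed."""
    return entry["biomass_g"] / entry["power_w"]

ratios = {name: productivity_per_watt(v) for name, v in lighting.items()}
```

With figures like these, an LED array can win on efficiency even when its absolute growth is slightly lower, which is the pattern the experiment reports.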

Keywords: electric power consumption, LED color, LED lighting, plant factory

Procedia PDF Downloads 165
56 Assessment of Neurodevelopmental Needs in Duchenne Muscular Dystrophy

Authors: Mathula Thangarajh

Abstract:

Duchenne muscular dystrophy (DMD) is a severe form of X-linked muscular dystrophy caused by mutations in the dystrophin gene resulting in progressive skeletal muscle weakness. Boys with DMD also have significant cognitive disabilities. The intelligence quotient of boys with DMD, compared to peers, is approximately one standard deviation below average. Detailed neuropsychological testing has demonstrated that boys with DMD have a global developmental impairment, with verbal memory and visuospatial skills most significantly affected. Furthermore, total brain volume and gray matter volume are lower in children with DMD compared to age-matched controls. These results are suggestive of a significant structural and functional compromise to the developing brain as a result of absent dystrophin protein expression. There is also some genetic evidence to suggest that mutations in the 3’ end of the DMD gene are associated with more severe neurocognitive problems. Our working hypothesis is that (i) boys with DMD do not make gains in neurodevelopmental skills compared to typically developing children and (ii) women carriers of DMD mutations may have subclinical cognitive deficits. We also hypothesize that there may be an intergenerational vulnerability of cognition, with boys of DMD-carrier mothers being more affected cognitively than boys of non-DMD-carrier mothers. The objectives of this study are: 1. Assess neurodevelopment in boys with DMD at four time points and perform a baseline neuroradiological assessment, 2. Assess cognition in biological mothers of DMD participants at baseline, 3. Assess the possible correlation between DMD mutation and cognitive measures. This study also explores functional brain abnormalities in people with DMD by examining how regional and global connectivity of the brain underlies executive function deficits in DMD.
Such research can contribute to a better holistic understanding of the cognition alterations due to DMD and could potentially allow clinicians to create better-tailored treatment plans for the DMD population. There are four study visits for each participant (baseline, 2-4 weeks, 1 year, 18 months). At each visit, the participant completes the NIH Toolbox Cognition Battery, a validated psychometric measure that is recommended by NIH Common Data Elements for use in DMD. Visits 1, 3, and 4 also involve the administration of the BRIEF-2, ABAS-3, PROMIS/NeuroQoL, PedsQL Neuromuscular module 3.0, Draw a Clock Test, and an optional fMRI scan with the N-back matching task. We expect to enroll 52 children with DMD, 52 mothers of children with DMD, and 30 healthy control boys. This study began in 2020 during the height of the COVID-19 pandemic. Due to this, there were subsequent delays in recruitment because of travel restrictions. However, we have persevered and continued to recruit new participants for the study. We partnered with the Muscular Dystrophy Association (MDA) and helped advertise the study to interested families. Since then, we have had families from across the country contact us about their interest in the study. We plan to continue to enroll a diverse population of DMD participants to contribute toward a better understanding of Duchenne Muscular Dystrophy.

Keywords: neurology, Duchenne muscular dystrophy, muscular dystrophy, cognition, neurodevelopment, x-linked disorder, DMD, DMD gene

Procedia PDF Downloads 74
55 Facilitating the Learning Environment as a Servant Leader: Empowering Self-Directed Student Learning

Authors: Thomas James Bell III

Abstract:

Pedagogy is thought of as one's philosophy, theory, or method of teaching. This study examines the science of learning, considering the forced reconsideration of effective pedagogy brought on by the aftermath of the 2020 coronavirus pandemic. With the aid of various technologies, online education holds both challenges and promise to enhance the learning environment if implemented to facilitate student learning. Behaviorism centers around the belief that the instructor is the sage on the classroom stage, using repetition techniques as the primary learning instrument. This approach to pedagogy ascribes complete control of the learning environment to the instructor and works best when students learn by answering questions with immediate feedback. Such structured learning reinforcement tends to guide students' learning without considering learners' independence and individual reasoning, and such activities may inadvertently stifle students' ability to develop critical thinking and self-expression skills. Fundamentally, liberationist pedagogy dismisses the notion that education is merely about students learning things; it is more about the way students learn. Alternatively, the liberationist approach democratizes the classroom by redefining the roles of teacher and student. The teacher is no longer viewed as the sage on the stage but as a guide on the side. Instead, this approach views students as creators of knowledge, not empty vessels to be filled with knowledge. Moreover, students are well suited to decide how best to learn and which areas need improvement. This study will explore the classroom instructor as a servant leader in the twenty-first century, which allows students to integrate technology that accommodates more individual learning styles. The researcher will examine the Professional Scrum Master (PSM I) exam pass rate results of 124 students in six sections of an Agile scrum course.
The students will be separated into two groups; the first group will follow a structured instructor-led course outlined by a course syllabus. The second group will consist of several small teams (ten or fewer members) of self-led and self-empowered students. The teams will conduct several event meetings, including sprint planning meetings, daily scrums, sprint reviews, and retrospective meetings, throughout the semester, with the instructor facilitating the teams' activities as needed. The methodology for this study will use a compare-means t-test to compare the mean exam pass rate of one group with that of the second group. A one-tailed test (i.e., less than or greater than) will be used, with the null hypothesis that the difference between the groups in the population is zero. The major findings will expand on the pedagogical approach that suggests pedagogy primarily exists in support of teacher-led learning, which has formed the pillars of traditional classroom teaching. But in light of the fourth industrial revolution, there is a fusion of learning platforms across the digital, physical, and biological worlds, with disruptive technological advancements in areas such as the Internet of Things (IoT), artificial intelligence (AI), 3D printing, robotics, and others.
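The planned compare-means t-test can be sketched with the Python standard library alone. The exam scores below are invented placeholders, not results from the 124 students; the function computes the pooled-variance t statistic for a one-tailed comparison of two independent groups:

```python
import math
import statistics

def one_tailed_t(sample_a, sample_b):
    """Independent two-sample t-test (pooled variance).

    Returns the t statistic for H1: mean(a) > mean(b);
    H0: the difference between the group means is zero.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se

# Hypothetical exam scores for the two groups (illustrative only).
instructor_led = [68, 72, 75, 70, 66, 74]
self_led = [78, 82, 76, 85, 80, 79]
t = one_tailed_t(self_led, instructor_led)
print(round(t, 3))
```

The resulting t statistic would then be compared against the critical value of the t distribution with na + nb - 2 degrees of freedom to decide whether to reject the null hypothesis.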

Keywords: pedagogy, behaviorism, liberationism, flipping the classroom, servant leader instructor, agile scrum in education

Procedia PDF Downloads 110
54 Comparison of the Effect of Heart Rate Variability Biofeedback and Slow Breathing Training on Promoting Autonomic Nervous Function Related Performance

Authors: Yi Jen Wang, Yu Ju Chen

Abstract:

Background: Heart rate variability (HRV) biofeedback can promote autonomic nervous function and sleep quality and reduce psychological stress. In HRV biofeedback training, it is hoped that, through the guidance of machine video or audio, the patient can breathe slowly according to his or her own heart rate changes so that the heart and lungs achieve resonance, thereby promoting autonomic nervous function; it has also been pointed out that slow breathing at six breaths per minute can likewise guide the patient to achieve cardiopulmonary resonance. However, no research has compared the effectiveness in achieving cardiopulmonary resonance of video- or audio-based HRV biofeedback training versus metronome-guided slow breathing. Purpose: To compare the promotion of autonomic nervous function between HRV biofeedback and slow breathing guided by a metronome. Method: This research uses an experimental design with convenience sampling; the cases are randomly divided into an HRV biofeedback training group and a slow breathing training group. The HRV biofeedback training group conducted HRV biofeedback training in the laboratory for four weeks and used a home training device for autonomous training, while the slow breathing training group conducted slow breathing training in the laboratory for four weeks, using a mobile phone app breathing metronome to guide the training and the same app for autonomous training at home. After the two groups were enrolled, and again four weeks after the intervention, autonomic nervous function-related performance was measured. The chi-square test, Student's t-test, and other statistical methods were used to analyze the results, with p < 0.05 as the threshold for statistical significance. Results: A total of 27 subjects were included in the analysis.
After four weeks of training, the HRV biofeedback training group showed significant improvement in HRV indexes (SDNN, RMSSD, HF, TP) and sleep quality. Although the stress index also decreased, it did not reach statistical significance. In the slow breathing training group, only sleep quality improved significantly after four weeks of training; the HRV indexes (SDNN, RMSSD, TP) all increased without reaching statistical significance, and although HF and stress indexes decreased, they were not statistically significant either. Comparing the two groups after training, the HF index improved significantly, reaching statistical significance, in the HRV biofeedback training group. Although sleep quality improved in both groups, the difference between them was not statistically significant. Conclusion: HRV biofeedback training is more effective in promoting autonomic nervous function than slow breathing training, but the effects on reducing stress and promoting sleep quality need to be explored with a larger sample. The results of this study can provide a reference for clinical or community health promotion. In the future, heart rate variability biofeedback training could also be integrated into the development of AI wearable devices, making it more convenient for people to train independently and get effective feedback in time.
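The time-domain HRV indexes reported here have standard definitions over RR (NN) intervals. A minimal Python sketch (the interval values are illustrative, not study data):

```python
import math
import statistics

def sdnn(rr_ms):
    """SDNN: sample standard deviation of all RR (NN) intervals, in ms."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# A short illustrative RR-interval series (milliseconds), not patient data.
rr = [800, 810, 790, 805, 795]
print(round(sdnn(rr), 2), round(rmssd(rr), 2))
```

Frequency-domain indexes such as HF and TP would additionally require spectral analysis (e.g. a periodogram of the resampled RR series), which is omitted here.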

Keywords: autonomic nervous function, HRV biofeedback, heart rate variability, slow breathing

Procedia PDF Downloads 148
53 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and cause inefficient use of resources. In particular, different approaches may be required for solving the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve. Such problems are called NP-hard (non-deterministic polynomial-time hard) in the literature. The main reasons for recommending metaheuristic algorithms for various problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices. Accordingly, this approach can also be used in trend application areas such as IoT, big data, and parallel structures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach. It is based on chaos theory and helps the relevant algorithm improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee groups: attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivations of chimpanzees. However, this algorithm struggles with convergence rate and with escaping local optimum traps when solving high-dimensional problems.
Although ChOA and some of its variants use strategies to overcome these problems, they are observed to be insufficient. Therefore, this study describes a newly expanded variant. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy in multidimensional problems, aiming at success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical and constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than, or equivalently to, the compared algorithms.
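The chaotic component described above is typically realized with a map such as the logistic map, whose deterministic but chaotic sequence replaces uniform random numbers when initializing (or perturbing) the population, improving diversity. The Python sketch below is a generic illustration of this idea, not the Ex-ChOA implementation itself; all parameter values are illustrative:

```python
def logistic_map_sequence(x0, n, r=4.0):
    """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k).

    With r = 4 and a suitable x0 in (0, 1), the sequence is chaotic
    and remains inside the open interval (0, 1).
    """
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_init(pop_size, dim, lower, upper, x0=0.7):
    """Initialize a metaheuristic population from a chaotic sequence
    instead of uniform random numbers, to improve diversity."""
    seq = logistic_map_sequence(x0, pop_size * dim)
    return [[lower + seq[i * dim + j] * (upper - lower) for j in range(dim)]
            for i in range(pop_size)]

pop = chaotic_init(pop_size=5, dim=3, lower=-10.0, upper=10.0)
```

The same chaotic sequence can also drive the coefficient updates inside the search loop; many "chaotic" variants of metaheuristics differ mainly in which random number is replaced.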

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 53
52 Elements of Creativity and Innovation

Authors: Fadwa Al Bawardi

Abstract:

In March 2021, the Saudi Arabian Council of Ministers issued a decision to form a committee called the "Higher Committee for Research, Development and Innovation," a committee linked to the Council of Economic and Development Affairs, chaired by the Chairman of the Council of Economic and Development Affairs, and concerned with the development of the research, development, and innovation sector in the Kingdom. To discuss the dimensions of this step, let us first try to answer the following questions: Is there a difference between creativity and innovation? What are the factors of creativity in the individual? Are they genetic mental factors, or factors that an individual acquires through learning? The methodology included surveys conducted on more than 500 individuals, male and female, between the ages of 18 and 60. And the answer is: "creativity" is the creation of a new idea, while "innovation" is the development of an already existing idea in a new, successful way. They are two sides of the same coin, as the "creative idea" needs to be developed and transformed into an "innovation" in order to achieve either strategic achievements at the level of countries and institutions, enhancing organizational intelligence, or achievements at the level of individuals. For example, the beginning of smartphones was just a creative idea from IBM in 1994, but the actual successful innovation in the manufacture, development, and marketing of these phones came through Apple later. Nor does creativity have to be hereditary. There are three basic factors for creativity. The first factor is "the presence of a challenge or an obstacle" that the individual faces and seeks to overcome through thinking, even if that thinking requires a long time.
The second factor is the "surrounding environment" of the individual, which includes science, training, experience gained, the ability to use techniques, and the ability to assess whether an idea is feasible. To achieve this factor, individuals must be aware of their own skills, strengths, hobbies, and the areas in which they can be creative, and they must also be self-confident and courageous enough to suggest new ideas. The third factor is "experience and the ability to accept risk and initial lack of success," then to learn from mistakes and try again tirelessly. There are tools and techniques that help the individual reach creative and innovative ideas, such as the Mind Maps tool, in which available information is drawn by writing a short word for each piece of information and connecting related information with clear lines, which aids logical thinking and correct vision. There is also a tool called "Flow Charts": graphics that show the sequence of data and expected results according to an ordered scenario of events and workflow steps, giving clarity to the ideas, their sequence, and what is expected of them. Other useful tools include the Six Hats tool, applied by a group of people for effective planning and detailed logical thinking, and the Snowball tool. All of these tools greatly help in organizing and arranging mental thoughts and making the right decisions, and all of them are easy to learn, apply, and use to reach creative and innovative solutions. The detailed figures and results of the conducted surveys are available upon request, with charts showing the percentages by gender, age group, and job category.

Keywords: innovation, creativity, factors, tools

Procedia PDF Downloads 31
51 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute issues of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. Every day, a massive number of crimes are committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and it can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from Sentient, a software company, uses data mining techniques to support the police (Sentient, 2017). The goals of this work are to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days.
Statistical models and neural network models were the two main groups into which the experiments were split. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The studies demonstrated that, in comparison to the other models, Gated Recurrent Units (GRU) produced better predictions. The crime records of 2005-2019 were collected from Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
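A single GRU step can be sketched in a few lines to show the gating that makes GRUs effective on sequence data such as daily crime counts. The NumPy code below is purely illustrative and unrelated to the actual Data Detective implementation; the weights are random and the dimensions arbitrary (bias terms are omitted for brevity):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU step: the gates decide how much of the previous hidden
    state to keep, which lets the network carry information over long
    histories without vanishing gradients."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # blend old and new state

rng = np.random.default_rng(0)
dim_in, dim_h = 4, 8
params = [rng.normal(scale=0.1, size=s)
          for s in [(dim_h, dim_in), (dim_h, dim_h)] * 3]
h = np.zeros(dim_h)
for _ in range(7):                             # e.g. one week of daily features
    x = rng.normal(size=dim_in)
    h = gru_step(x, h, params)
```

In practice one would use a framework implementation (e.g. a recurrent layer in a deep learning library) with a dense output head producing the seven-day forecast, rather than hand-written cells.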

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 139
50 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear

Authors: Grant Bifolchi

Abstract:

As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of growing and paramount concern. Ghost Gear, also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG), represents one of the greatest threats to the world's oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments and governmental and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats, and more from their respective bodies of water. However, these organizations' limited ability to effectively manage files and documents about the environmental problem further complicates matters. In Ghost Gear monitoring and management, organizations face additional complexities. Whether it is data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, or the application of enforcement hampered by disparate data, all of these factors place massive strains on organizations struggling to save the planet from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, BCom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to: • Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection, and disposal.
• Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards. • Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress—including acknowledging receipt of the report and sharing it with all pertinent stakeholders to ensure approvals are secured. • Enable and utilize Business Intelligence (BI) and Analytics to store and analyze data to optimize organizational performance, maintain anytime-visibility of report status, user accountability, scheduling, management, and foster governmental transparency. • Maintain Compliance Reporting through highly defined, detailed and automated reports—enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners.

Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology

Procedia PDF Downloads 72
49 Emotion and Risk Taking in a Casino Game

Authors: Yulia V. Krasavtseva, Tatiana V. Kornilova

Abstract:

Risk-taking behaviors are not only dictated by cognitive components but also involve emotional aspects. Anticipatory emotions, involving both cognitive and affective mechanisms, play a role in decision-making in general and risk-taking in particular. Affective reactions are prompted when an expectation or prediction is either validated or invalidated by the achieved result. This study aimed to combine predictions, anticipatory emotions, affective reactions, and personality traits in the context of risk-taking behaviors. An experimental online method, Emotion and Prediction In a Casino (EPIC), based on a casino-like roulette game, was used. In a series of choices, the participant is presented with progressively riskier roulette combinations, in which the potential sums of wins and losses increase with each choice, and the participant is given a choice: to 'walk away' with the current sum of money or to 'play' the displayed roulette, thus accepting the implicit risk. Before and after the result is displayed, participants also rate their emotions using the Self-Assessment Manikin [Bradley, Lang, 1994], picking a picture representing the intensity of pleasure, arousal, and dominance. The following personality measures were used: 1) Personal Decision-Making Factors [Kornilova, 2003], assessing risk and rationality; 2) the I7 Impulsivity Questionnaire [Kornilova, 1995], assessing impulsiveness, risk readiness, and empathy; and 3) the Subjective Risk Intelligence Scale [Craparo et al., 2018], assessing negative attitude toward uncertainty, emotional stress vulnerability, imaginative capability, and problem-solving self-efficacy. Two groups of participants took part in the study: 1) 98 university students (Mage=19.71, SD=3.25; 72% female) and 2) 94 online participants (Mage=28.25, SD=8.25; 89% female). Online participants were recruited via social media.
Students with high rationality rated their pleasure and dominance before and after choices as lower (ρ from -2.6 to -2.7, p < 0.05). Those with high levels of impulsivity rated their arousal before finding out their result as lower (ρ from 2.5 to 3.7, p < 0.05), while also rating their dominance as low (ρ from -3 to -3.7, p < 0.05). Students prone to risk rated their pleasure and arousal before and after as higher (ρ from 2.5 to 3.6, p < 0.05). High empathy was positively correlated with arousal after learning the result. High emotional stress vulnerability correlates positively with arousal and pleasure after the choice (ρ from 3.9 to 5.7, p < 0.05). A negative attitude toward uncertainty is correlated with high anticipatory and reactive arousal (ρ from 2.7 to 5.7, p < 0.05). High imaginative capability correlates negatively with anticipatory and reactive dominance (ρ from -3.4 to -4.3, p < 0.05). Pleasure (.492), arousal (.590), and dominance (.551) before and after the result were positively correlated. Higher predictions positively correlated with reactive pleasure and arousal. In a riskier scenario (6/8 chances to win), anticipatory arousal was negatively correlated with the pleasure emotion (-.326) and vice versa (-.265). These correlations occur regardless of the roulette outcome. In conclusion, risk-taking behaviors are linked not only to personality traits but also to anticipatory emotions and affect in a modeled casino setting. Acknowledgment: The study was supported by the Russian Foundation for Basic Research, project 19-29-07069.
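Rank correlations of the kind reported above (ρ) are commonly computed as Spearman's rho. A minimal Python sketch, assuming untied ranks; the ratings below are invented, not study data:

```python
def ranks(values):
    """Rank values from 1..n (no tie handling, for illustration only)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(xs, ys):
    """Spearman's rho via the classic formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), valid without ties."""
    n = len(xs)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical ratings: predictions vs. reactive pleasure (1-9 SAM scale).
predictions = [2, 5, 3, 8, 6, 7]
pleasure    = [1, 4, 2, 9, 5, 7]
print(round(spearman_rho(predictions, pleasure), 3))
```

A full analysis would use a library implementation that handles ties and reports the p-value alongside rho.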

Keywords: anticipatory emotions, casino game, risk taking, impulsiveness

Procedia PDF Downloads 113
48 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data.
Due to the specificity of CNN-type networks, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with models based on linear regression. The numerical tests, carried out using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
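The convolutional and pooling operations mentioned above can be illustrated on a one-dimensional toy series. The Python sketch below is not the studied CNN; the PM10 values and the kernel are invented for illustration:

```python
def conv1d_valid(signal, kernel):
    """'Valid' cross-correlation, the core operation of a CNN layer:
    slide the kernel over the signal and take dot products."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(values, size):
    """Non-overlapping max pooling to downsample the feature map."""
    return [max(values[i:i + size]) for i in range(0, len(values), size)]

# A toy hourly PM10 series and an edge-detecting kernel (illustrative only).
pm10 = [20, 22, 30, 55, 60, 58, 40, 25]
kernel = [1, 0, -1]        # responds to rises and falls in concentration
features = conv1d_valid(pm10, kernel)
pooled = max_pool(features, 2)
print(features, pooled)    # prints [-10, -33, -30, -3, 20, 33] [-10, -3, 33]
```

A real model stacks many such filters (learned, not hand-chosen), interleaves them with pooling, and ends with a dense layer producing the 24-element forecast vector.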

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 112
47 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of aging materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess their condition and prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting changes in the static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capability to deal with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristic techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA), and neural networks (NN), and have promisingly applied these methods to the field of structural identification.
Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems and, unlike classical gradient-based optimization techniques, make a global solution search possible. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect fiber property variation in laminated composite plates from a micromechanical point of view. A finite element model is used to study free vibrations of laminated composite plates with fiber stiffness degradation. To solve the inverse problem using the combined method, this study uses only the first mode shapes of the structure as the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
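The inverse-identification loop the abstract describes can be sketched in miniature. The snippet below is a hedged illustration only: it substitutes a springs-in-series surrogate for the ABAQUS finite element model, and the stiffness, mass, population size, and single-frequency objective are invented for the example, not values from the study. The GA searches for per-element stiffness retention factors whose predicted fundamental frequency matches a "measured" one.

```python
import random
import math

random.seed(0)

N_ELEMS = 4      # coarse discretization of the laminate (assumed)
K0 = 2.0e6       # nominal element stiffness, N/m (illustrative)
MASS = 10.0      # lumped mass, kg (illustrative)

def fundamental_freq(damage):
    """First natural frequency (Hz) of a springs-in-series surrogate.
    Each element retains a fraction damage[i] of its nominal stiffness."""
    compliance = sum(1.0 / (K0 * max(d, 1e-6)) for d in damage)
    return math.sqrt((1.0 / compliance) / MASS) / (2.0 * math.pi)

# "Measured" frequency generated from an assumed true damage pattern
TRUE_DAMAGE = [1.0, 0.65, 1.0, 0.85]
F_MEASURED = fundamental_freq(TRUE_DAMAGE)

def fitness(ind):
    # Higher is better: penalize frequency mismatch
    return -abs(fundamental_freq(ind) - F_MEASURED)

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Blend crossover: convex combination of two parents
    alpha = random.random()
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

def mutate(ind, rate=0.2, sigma=0.05):
    # Gaussian perturbation, clamped to the feasible range [0.05, 1.0]
    return [min(1.0, max(0.05, d + random.gauss(0, sigma)))
            if random.random() < rate else d for d in ind]

def identify(pop_size=60, generations=120):
    pop = [[random.uniform(0.05, 1.0) for _ in range(N_ELEMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)  # elitism: keep the best individual
        pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                         for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

best = identify()
```

Note that a single frequency leaves the inverse problem ill-posed (many damage patterns give the same effective stiffness), which is why the study supplements frequency data with first mode shapes.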

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 240
46 Optimal Framework of Policy Systems with Innovation: Use of Strategic Design for Evolution of Decisions

Authors: Yuna Lee

Abstract:

In the current policy process, there has been growing interest in more open approaches that fold creativity and innovation, drawn from foresight groups composed of both the public and experts, into scientific, data-driven foresight methods for more effective policymaking. In particular, citizen participation as a form of collective intelligence in design-led policymaking has developed at the global level, and human-centred design thinking is considered one of the most promising methods for strategic foresight. Yet there is no common theoretical foundation for a comprehensive approach to the current situation and the post-COVID-19 era, and substantial changes in policymaking practice remain limited, proceeding by trial and error. This project hypothesized that rigorously developed policy systems and tools that support strategic foresight by taking public understanding into account could maximize ways to create new possibilities for a preferable future; however, doing so requires a better understanding of behavioural insights, including individual and cultural values, profit motives and needs, and psychological motivations, to implement holistic and multilateral foresight and create more positive possibilities. To what extent is a policymaking system theoretically possible that integrates holistic, comprehensive foresight with policy implementation, given that theory and practice are, in reality, distinct and disconnected? What components and environmental conditions should a strategic foresight system include to enhance policymakers' capacity to predict alternative futures, or to detect future uncertainties more accurately? And, measured against those required conditions, what are the environmental vulnerabilities of the current policymaking system? 
In this light, this research examines how effectively policymaking practices have been implemented through the synthesis of scientific, technology-oriented innovation with strategic design for tackling complex societal challenges and deriving deeper insights to make society greener and more liveable. The study conceptualizes a new collaborative mode of strategic foresight that aims to maximize mutual benefits between policy actors and citizens through cooperation, drawing on evolutionary game theory. It applies a mixed methodology, including interviews with policy experts, to cases in which digital transformation and strategic design provided future-oriented solutions or directions for cities’ sustainable development goals and society-wide urgent challenges such as COVID-19. The results indicate that the artistic and sensory interpretive capabilities fostered by strategic design give ideas a concrete form that connects the present to the future, and that they enhance understanding and active cooperation among decision-makers, stakeholders, and citizens. Ultimately, the improved theoretical foundation proposed in this study is expected to support strategic responses to the highly interconnected changes of the post-COVID-19 world.
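The "cooperation stemming from evolutionary game theory" invoked above can be illustrated with textbook replicator dynamics for a stag-hunt game, in which mutual cooperation pays best but is risky. The payoff values, time step, and initial trust levels below are illustrative assumptions for exposition, not parameters from the study.

```python
def replicator(x, payoff, steps=500, dt=0.1):
    """Replicator dynamics for a symmetric 2-strategy game.
    x: initial fraction of cooperators; payoff: ((R, S), (T, P))."""
    (R, S), (T, P) = payoff
    for _ in range(steps):
        f_c = R * x + S * (1 - x)          # expected payoff to a cooperator
        f_d = T * x + P * (1 - x)          # expected payoff to a defector
        f_avg = x * f_c + (1 - x) * f_d    # population-average payoff
        x += dt * x * (f_c - f_avg)        # strategies above average grow
        x = min(1.0, max(0.0, x))
    return x

# Stag hunt: mutual cooperation (R=5) beats mutual defection (P=3),
# but a lone cooperator gets nothing (S=0)
STAG_HUNT = ((5, 0), (3, 3))
high_trust = replicator(0.7, STAG_HUNT)  # above the trust threshold
low_trust = replicator(0.5, STAG_HUNT)   # below the trust threshold
```

With these payoffs the threshold sits at a 0.6 cooperator share: populations starting above it converge to full cooperation, those below collapse to defection, which captures why a policy system's job in this framing is to raise initial trust past the tipping point rather than to enforce cooperation directly.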

Keywords: policymaking, strategic design, sustainable innovation, evolution of cooperation

Procedia PDF Downloads 166