Search results for: multimedia retrieval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 524

14 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers

Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang

Abstract:

In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Centrality metrics from network analysis offer a fresh perspective on how information flow and knowledge sharing within the network influence patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents; if two patents reference the same scientific papers, a connection is established between them, serving as an edge. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. A negative binomial model is then employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
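
The sketch below illustrates the two core steps of this pipeline on synthetic data: building a bibliographic-coupling network (patents linked when they cite the same paper) and fitting a negative binomial model with a quadratic centrality term to test for an inverted U. It is a minimal illustration, not the authors' code; all data, column names, and parameters are hypothetical.

```python
# Sketch: coupling network from patent-to-paper citations, then a
# negative binomial test of an inverted-U centrality effect.
import itertools

import networkx as nx
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical input: one row per (patent, cited scientific paper) pair.
patents = [f"P{i}" for i in range(60)]
papers = [f"S{i}" for i in range(30)]
refs = pd.DataFrame({
    "patent": rng.choice(patents, 300),
    "snpr": rng.choice(papers, 300),
}).drop_duplicates()

# Patents that cite the same paper are coupled (edge in the network).
G = nx.Graph()
G.add_nodes_from(patents)
for _, group in refs.groupby("snpr"):
    for u, v in itertools.combinations(sorted(set(group["patent"])), 2):
        G.add_edge(u, v)

feat = pd.DataFrame(index=patents)
feat["degree"] = pd.Series(nx.degree_centrality(G))
feat["betweenness"] = pd.Series(nx.betweenness_centrality(G))
feat["closeness"] = pd.Series(nx.closeness_centrality(G))

# Hypothetical outcome: forward citations received by each patent.
feat["citations"] = rng.poisson(5, len(feat))

# Negative binomial model with a quadratic term; an inverted U shows up
# as a positive linear and a negative squared coefficient.
X = sm.add_constant(pd.DataFrame({
    "degree": feat["degree"],
    "degree_sq": feat["degree"] ** 2,
}))
fit = sm.GLM(feat["citations"], X,
             family=sm.families.NegativeBinomial()).fit()
print(fit.summary())
```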

Keywords: centrality, patent coupling network, patent influence, social network analysis

Procedia PDF Downloads 24
13 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core

Authors: Yashas Bedre Raghavendra, Pim Vullers

Abstract:

This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols - APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus) - to enable seamless integration with the main CPU (Central Processing Unit) and enhance the coprocessor’s algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (Elliptic Curve Cryptography), RSA (Rivest-Shamir-Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (Direct Memory Access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the 128-bit RISC-V architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
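
As a rough illustration of why the wider datapath cuts instruction counts, the sketch below tallies the word-level multiply and add operations that a schoolbook multiplication of a 256-bit operand (typical of ECC field arithmetic) needs at different word widths. It is a back-of-the-envelope model, not a measurement of this coprocessor, and the operation counts are deliberately simplified.

```python
# Illustrative model: schoolbook n-limb multiplication costs roughly
# n^2 word multiplies plus carry-propagating adds, so doubling the
# word width quarters the multiply count.

OPERAND_BITS = 256  # e.g., an ECC P-256 field element

for word_bits in (32, 64, 128):
    limbs = OPERAND_BITS // word_bits
    muls = limbs ** 2              # partial products
    adds = 2 * limbs ** 2 - limbs  # rough carry/accumulate count
    print(f"{word_bits:>3}-bit core: {limbs} limbs, "
          f"~{muls} MULs, ~{adds} ADDs per field multiply")
```

On this simple model, a 128-bit core needs about 4 multiplies per field multiplication where a 32-bit core needs about 64, which is the intuition behind the reduced instruction count claimed for the 128-bit architecture.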

Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction

Procedia PDF Downloads 43
12 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can help researchers better understand the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models - Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression - with early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
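
The sketch below mirrors the core loop of this approach on synthetic data: fit a LightGBM regressor on early patent indicators and use SHAP's TreeExplainer to rank feature contributions. Feature names, data, and hyperparameters are hypothetical placeholders, not the study's configuration.

```python
# Sketch: LightGBM regression on early patent indicators, explained
# with SHAP values. Data are simulated for illustration only.
import lightgbm as lgb
import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = pd.DataFrame({
    "novelty": rng.random(n),
    "n_owners": rng.integers(1, 5, n),
    "n_backward_citations": rng.integers(0, 40, n),
    "n_independent_claims": rng.integers(1, 10, n),
})
# Hypothetical target, e.g., forward citations as technical impact.
y = 3 * X["novelty"] + 0.1 * X["n_backward_citations"] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = lgb.LGBMRegressor(n_estimators=200).fit(X_train, y_train)

# TreeExplainer quantifies each feature's contribution per prediction;
# averaging absolute SHAP values gives a global importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
      .sort_values(ascending=False))
```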

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 18
11 Rethinking Urban Voids: An Investigation beneath the Kathipara Flyover, Chennai into a Transit Hub by Adaptive Utilization of Space

Authors: V. Jayanthi

Abstract:

Urbanization and the pace of urbanization have increased tremendously in the last few decades, and more towns are now being converted into cities. The urbanization trend is seen all over the world but is becoming most dominant in Asia. Today, the scale of urbanization in India is so huge that Indian cities, including Bangalore, Hyderabad, Pune, Chennai, Delhi, and Mumbai, are among the fastest-growing in the world. Urbanization remains the single predominant factor continuously linked to the destruction of urban green spaces. Taking Chennai, which is suffering from rapid deterioration of its green spaces, as a case study, this paper explores the key factors besides urbanization that are responsible for the destruction of green spaces. The paper relied on triangulated data collection techniques such as interviews, focus group discussions, personal observation, and the retrieval of archival data. It was observed that apart from urbanization, problems of ownership of green-space land, the low priority given to green spaces, poor maintenance, weak enforcement of development controls, wastage of underpass spaces, and uncooperative attitudes of the general public play a critical role in the destruction of urban green spaces. The paper therefore narrows down to the point that, for a city to have proper, sustainable urban green space, broader city development plans are essential. Though rapid urbanization is an indicator of positive development, it is also accompanied by a host of challenges. Chennai lost a lot of greenery as the city urbanized rapidly, which led to a steep fall in vegetation cover. Environmental deterioration will be the big price we pay if Chennai continues to grow at the expense of greenery. Soaring skyscrapers, multistoried complexes, gated communities, and villas frame the iconic skyline of today’s Chennai, which reveals that we overlook the importance of our green cover, needed to balance our urban and lung spaces. Chennai, with a clumped landscape at the center of the city, is predicted to convert 36% of its total area into urban areas by 2026. One major issue is that a city designed and planned in isolation creates underused, neglected spaces all around it. These urban voids are dead, underused, or unused spaces in cities, formed through inefficient decision-making, poor land management, and poor coordination. Urban voids have huge potential for creating a stronger urban fabric if exploited as public gathering spaces, pocket parks, or plazas, or simply to enhance the public realm, rather than serving as grounds for dumped debris and encroachments. Flyovers need to justify their existence by being more than just traffic and transport solutions. The vast, unused space below the Kathipara flyover is a case in point; this flyover connects three major routes: Tambaram, Koyambedu, and Adyar. This research focuses on the concept of urban voids: how the voids under flyovers can be used in the placemaking process, and how these neglected spaces beneath flyovers can become part of the urban realm through urban design and landscaping.

Keywords: landscape design, flyovers, public spaces, reclaiming lost spaces, urban voids

Procedia PDF Downloads 225
10 The Istrian Istrovenetian-Croatian Bilingual Corpus

Authors: Nada Poropat Jeletic, Gordana Hrzica

Abstract:

Bilingual conversational corpora represent a meaningful and the most comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingualism research, since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, the number of languages used and the amount of each). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. This paper therefore aims to describe the methodological challenges that apply to the design of the conversational Istrian Istrovenetian-Croatian Bilingual Corpus. Croatian is the first official language of the officially bilingual Croatian-Italian Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca on the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria, and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, its public availability allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication in order to maximise the naturalness of the collected conversational data. All speakers have provided written informed consent, in which they agree to be recorded at a random point within one month of signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and about language exposure and usage in their social networks. The recordings are being transcribed, phonologically adapted within a standard-sized orthographic form, coded, and segmented (speech streams are segmented into communication units based on syntactic criteria), and are marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus currently consists of transcribed sound recordings of 36 bilingual speakers; the target is to publish the whole corpus by the end of 2020, sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria to ensure representativeness (participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community. Research on communities with societal bilingualism will contribute to the growing body of work on bilingualism and multilingualism, especially regarding topics such as language dominance, language attrition and loss, interference, and code-switching.
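
To make the transcription conventions concrete, here is a toy sketch that tallies code-switched tokens per speaker from CHAT-style main tiers. It assumes, as a simplification, that switched words carry an "@s" suffix; the real CHAT/CLAN toolkit offers far richer markup and analysis, so this is only illustrative.

```python
# Toy sketch: count code-switched tokens per speaker in a CHAT-style
# transcript. The "@s" marker and the sample lines are simplified
# stand-ins for the actual CHAT conventions.
from collections import Counter

transcript = """\
*ANA:\tandemo a casa adesso .
*IVO:\tdobro@s , ci vedemo domani .
*ANA:\tva bene .
"""

switches = Counter()
utterances = 0
for line in transcript.splitlines():
    if not line.startswith("*"):
        continue  # skip header (@) and dependent (%) tiers
    speaker, _, utterance = line.partition(":")
    utterances += 1
    switches[speaker.lstrip("*")] += sum(
        1 for token in utterance.split() if token.endswith("@s"))

print(f"{utterances} utterances; code-switched tokens per speaker: "
      f"{dict(switches)}")
```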

Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology

Procedia PDF Downloads 109
9 Familiarity with Intercultural Conflicts and Global Work Performance: Testing a Theory of Recognition Primed Decision-Making

Authors: Thomas Rockstuhl, Kok Yee Ng, Guido Gianasso, Soon Ang

Abstract:

Two meta-analyses show that intercultural experience is not related to intercultural adaptation or performance in international assignments. These findings have prompted calls for a deeper grounding of research on international experience in the phenomenon of global work. Two issues, in particular, may limit current understanding of the relationship between international experience and global work performance. First, intercultural experience is too broad a construct and may not sufficiently capture the essence of global work, which to a large part involves sensemaking and managing intercultural conflicts. Second, the psychological mechanisms through which intercultural experience affects performance remain under-explored, resulting in a poor understanding of how experience is translated into learning and performance outcomes. Drawing on recognition primed decision-making (RPD) research, the current study advances a cognitive processing model to highlight the importance of intercultural conflict familiarity. Compared to intercultural experience, intercultural conflict familiarity is a more targeted construct that captures individuals’ previous exposure to dealing with intercultural conflicts. Drawing on RPD theory, we argue that individuals’ intercultural conflict familiarity enhances their ability to make accurate judgments and generate effective responses when intercultural conflicts arise. In turn, the ability to make accurate situation judgments and effective situation responses is an important predictor of global work performance. A relocation program within a multinational enterprise provided the context to test these hypotheses using a time-lagged, multi-source field study. Participants were 165 employees (46% female, with an average of 5 years of global work experience) from 42 countries who relocated from country offices to regional offices as part of a global restructuring program. Within the first two weeks of transfer to the regional office, employees completed measures of their familiarity with intercultural conflicts, cultural intelligence, cognitive ability, and demographic information. They also completed an intercultural situational judgment test (iSJT) to assess their situation judgment and situation response. The iSJT comprised four validated multimedia vignettes of challenging intercultural work conflicts and prompted employees to provide protocols of their situation judgment and situation response. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded the quality of employees’ situation judgment and situation response. Three months later, supervisors rated employees’ global work performance. Results using multilevel modeling (vignettes nested within employees) support the hypotheses that greater familiarity with intercultural conflicts is positively associated with better situation judgment and that situation judgment mediates the effect of intercultural familiarity on situation response quality. Also, aggregated situation judgment and situation response quality both predicted supervisor-rated global work performance. Theoretically, our findings highlight the important but under-explored role of familiarity with intercultural conflicts, shifting attention away from the general nature of international experience assessed in terms of the number and length of overseas assignments. Second, our cognitive approach premised on RPD theory offers a new theoretical lens for understanding the psychological mechanisms through which intercultural conflict familiarity affects global work performance. Third, and importantly, our study contributes to the global talent identification literature by demonstrating that the cognitive processes engaged in resolving intercultural conflicts predict actual performance in the global workplace.
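
A minimal sketch of the analytic setup is below: simulated vignette-level judgment scores nested within employees, fitted with a random-intercept multilevel model via statsmodels. The data are simulated and the mediation and supervisor-rating steps are omitted, so this is illustrative only.

```python
# Sketch: random-intercept multilevel model (vignettes nested within
# employees) relating conflict familiarity to judgment quality.
# All data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_emp, n_vignettes = 165, 4
df = pd.DataFrame({
    "employee": np.repeat(np.arange(n_emp), n_vignettes),
    "familiarity": np.repeat(rng.normal(0, 1, n_emp), n_vignettes),
})
# Employee-level random effect plus vignette-level noise.
employee_effect = np.repeat(rng.normal(0, 0.5, n_emp), n_vignettes)
df["judgment"] = (0.4 * df["familiarity"] + employee_effect
                  + rng.normal(0, 1, len(df)))

model = smf.mixedlm("judgment ~ familiarity", df, groups=df["employee"])
print(model.fit().summary())
```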

Keywords: intercultural conflict familiarity, job performance, judgment and decision making, situational judgment test

Procedia PDF Downloads 145
8 Thematic Analysis of Ramayana Narrative Scroll Paintings: A Need for Knowledge Preservation

Authors: Shatarupa Thakurta Roy

Abstract:

Alongside the limelight of mainstream academic practice in Indian art exists a significant body of habitual art practices that remain mutually susceptible in their contemporary forms. The narrative folk paintings of regional India have successfully dispersed social messages to their audiences through pulsating pictures and orations. The paper consists of images from narrative scroll paintings on the ‘Ramayana’ theme from various neighboring states and districts of India, describing their subtle differences in style of execution, method, and use of material. Despite sharing commonness in the choice of subject matter, habitual and ceremonial Indian folk art in its formative phase thrived within isolated locations, yielding remarkable variety in art styles. The differences in style arose district-wise, caste-wise, and even gender-wise. An open flow is evident only in contemporary expressions, as a result of substantial changes in social structures, modes of communication, cross-cultural exposure, and multimedia interactivity. To decipher the complex nature of the popular cultural taste of contemporary India, it is important to categorically identify its roots in vernacular symbolism. The realization of modernity through European primitivism was elevated into a perplexing identity at the Indian cultural margin in the light of nationalist and postcolonial ideology. To trace the guiding factor that has still managed to retain ‘Indianness’ in today’s Indian art, researchers need evidence from the past that in most instances is yet to be listed. The works are commonly created on ephemeral foundations. The artworks are also often found in an endangered state and hence are not suited to frequent handling. Museums lack proper technological guidelines to preserve them. Even though restoration activities are emerging in the country, the existing withered and damaged artworks are under threat of perishing. The immediacy of digital archiving is therefore envisioned as an alternative to save this cultural legacy. The method of this study is twofold. It primarily justifies the richness of the evidence by conducting a categorical aesthetic analysis. The study is supported by comments on stylistic variants, thematic aspects, and iconographic identities, alongside their anthropological and anthropomorphic significance. Further, it explores possible ways of cultural preservation to ensure cultural sustainability, including technological intervention in the form of digital transformation as an altered paradigm for better accessibility to the available resources. The study duly emphasizes visual description in order to culturally interpret and judge the rare visual evidence, following Feldman’s four-step method of formal analysis combined with thematic explanation. A habitual design that emerges and thrives within complex social circumstances may experience change, placing its principal philosophy at risk through shuffling and alteration over time. A tradition that breathes in the modern setup struggles to maintain the timeless values that drive its creative flow. Thus, the paper hypothesizes the survival and further growth of this practice within the dynamics of time and concludes by recognizing the urgency of transforming the implicitness of its knowledge into explicit records.

Keywords: aesthetic, identity, implicitness, paradigm

Procedia PDF Downloads 340
7 Design and Implementation of an Affordable Electronic Medical Records in a Rural Healthcare Setting: A Qualitative Intrinsic Phenomenon Case Study

Authors: Nitika Sharma, Yogesh Jain

Abstract:

Introduction: An efficient information system helps in improving service delivery and provides the foundation for policy and regulation of the other building blocks of a health system. Health care organizations require an integrated working of their various sub-systems. Efficient EMR software boosts teamwork amongst the various sub-systems, thereby resulting in improved service delivery. Although there has been a huge impetus for EMR under the Digital India initiative, it has still not been mandated in India and is generally implemented only in well-funded public or private healthcare organizations. Objective: The study was conducted to understand the factors that lead to the successful adoption of an affordable EMR in a low-resource healthcare organization. It intended to understand the design of the EMR and address the solutions to the challenges faced in its adoption. Methodology: The study was conducted in a non-profit registered healthcare organization that has been providing healthcare facilities to more than 2500 villages, including certain areas that are difficult to access. The data were collected with the help of field notes, in-depth interviews, and participant observation. A total of 16 participants using the EMR from different departments were enrolled via a purposive sampling technique. The participants included in the study were working in the organization before the implementation of the EMR system. The study was conducted over a one-month period, from 25 June to 20 July 2018. Ethical approval was obtained from the institute, along with the prior consent of the participants. Data analysis: A word document of more than 4000 words was obtained after transcribing and translating the answers of the respondents. It was further analyzed by focused coding - a line-by-line review of the transcripts, underlining words, phrases, or sentences that might suggest themes - to conduct thematic narrative analysis. Results: Based on the answers, the results were thematically grouped under four headings: 1. governance of the organization, 2. architecture and design of the software, 3. features of the software, 4. challenges faced in adoption and the solutions to address them. It was inferred that the successful implementation was attributed to the easy and comprehensive design of the system, which has facilitated not only easy data storage and retrieval but also the construction of a decision support system for the staff. Portability has led to increased acceptance by physicians. The proper division of labor, increased efficiency of staff, incorporation of auto-correction features, and facilitation of task shifting have led to increased acceptance amongst users of various departments. Geographical inhibitions, low computer literacy, and high patient load were the major challenges faced during implementation. Despite dual efforts made both by the architects and administrators to combat these challenges, the organization still faces certain ongoing challenges. Conclusion: Whenever any new technology is adopted, there are certain innovators, early adopters, late adopters, and laggards. The same pattern was followed in the adoption of this software. The challenges were overcome with the joint efforts of the organization's administrators and users as well. This case study thereby provides a framework for implementing similar systems in the public sector of countries that are struggling to digitize healthcare amid a crunch of human and financial resources.

Keywords: EMR, healthcare technology, e-health, EHR

Procedia PDF Downloads 79
6 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis

Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski

Abstract:

The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolution, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be carried out effectively, as the resolution level closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on the resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application. The cloud service-based approach, applied in the GeoCubes Finland repository, enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows for the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country on four levels are readily available from the GeoCubes platform. In addition to the direct delivery of raster data, the service also supports the so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data is downloaded on demand, from the spatial area and resolution level required at each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository helps expert users introduce new kinds of interactive online analysis operations.
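
The access pattern described above, reading only the window and resolution an application needs from a cloud-optimized raster, can be sketched with rasterio as below. The URL and coordinates are placeholders, and the snippet shows generic cloud-optimized-GeoTIFF usage rather than the actual GeoCubes API.

```python
# Sketch: resolution-aware access to a cloud-optimized raster; only
# the window covering the area of interest is fetched, decimated to a
# coarse shape suited to the display scale. URL is hypothetical.
import rasterio
from rasterio.windows import from_bounds

URL = "https://example.org/geocubes/elevation_cog.tif"  # placeholder

with rasterio.open(URL) as src:
    # Area of interest in the dataset's CRS (placeholder coordinates).
    window = from_bounds(380000, 6670000, 400000, 6690000, src.transform)
    # A smaller out_shape lets the driver serve the request from a
    # matching overview level instead of full-resolution data.
    data = src.read(1, window=window, out_shape=(256, 256))

print(data.shape, data.dtype)
```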

Keywords: cloud service, geodata cube, multiresolution, raster geodata

Procedia PDF Downloads 104
5 Framework to Organize Community-Led Project-Based Learning at a Massive Scale of 900 Indian Villages

Authors: Ayesha Selwyn, Annapoorni Chandrashekar, Kumar Ashwarya, Nishant Baghel

Abstract:

Project-based learning (PBL) activities are typically implemented in technology-enabled schools by highly trained teachers. In rural India, students have limited access to technology and quality education, so implementing typical PBL activities is challenging. This study details how Pratham Education Foundation’s Hybrid Learning model was used to implement two PBL activities related to music in 900 remote Indian villages with 46,000 students aged 10-14. The activities were completed by 69% of the groups, which submitted a total of 15,000 videos (completed projects). Pratham’s H-Learning model reaches 100,000 students aged 3-14 in 900 Indian villages. The community-driven model engages students in 20,000 self-organized groups outside of school. The students are guided by 6,000 youth volunteers and 100 facilitators. The students partake in learning activities across subjects with the support of community stakeholders and offline digital content on shared Android tablets. A training and implementation toolkit for PBL activities is designed by subject experts. This toolkit is essential in ensuring efficient implementation of activities, as facilitators aren’t highly skilled and have limited access to training resources. The toolkit details the activity at three levels of student engagement - enrollment, participation, and completion. The subject experts train project leaders and facilitators, who train youth volunteers. Volunteers need to be trained on how to execute the activity and guide students. The training is focused on building the volunteers’ capacity to enable students to solve problems, rather than developing the volunteers’ subject-related knowledge. This structure ensures that continuous intervention by subject matter experts isn’t required, and the onus of judging creativity skills is placed on community members. The 46,000 students in the H-Learning program were engaged in two PBL activities related to music from April to June 2019. For one activity, students had to conduct a “musical survey” in their village by designing a survey and shooting and editing a video. This activity aimed to develop students’ information retrieval, data gathering, teamwork, communication, project management, and creativity skills. It also aimed to identify talent and document local folk music. The second activity, “Pratham Idol”, was a singing competition in which students participated in performing, producing, and editing videos. This activity aimed to develop students’ teamwork and creative skills and give students a creative outlet. Students showcased their completed projects at village fairs, wherein a panel of community members evaluated the videos. The shortlisted videos from all villages were further evaluated by experts, who identified students and adults to participate in advanced music workshops. The H-Learning framework enables students in low-resource settings to engage in PBL and develop relevant skills by leveraging community support and using video creation as a tool. In rural India, students do not have access to high-quality education or infrastructure; therefore, designing activities that community members can implement after limited training is essential. The subject experts intervene minimally once the activity is initiated, which significantly reduces the cost of implementation and allows the activity to be implemented at a massive scale.

Keywords: community supported learning, project-based learning, self-organized learning, education technology

Procedia PDF Downloads 149
4 Challenging Airway Management for Tracheal Compression Due to a Rhabdomyosarcoma

Authors: Elena Parmentier, Henrik Endeman

Abstract:

Introduction: Large mediastinal masses often present diagnostic and clinical challenges due to compression of the respiratory and hemodynamic systems. We present a case of a mediastinal mass with symptomatic mechanical compression of the trachea, resulting in challenging airway management. Methods: A 66-year-old male presented complaining of progressive dysphagia. Initial esophagogastroscopy revealed a stenosis secondary to external compression; biopsies were inconclusive. An additional CT scan showed a large mediastinal mass of unknown origin, situated between the vertebrae and the esophagus. Symptoms progressed, and the patient developed dyspnea and stridor. A new CT showed rapid growth of the mass with compression of the trachea, from the subglottis to just above the carina. A covered tracheal stent was successfully placed. Endobronchial ultrasound revealed a large irregular mass without tracheal invasion; biopsies were taken. Four days after stent placement, the patient’s condition deteriorated, with worsening stridor, dyspnea, and desaturation. Migration of the tracheal stent into the right main bronchus was seen on chest X-ray, with obstruction of the left main bronchus and secondary atelectasis. Different methods have been described in the literature for tracheobronchial stent removal (surgical, endoscopic, fluoroscopy-guided); our first choice in this case was flexible bronchoscopy. However, this revealed tracheal compression above the migrated stent, and passage of the scope proved impossible. The patient was admitted to the ICU, high-flow nasal oxygen therapy was started, and the situation stabilized, giving time for extensive assessment and preparation of the airway management approach. Close cooperation between the intensivist, pulmonologist, anesthesiologist, and otorhinolaryngologist was essential. Results: A protocol was drawn up for emergency situations in case of sudden deterioration. Given the increased risk of additional tracheal compression after administration of neuromuscular blocking agents, an approach with awake fiberoptic intubation maintaining spontaneous ventilation was proposed. However, intubation without retrieval of the tracheal stent was deemed undesirable due to expected massive shunting over the atelectatic left lung. As a rescue option, the assistance of extracorporeal circulation was considered, and a perfusionist was kept on standby. The patient remained stable and was transferred to the operating theatre. High-frequency jet ventilation under general anesthesia resulted in desaturations of up to 50%, making rigid bronchoscopy impossible. Subsequently, a size-8 endotracheal tube was placed successfully, and the stent could be retrieved via bronchoscopy over (and with) the tube, after which the patient was reintubated. Finally, a tracheostomy (Shiley™ Tracheostomy Tube With Cuff, size 8) was placed, and fiberoptic control showed a patent airway. The patient was readmitted to the ICU and was quickly weaned off the ventilator. Pathology was positive for rhabdomyosarcoma, without an indication for systemic therapy. Extensive surgery (laryngectomy, esophagectomy) was suggested, but the patient refused, and palliative care was started. Conclusion: Through meticulous planning by an interdisciplinary team, we demonstrated a successful airway management approach in this complicated case of critical airway compression secondary to a rare rhabdomyosarcoma, complicated by tracheal stent migration. Besides presenting our thoughts and considerations, we encourage exploring other possible approaches to this specific clinical problem.

Keywords: airway management, rhabdomyosarcoma, stent displacement, tracheal stenosis

Procedia PDF Downloads 64
3 Interference of Polymers Addition in Wastewaters Microbial Survey: Case Study of Viral Retention in Sludges

Authors: Doriane Delafosse, Dominique Fontvieille

Abstract:

Background: Wastewater treatment plants (WWTPs) generally display significant efficacy in virus retention, yet their performance is sometimes highly variable, partly in relation to large fluctuating loads at the head of the plant and partly because of episodic dysfunctions in some treatment processes. The problem is especially sensitive when human enteric viruses, such as human Noroviruses Genogroup I or Adenoviruses, are concerned: their release downstream of a WWTP, into environments often interconnected with recreational areas, may be very harmful to human communities even at low concentrations. This points out the importance of permanent WWTP monitoring, from which internal treatment processes can be adjusted. One way to adjust primary treatments is to add coagulants and flocculants to sewage ahead of the settling tanks to improve decantation. In this work, sludge produced by three coagulants (two organic, one mineral), four flocculants (three cationic, one anionic), and their combinations were studied for their efficacy in human enteric virus retention. Sewage samples came from a WWTP in the vicinity of the laboratory. All experiments were performed three times and in triplicate in laboratory pilots, using Murine Norovirus (MNV-1), a surrogate of human Norovirus, as an internal control (spiking). Viruses were quantified by (RT-)qPCR after nucleic acid extraction from both the treated water and the sediment. Results: Low values of sludge virus retention (from 4 to 8% of the initial sewage concentration) were observed with each cationic organic flocculant added to wastewater without a coagulant. The largest part of the virus load was detected in the treated water (48 to 90%); however, the total did not counterbalance the amount of introduced virus (MNV-1). These results pertained to two types of cationic flocculants, branched and linear, and in the latter case, to two percentages of cations. Results were quite similar for the association of a linear cationic organic coagulant and an anionic flocculant, though they suggested that differences between water and sludge would sometimes be related to virus size or virus origin (autochthonous/allochthonous). FeCl₃, as a mineral coagulant associated with an anionic flocculant, significantly increased the retention of both auto- and allochthonous viruses in the sediments (15 to 34%). Accordingly, the virus load in the treated water was lower (14 to 48%), but the total still did not reach the amount of introduced virus (MNV-1). It also appeared that virus retrieval from a bare 0.1 M NaCl suspension varied rather strongly according to the FeCl₃ concentration, suggesting an inhibiting effect on the molecular analysis used to detect the virus. Finally, no viruses were detected in either phase (sediment and water) with the combination of a branched cationic coagulant and a linear anionic flocculant, which was later demonstrated to be, here also, an effect of the polymers on the molecular virus-detection analysis. Conclusions: The combination of FeCl₃ and an anionic flocculant gave the highest performance in the decantation-based virus removal process. However, large unbalanced values were observed in the spiking experiments, suggesting that polymers place additional obstacles in the way of both the elution buffer and the lysis buffer reaching the virus. The situation was probably even worse with autochthonous viruses already embedded in the sewage's particulate matter. Polymers and FeCl₃ also appeared to interfere with some steps of the molecular analyses. More attention should be paid to such impediments wherever chemical additives are considered for use in enhancing WWTP processes. Acknowledgments: This research was supported by the ABIOLAB laboratory (Montbonnot Saint-Martin, France) and by the ASPOSAN association. Field experiments were possible thanks to the Grand Chambéry WWTP authorities (Chambéry, France).
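
The unbalanced spiking budgets discussed above boil down to a simple mass balance. The toy calculation below, with made-up genome-copy numbers, shows how the recovered fractions in each phase are compared against the spiked input; a large unaccounted fraction points to extraction losses or assay inhibition.

```python
# Toy mass balance for a spiking experiment: compare genome copies
# recovered in sludge and treated water against the spiked input.
# All numbers are illustrative, not the study's data.
spiked_copies = 1.0e7          # MNV-1 genome copies added to the sample

recovered = {
    "sludge": 3.4e6,           # copies quantified by RT-qPCR
    "treated_water": 4.8e6,
}

total = sum(recovered.values())
for phase, copies in recovered.items():
    print(f"{phase}: {copies / spiked_copies:.0%} of spike")
print(f"unaccounted for: {(spiked_copies - total) / spiked_copies:.0%}")
# A large unaccounted fraction suggests losses during elution/extraction
# or inhibition of the molecular assay, e.g., by polymers or FeCl3.
```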

Keywords: flocculants-coagulants, polymers, enteric viruses, wastewater sedimentation treatment plant

Procedia PDF Downloads 94
2 Advancing Dialysis Care Access And Health Information Management: A Blueprint For Nairobi Hospital

Authors: Kimberly Winnie Achieng Otieno

Abstract:

Nairobi Hospital plays a pivotal role in healthcare provision in East and Central Africa, yet it faces challenges in providing accessible dialysis care and managing health information efficiently. This paper explores strategic interventions to enhance dialysis care access and streamline health information management, fostering an integrated and patient-centered healthcare system. Challenges at Nairobi Hospital: The Nairobi Hospital currently grapples with insufficient dialysis machines, resulting in extended turnaround times between dialysis sessions for patients. This issue stems from both staffing bottlenecks and infrastructural limitations, given the growing demand for renal care services. Paper-based records and fragmented information systems hinder the hospital’s ability to manage health data effectively, and the lack of integration between the hospital's systems and other facilities poses challenges that jeopardize patients' access to care and hinder collaborative efforts within the healthcare network. Investment in expanding Nairobi Hospital's dialysis facilities into communities is crucial, given the high number of new cases of chronic kidney disease. Setting up satellite clinics closer to people who live in areas far from the main hospital will ensure better access. This includes acquiring physical space within the greater Nairobi region and incorporating mobile dialysis units to reach underserved areas. By decentralizing services, Nairobi Hospital can extend its reach and cater to a larger patient population. Community Outreach and Education: Implementing educational programs on kidney health within local communities is vital for early detection and prevention. Collaborating with local leaders and organizations can establish a proactive approach to renal health, hence reducing the demand for acute dialysis interventions; the hospital can amplify this effort by expanding its corporate social responsibility outreach program. Increasing the hospital’s footprint would also require an equal ramp-up of staff recruitment. Support for continuous training programs will ensure that healthcare providers stay abreast of evolving practices, contributing to improved patient outcomes and service quality. Streamlining Health Information Management: Fully embracing a shift to 100% Electronic Health Records (EHRs) is a transformative step toward efficient health information management. Customizing these systems to Nairobi Hospital’s specific needs allows for seamless data recording, retrieval, and sharing among healthcare professionals, and will help the hospital guarantee a continuum of care for patients transferring from other facilities. A full transition to digital records will also pose its own security threats. Ensuring robust security measures protects patient data and builds trust; adherence to healthcare data privacy regulations is non-negotiable, and a comprehensive strategy for encryption, access controls, and regular audits should be implemented. Integrating systems to enable interoperability with other healthcare providers facilitates a cohesive healthcare network. Shared information promotes a holistic understanding of patients’ medical history, minimizing redundancies and enhancing overall care quality. Implementation Strategies: To manage the transition to community-based care and EHRs effectively, a phased implementation approach is recommended. Prioritizing dialysis care improvements at the local level in the initial stages allows the hospital to address immediate patient needs, followed by the integration of health information management changes. Engaging hospital staff, patients, and local communities is paramount. Collaboration with government agencies, non-governmental organizations (NGOs), and international partners enhances support and resources for successful implementation. Conclusion: By strategically enhancing dialysis care access and streamlining health information management, Nairobi Hospital can strengthen its position as a leading healthcare institution in both East and Central Africa. This comprehensive approach aligns with the hospital’s commitment to providing high-quality, accessible, and patient-centered care in the evolving landscape of healthcare delivery.

Keywords: Africa, urology, dialysis, healthcare

Procedia PDF Downloads 18
1 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Cybersecurity professionals have long been embroiled in a digital arms race, confronting increasingly sophisticated threats with innovative solutions. The field of cybersecurity is in an unending race against malicious adversaries, and as threats evolve in complexity, the tools used to defend against them need to advance even faster. Burdened with a vast arsenal of tools and an expansive scope of threat intelligence, analysts frequently navigate a complex web, trying to discern patterns amidst information overload. Herein lies the potential of Retrieval Augmented Generation (RAG). By combining the capabilities of Large Language Models (LLMs) with a generative AI facet, RAG brings to the table an unparalleled ability for real-time cross-referencing, bridging the gap between raw data and actionable insights. Imagine an analyst named Sarah working at a global Fortune 500 company. Every day, Sarah navigates a maze of diverse knowledge bases, real-time threat intelligence, and her company's vast proprietary data, from network specifics to intricate technical blueprints. One day, she is challenged by a potential breach through a personal device due to the company's global "Bring Your Own Device" policy. With the clock ticking, Sarah has mere minutes to trace the malware's origin, all while considering complex regional regulations. As she races against the benchmark of Mean Time To Resolution (MTTR), she wonders: could "Cozy Bear", with its notorious malware tactic HAMMERTOSS, be behind this? Balancing policy intricacies, global network considerations, and ever-emerging cyber threats, Sarah's role epitomizes the intense challenges faced by today's cybersecurity analysts. While analysts grapple with this array of intricate, time-sensitive challenges, the necessity for precision and efficiency is key, and RAG technology, a cutting-edge advancement in Gen AI, is a promising solution. Designed to assimilate diverse data sources such as cyber advisory notices, phishing email sentiment, secure and insecure code examples, information security policy documentation, and the MITRE ATT&CK framework, RAG equips analysts with real-time querying capabilities through a vector database and a concise, cross-referenced response from a Gen AI model. Traditional relational databases often necessitate a tedious process of filtering through numerous entries. Now, with the synergy of vector databases and Gen AI models, analysts can rapidly access contextually and semantically similar data points. This augmented approach equips analysts with a comprehensive understanding of the prevailing cyber threats, elevating the robustness of cybersecurity defenses while upskilling the analyst and the team. Vector databases underpin the knowledge translation in Gen AI: they bridge the gap between raw data and meaningful insights, ensuring that analysts are equipped with comprehensive and relevant information. This capability of the RAG framework, with its impressive depth and precision, finds application across a broad spectrum of cybersecurity challenges; several use cases make its potential particularly evident. Phishing Email Sentiment Analysis: Phishing remains a predominant vector for cybersecurity breaches. Leveraging RAG's capabilities, analysts can not only assess the potential malevolence of an email but also understand the context behind it. By cross-referencing patterns from varied data sources in real time, the detection process evolves from a mere content evaluation into a holistic understanding of attacker tactics, behaviors, and evolving profiles, allowing the identification of nuanced phishing strategies that might otherwise go undetected. Insecure Code Analysis: Software vulnerabilities form a critical entry point for cyber adversaries. With RAG, the process of code evaluation undergoes a transformation. Instead of manual code reviews, the system pulls insights from vector databases and historical code snippets marked as insecure, enabling the detection of vulnerabilities based on historical patterns, emerging threat vectors, and even predictive threat modeling. This ensures that even the most obfuscated or embedded vulnerabilities are identified, and corrective measures can be promptly implemented. Vulnerability and Upskill Advisory: In the fast-paced world of cybersecurity, staying updated is paramount. Through RAG's capabilities, analysts are not only made aware of real-time vulnerabilities but are also guided on the skills and tools needed to combat them. By dynamically sourcing data from vulnerability advisories, news on advanced persistent threats, and defensive tactics, RAG ensures that analysts are not only reactive to threats but are also proactively upskilled, thereby bolstering their defense mechanisms. Information Security Policies for Compliance Teams: Compliance remains at the heart of many organizational cybersecurity strategies. However, with ever-shifting regulatory landscapes, staying compliant becomes a moving target. RAG's ability to source real-time data ensures that compliance teams always have access to the latest policy changes, guidelines, and best practices. This not only facilitates adherence to current standards but also anticipates future shifts, assists with audits, and ensures that organizations remain ahead of the compliance curve. Fusing a RAG architecture with platforms like Slack amplifies its practical utility. Slack, known for its real-time communication prowess, seamlessly evolves into more than just a messaging platform in this context. Cybersecurity analysts can pose intricate queries within Slack and, almost instantaneously, receive comprehensive feedback powered by the harmonious interplay of RAG and Gen AI. This integration effectively transforms Slack into an AI-augmented, chatbot-like assistant for cybersecurity professionals, always ready to provide informed insights on demand, making it an indispensable ally in the ever-evolving cyber battlefield. Navigating the vast landscape of cybersecurity, analysts often encounter unfamiliar terminologies and techniques; they require tools that not only detect threats or inform them of threats, such as CISA (U.S. Cybersecurity and Infrastructure Security Agency) advisories, but also interpret and communicate them effectively. Consider a junior cybersecurity analyst named Alex, who comes across the term "Kerberoasting" while reviewing a network log. Unfamiliar with its intricacies, Alex turns to Slack to pose a query: "chat, explain what Kerberoasting is, using CISA." Almost instantaneously, Slack, powered by the harmonious interplay of RAG and Gen AI, provides a detailed response, cross-referencing a recent cyber advisory on the technique. It explains how attackers can exploit the Kerberos Ticket Granting Service to decipher service account passwords, potentially compromising a network. In this dynamic realm of cybersecurity, the blend of RAG and Generative AI represents more than just a technological leap. It embodies a paradigm shift, promising a future where human expertise and AI-driven precision join forces. As cyber threats continue their relentless advance, this synergy ensures that defenders are equipped with an arsenal that is not just reactive but also profoundly insightful. No longer should analysts be submerged in a deluge of data without direction. Instead, they should be empowered to discern, act, and preempt with unparalleled clarity and confidence. By harmoniously intertwining human discernment with AI capabilities, we can chart a path toward a future where cybersecurity is not just about defense but about achieving a strategic advantage, paving the way for a safer, more informed, and more secure digital horizon.
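
A compact sketch of the retrieval loop described above appears below: snippets are embedded into vectors, the query is matched by cosine similarity, and the top hits are stitched into a prompt for a generator. The embedding function and the generation call are hypothetical stand-ins (a deterministic random projection keyed on the text, and a commented-out LLM call); a real deployment would use a trained sentence encoder and an actual model endpoint.

```python
# Minimal RAG loop: embed snippets, retrieve by cosine similarity,
# and assemble a grounded prompt. The embedder is a toy stand-in.
import hashlib

import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real sentence encoder: a deterministic random
    projection keyed on the text. Rankings here are not meaningful."""
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % 2**32
    v = np.random.default_rng(seed).normal(size=384)
    return v / np.linalg.norm(v)

corpus = [
    "Advisory: Kerberoasting abuses Kerberos service tickets to crack "
    "service account passwords offline.",
    "Phishing campaigns increasingly spoof HR payroll portals.",
    "HAMMERTOSS hides command-and-control traffic in social media posts.",
]
index = np.stack([embed(doc) for doc in corpus])  # toy vector database

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)  # cosine similarity of unit vectors
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

query = "explain what Kerberoasting is"
context = retrieve(query)
prompt = ("Answer using only this context:\n"
          + "\n".join(context) + f"\n\nQ: {query}")
# response = some_llm.generate(prompt)  # hypothetical generation call
print(prompt)
```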

Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies

Procedia PDF Downloads 47