Search results for: retrieval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 320


50 Aerosol Direct Radiative Forcing Over the Indian Subcontinent: A Comparative Analysis from the Satellite Observation and Radiative Transfer Model

Authors: Shreya Srivastava, Sagnik Dey

Abstract:

Aerosol direct radiative forcing (ADRF) refers to the alteration of the Earth's energy balance by the scattering and absorption of solar radiation by aerosol particles. India experiences substantial ADRF due to high aerosol loading from various sources. The radiative impact of these aerosols depends on their physical characteristics (such as size, shape, and composition) and their atmospheric distribution. Quantifying ADRF is crucial for understanding the impact of aerosols on the regional climate and the Earth's radiative budget. In this study, we used radiation data from the Clouds and the Earth's Radiant Energy System (CERES, spatial resolution 1°×1°) for 22 years (2000-2021) over the Indian subcontinent. Except for a few locations, the short-wave ADRF exhibits aerosol cooling at the top of the atmosphere (TOA), with values ranging from +2.5 W/m² to -22.5 W/m². Cooling due to aerosols is more pronounced in the absence of clouds. Higher negative ADRF is observed over the Indo-Gangetic Plain (IGP), an aerosol hotspot. Aerosol forcing efficiency (AFE) shows a decreasing seasonal trend in winter (DJF) over the entire study region and an increasing trend over the IGP and western south India during the post-monsoon season (SON) under clear-sky conditions. Analysing atmospheric heating and AOD trends, we found that the change in atmospheric heating is governed not only by aerosol loading but also by aerosol composition and/or the aerosol vertical profile. We used Multi-angle Imaging SpectroRadiometer (MISR) Level-2 Version 23 aerosol products to examine aerosol composition. MISR incorporates 74 aerosol mixtures in its retrieval algorithm based on size, shape, and absorbing properties. This aerosol mixture information was used to analyse long-term changes in aerosol composition and the dominant aerosol species corresponding to the aerosol forcing values. 
Further, the ADRF derived from this method is compared with around 35 studies across India in which a plane-parallel radiative transfer model was used, with model inputs taken from OPAC (Optical Properties of Aerosols and Clouds) on the basis of only limited aerosol parameter measurements. The results show a large overestimation of TOA warming by the latter (i.e., the model-based method).
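For context, the quantities compared here have simple definitional forms: TOA direct forcing is the difference between the net flux with aerosols and the aerosol-free net flux, and forcing efficiency normalizes it by aerosol optical depth. A minimal sketch with hypothetical flux values (not CERES data):

```python
def toa_direct_forcing(net_flux_with_aerosol: float, net_flux_clean: float) -> float:
    """ADRF at TOA in W/m^2; negative values indicate aerosol cooling."""
    return net_flux_with_aerosol - net_flux_clean

def forcing_efficiency(adrf: float, aod: float) -> float:
    """AFE: forcing per unit aerosol optical depth (W/m^2 per unit AOD)."""
    return adrf / aod

# Hypothetical clear-sky values for a single grid cell:
adrf = toa_direct_forcing(net_flux_with_aerosol=232.5, net_flux_clean=240.0)
afe = forcing_efficiency(adrf, aod=0.5)
print(adrf, afe)  # -7.5 -15.0
```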

Keywords: aerosol radiative forcing (ARF), aerosol composition, MISR, CERES, SBDART

Procedia PDF Downloads 24
49 A Review of Type 2 Diabetes and Diabetes-Related Cardiovascular Disease in Zambia

Authors: Mwenya Mubanga, Sula Mazimba

Abstract:

Background: In Zambia, much of the focus on nutrition and health has been on reducing micronutrient deficiencies, wasting, and underweight malnutrition rather than on the rising global trends in obesity and type 2 diabetes. The aim of this review was to identify and collate studies on the prevalence of obesity, diabetes, and diabetes-related cardiovascular disease conducted in Zambia, to summarize their findings, and to identify areas that need further research. Methods: The Medical Literature Analysis and Retrieval System (MEDLINE) database was searched for peer-reviewed articles on the prevalence of, and factors associated with, obesity, type 2 diabetes, and diabetes-related cardiovascular disease among Zambian residents using a combination of search terms. The period of the search was from 1 January 2000 to 31 December 2016. We expanded the search terms to include all possible synonyms and spellings obtained in the search strategy. Additionally, we performed a manual search for other articles and the references of peer-reviewed articles. Results: In Zambia, the current prevalence of obesity and type 2 diabetes is estimated at 13-16% and 2.0-3.0%, respectively. Risk factors such as the adoption of Western dietary habits, the social stigmatization associated with rapid weight loss due to tuberculosis and/or the human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS), and rapid urbanization have all been blamed for fueling the increased risk of obesity and type 2 diabetes. However, unlike traditional Western populations, those with no formal education were less likely to be obese than those who had attained secondary- or tertiary-level education. Approximately 30% of those surveyed were unaware of their diabetes diagnosis, and more than 60% were not on treatment despite a known diabetic status. 
Socio-demographic factors such as older age, female sex, urban dwelling, lack of tobacco use, and marital status were associated with an increased risk of obesity, impaired glucose tolerance, and type 2 diabetes. We were unable to identify studies that specifically examined diabetes-related cardiovascular disease. Conclusion: Although the prevalence of obesity and type 2 diabetes in Zambia appears low, more representative studies focusing on parts of the country outside the main industrial zone need to be conducted. Research on diabetes-related cardiovascular disease is also needed. National surveillance, monitoring, and evaluation of all non-communicable diseases need to be prioritized, and policies that address underweight, obesity, and type 2 diabetes developed.

Keywords: type 2 diabetes, Zambia, obesity, cardiovascular disease

Procedia PDF Downloads 211
48 Ubuntombi (Virginity) Among the Zulus: An Exploration of a Cultural Identity and Difference from a Postcolonial Feminist Perspective

Authors: Goodness Thandi Ntuli

Abstract:

The cultural practice of ubuntombi (virginity) among the Zulus is not easily understood from outside its cultural context. The empirical study, conducted through interviews and focus group discussions about the retrieval of ubuntombi as a cultural practice within the Zulu cultural community, indicated that a particular cultural identity and difference can be unearthed from this practice. Explored from a postcolonial feminist perspective, this cultural identity and difference is discerned in the way a Zulu young woman known as intombi (virgin) exercises power and authority over her own sexuality. Taking full control of her own sexuality from the cultural viewpoint enables her not only to exercise her uniqueness in the midst of multiculturalism and pluralism but also to assert her cultural identity of being intombi. The assertion of the Zulu young woman's cultural identity not only empowers her to stand on her life principles but also to lift herself from the margins of the patriarchal society that would otherwise have kept her on the periphery. She views this as an opportunity for self-development and enhancement through educational opportunities that will enable her to secure a future with financial independence. The underlying belief is that once she has been educationally successful, she will secure a better job that will enable her to be self-sufficient and not rely on any male provision for her sustenance. She thus stands a better chance of not being victimized by the social patriarchal influences that generally keep women at the bottom of the socio-economic and political ladder. Consequently, ubuntombi (virginity) as a Zulu heritage and cultural identity becomes instrumental in the empowerment of the young women who choose this cultural practice as their adopted lifestyle. 
In addition, it is a kind of self-empowerment with the intrinsic motivation, and the innate ability, to resist any distraction from an individual's set goals. It is thus concluded that this kind of motivation is a rare characteristic of achievers in life. Once these young women adhere to their specified life principles, nothing can stop them from achieving the dreams of their hearts. This includes socio-economic autonomy that will ensure their liberation and emancipation as women in the midst of the social and patriarchal challenges that militate against them in the hostile communities where they live. Another hidden achievement would be to turn around the perception of being viewed as the "other"; instead, they come to be viewed differently. Their difference lies in turning an archaic cultural practice into a modern tool of self-development and enhancement in contemporary society.

Keywords: cultural, difference, identity, postcolonial, ubuntombi, zulus

Procedia PDF Downloads 153
47 Barriers and Facilitators for Telehealth Use during Cervical Cancer Screening and Care: A Literature Review

Authors: Reuben Mugisha, Stella Bakibinga

Abstract:

The cervical cancer burden is a global threat, but more so in low-income settings, where more than 85% of mortality cases occur due to a lack of sufficient screening programs and, consequently, a lack of early detection of cancer and precancerous cells among women. Studies show that 3% to 35% of deaths could have been avoided through early screening, depending on prognosis, disease progression, and environmental and lifestyle factors. In this study, a systematic literature review is undertaken to understand the potential barriers and facilitators documented in previous studies that focus on the application of telehealth in cervical cancer screening programs for early detection of cancer and precancerous cells. The study informs future studies, especially those from low-income settings, about lessons learned from previous work and how best to prepare when planning to implement telehealth for cervical cancer screening. It further identifies the knowledge gaps in the research area and makes recommendations. Using a specified selection criterion, 15 articles are analyzed based on the study's aim, the theory or conceptual framework used, the method applied, the findings, and the conclusion. Results are then tabulated and presented thematically to better inform readers about emerging facts on barriers and facilitators to telehealth implementation, as documented in the reviewed articles, and how they lead to evidence-informed conclusions relevant to telehealth implementation for cervical cancer screening. Preliminary findings of this study underscore that the use of a low-cost mobile colposcope is an appealing option in cervical cancer screening, particularly when coupled with onsite treatment of suspicious lesions. 
These tools relay cervical images to online databases for storage and retrieval, and they permit the integration of connected devices at the point of care to rapidly collect clinical data for further analysis of the prevalence of cervical dysplasia and cervical cancer. The results, however, reveal population sensitization prior to the use of mobile colposcopies among patients, standardization of mobile colposcopy programs across screening partners, sufficient logistics and good connectivity, and experienced experts to review image cases at the point of care as important facilitators of the implementation of the mobile colposcope as a telehealth cervical cancer screening mechanism.

Keywords: cervical cancer screening, digital technology, hand-held colposcopy, knowledge-sharing

Procedia PDF Downloads 195
46 A Study Investigating Word Association Behaviour in People with Acquired Language and Communication Disorders

Authors: Angela Maria Fenu

Abstract:

The aim of this study was to better characterize the nature of word association responses in people with aphasia. The participants selected for the experimental group were 4 individuals with mild Broca's aphasia. The control group consisted of 51 cognitively intact age- and gender-matched individuals. The participants were asked to perform a word association task in which they had to say the first word they thought of when hearing each cue. The cue words (n = 16) were the Italian translations of the set of English cue words from a published study. The participants in the experimental group were administered the word association test every two weeks for a period of two months while they received speech-language therapy. A combination of analytical approaches was used to measure the data. To analyse the different patterns of word association responses in the two groups, the nature of the relationship between the cue and the response was examined: responses were divided into five categories of association. To investigate the similarity between aphasic and non-aphasic subjects, the stereotypy of responses was examined. While certain stimulus words (nouns, adjectives) elicited responses from Broca's aphasics that tended to resemble those made by non-aphasic subjects, others (adverbs, verbs) tended to elicit responses different from those given by normal subjects. This suggests that some mechanisms underlying certain types of associations are degraded in aphasic individuals, while others display little evidence of disruption. The high number of paradigmatic associations given in response to a noun or an adjective might imply that the mechanisms, largely semantic, underlying paradigmatic associations are relatively preserved in Broca's aphasia, but it might also mean that some words are more easily processed depending on their grammatical class (nouns, adjectives). The most significant variation was noticed when the grammatical class of the cue word was an adverb. 
Unlike the normal individuals, the experimental subjects gave mostly idiosyncratic associations, which are often produced when the attempt to give a paradigmatic response fails. In turn, the failure to retrieve paradigmatic responses when the cue is an adverb might suggest that Broca's aphasics are more sensitive to this grammatical class. The findings from this study suggest that research on word associations in people with aphasia can yield important data concerning the specific lexical retrieval impairments that characterize the different types of aphasia and the treatments that might positively influence the kinds of word association responses affected by language disruption.

Keywords: aphasia therapy, clinical linguistics, word-association behaviour, mental lexicon

Procedia PDF Downloads 48
45 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation

Authors: Nawras Kurzom, Avi Mendelsohn

Abstract:

The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonics, can elicit feelings of tension, which can, in turn, affect memory formation of concomitant information. The aim of the presented studies was to explore how forming declarative memory is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory of declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) with different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance as compared to silence. 
We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory for those who were less sensitive to the perception of musical tension. Musical tension exerts complex interactions with perception, emotional responses, and cognitive performance on individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and the diverse effects that music has on ongoing processing of declarative information.

Keywords: musical tension, declarative memory, learning and memory, musical perception

Procedia PDF Downloads 69
44 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. 
We study the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
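The linear-precoding idea can be illustrated with the well-known polynomial-code construction for W = XY (a generic NumPy sketch, not the PSGPD/SGPD scheme proposed in this work): X is split into m = 2 row blocks and Y into n = 2 column blocks, each worker multiplies one encoded block pair, and any mn = 4 of the 6 workers suffice for the master to interpolate W, so 2 stragglers are tolerated.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
X = rng.standard_normal((4, p))
Y = rng.standard_normal((p, 4))

# Split X into m = 2 row blocks and Y into n = 2 column blocks.
X0, X1 = X[:2], X[2:]
Y0, Y1 = Y[:, :2], Y[:, 2:]

# Encode for 6 workers at distinct evaluation points. Worker i computes
# (X0 + a*X1)(Y0 + a^2*Y1), i.e. the degree-3 product polynomial
# X0Y0 + a*X1Y0 + a^2*X0Y1 + a^3*X1Y1 evaluated at a = points[i].
points = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
worker_results = {}
for i, a in enumerate(points):
    Xe = X0 + a * X1
    Ye = Y0 + (a ** 2) * Y1
    worker_results[i] = Xe @ Ye

# Pretend workers 1 and 4 straggle; decode from any mn = 4 finished workers.
done = [0, 2, 3, 5]
V = np.vander(points[done], 4, increasing=True)          # Vandermonde, degrees 0..3
Z = np.stack([worker_results[i].ravel() for i in done])  # (4 evaluations, 4 entries)
coeffs = np.linalg.solve(V, Z)                           # interpolate coefficients

C = [c.reshape(2, 2) for c in coeffs]  # [X0Y0, X1Y0, X0Y1, X1Y1]
W = np.block([[C[0], C[2]], [C[1], C[3]]])
assert np.allclose(W, X @ Y)
```

The recovery threshold mn = 4 here is exactly the number of coefficients of the product polynomial; adding more workers only adds straggler tolerance, not decoding cost.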

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 90
43 Evaluation of Diagnostic Values of Culture, Rapid Urease Test, and Histopathology in the Diagnosis of Helicobacter pylori Infection and in vitro Effects of Various Antimicrobials against Helicobacter pylori

Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor

Abstract:

Aim: The aim of this study was to investigate the presence of Helicobacter pylori (H. pylori) infection by culture, histology, and RUT (Rapid Urease Test) in gastric antrum biopsy samples taken from patients presenting with dyspeptic complaints, and to determine the resistance rates of the H. pylori strains to amoxicillin, clarithromycin, levofloxacin, and metronidazole by E-test. Material and Methods: A total of 278 patients admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and July 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in biopsy samples was investigated by culture (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (Giemsa, Hematoxylin and Eosin staining), and RUT (CLOtest, Kimberly-Clark, USA). Antimicrobial resistance of the isolates to amoxicillin, clarithromycin, levofloxacin, and metronidazole was determined by the E-test method (bioMerieux, France). As the gold standard in the diagnosis of H. pylori, a sample was accepted as positive if culture alone was positive or if both histology and RUT were positive. Sensitivity and specificity for histology and RUT were calculated by taking culture as the gold standard. Sensitivity and specificity for culture were also calculated by taking the co-positivity of both histology and RUT as the gold standard. Results: H. pylori was detected in 140 of the 278 patients by culture and in 174 of the 278 patients by histology. H. pylori positivity was also found in 191 patients by RUT. According to the gold-standard criteria, a false negative result was found in 39 cases by culture, 17 cases by histology, and 8 cases by RUT. 
The sensitivity and specificity of the culture, histology, and RUT methods were 76.5% and 88.3%, 87.8% and 63%, and 94.2% and 57.2%, respectively. Antibiotic resistance was investigated by E-test in the 140 H. pylori strains isolated from culture. Resistance to amoxicillin, clarithromycin, levofloxacin, and metronidazole was detected in 9 (6.4%), 22 (15.7%), 17 (12.1%), and 57 (40.7%) of the strains, respectively. Conclusion: In our study, of the culture, histology, and RUT methods, RUT was found to be the most sensitive and culture the most specific. Although the specificity of the culture method was high, its sensitivity was quite low compared with the other methods. The low sensitivity of H. pylori culture may be caused by factors that affect the chances of direct isolation, such as spoiled bacteria, the difficult-to-culture nature of the microorganism, clinical sample retrieval, and transport conditions.
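The sensitivity and specificity figures follow from a standard 2×2 confusion matrix; a minimal sketch for the culture-vs-(histology+RUT) comparison (the TP/FP/TN counts below are back-calculated to be consistent with the reported 140 culture positives, 39 false negatives, and 76.5%/88.3%, so they are an approximate reconstruction, not the study's raw data):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Culture evaluated against the histology+RUT gold standard (reconstructed counts):
sens, spec = sensitivity_specificity(tp=127, fn=39, tn=99, fp=13)
```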

Keywords: antimicrobial resistance, culture, histology, H. pylori, RUT

Procedia PDF Downloads 140
42 Electronic Waste Analysis and Characterization Study: Management Input for Highly Urbanized Cities

Authors: Jilbert Novelero, Oliver Mariano

Abstract:

In a world where technological evolution and competition to create innovative products are at their peak, problems of electronic waste (E-waste) are becoming a global concern. E-waste is any electrical or electronic device that has reached the end of its useful life. The major issues are the volume of E-waste and the raw materials used in crafting it, which are non-biodegradable and contain hazardous substances that are toxic to human health and the environment. The objective of this study was to gather baseline data on the composition of E-waste in the solid waste stream and to determine the top 5 E-waste categories in a highly urbanized city. Recommendations for managing and reducing these wastes were provided, which may serve as a guide for acceptance and implementation in the locality. Pasig City was the chosen beneficiary of the research output, and through the collaboration of the City Government of Pasig and its Solid Waste Management Office (SWMO), the researcher successfully conducted the Electronic Waste Analysis and Characterization Study (E-WACS) to achieve these objectives. E-WACS, conducted in April 2019, showed that E-waste ranked 4th, comprising 10.39% of the overall solid waste volume. Of the 345,127.24 kg of domestic waste generated daily in the city, E-waste accounts for 35,858.72 kg. Moreover, average E-waste generation was determined to be 40 grams per person per day. The top 5 E-waste categories were then classified after the analysis. The category that ranked first was office and telecommunications equipment, which accounted for 63.18% of the total generated E-waste. Second was the household appliances category, with 21.13%. Third was the lighting devices category, with 8.17%. 
Fourth was the consumer electronics and batteries category, at 5.97%, and fifth was the wires and cables category, comprising 1.41% of the average generated E-waste samples. One of the recommendations provided in this research is the implementation of the Pasig City Waste Advantage Card. The card can be used as a privilege card, and earned points can be converted into services such as haircuts, massages, dental services, and medical check-ups. Another recommendation is for the LGU to open a dialogue with technology and electronics manufacturers and distributors, both international and local, to plan the retrieval and disposal of E-waste in accordance with the Extended Producer Responsibility (EPR) policy, under which producers are given significant responsibility for the treatment and disposal of post-consumer products.
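As a quick sanity check, the reported 10.39% E-waste share follows directly from the daily totals quoted in the abstract above (a trivial sketch using those figures):

```python
total_daily_waste_kg = 345_127.24  # total daily domestic waste generation, Pasig City
e_waste_kg = 35_858.72             # daily E-waste portion of that stream

share_pct = e_waste_kg / total_daily_waste_kg * 100
print(f"{share_pct:.2f}%")  # 10.39%
```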

Keywords: E-waste, E-WACS, E-waste characterization, electronic waste, electronic waste analysis

Procedia PDF Downloads 93
41 Virtual Team Performance: A Transactive Memory System Perspective

Authors: Belbaly Nassim

Abstract:

Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the exposure experienced by companies when their employees encounter globalization and decentralization pressures, which forces them to monitor VT performance. Hence, organizations are regularly limited by misalignment between the behavioral capabilities of the team's dispersed competences and its knowledge capabilities, by how trust issues interplay with and influence these VT dimensions, and by the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs manage their dispersed expertise, skills, and knowledge efficiently to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. TMS can be understood as a composition of both a structural component, residing in individual knowledge, and a set of communication processes among individuals. The individual knowledge is shared while being retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. TMS uses the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. 
In fact, VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing their ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity. This study also gives researchers insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.

Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination

Procedia PDF Downloads 137
40 Expression of CASK Antibody in Non-Mucinous Colorectal Adenocarcinoma and Its Relation to Clinicopathological Prognostic Factors

Authors: Reham H. Soliman, Noha Noufal, Howayda AbdelAal

Abstract:

Calcium/calmodulin-dependent serine protein kinase (CASK) belongs to the membrane-associated guanylate kinase (MAGUK) family and has been proposed as a mediator of cell-cell adhesion and proliferation, which can contribute to tumorigenesis. CASK has been reported as a good prognostic factor in some tumor subtypes and as a poor prognostic marker in others. To our knowledge, no sufficient evidence of CASK's role in colorectal cancer is available. The aim of this study was to evaluate the expression of CASK in non-mucinous colorectal adenocarcinoma and in adenomatous polyps as precursor lesions, and to assess its prognostic significance. The study included 42 cases of conventional colorectal adenocarcinoma and 15 biopsies of adenomatous polyps with variable degrees of dysplasia. They were reviewed for clinicopathological prognostic factors and stained with a mouse monoclonal CASK antibody using heat-induced antigen-retrieval immunohistochemical techniques. The results showed that the CASK protein was significantly overexpressed (p < 0.05) in CRC compared with the adenoma samples. The CASK protein was overexpressed in the majority of CRC samples, with 85.7% of cases showing moderate to strong expression, while 46.7% of adenomas were positive. CASK overexpression was significantly correlated with both TNM stage and grade of differentiation (p < 0.05). Expression was significantly higher in tumor samples of early stages (I/II) than of advanced stages (III/IV), and in low-grade (59.5%) rather than high-grade (40.5%) tumors. Another interesting finding emerged in the adenoma group, where stronger staining intensity was observed in samples with high-grade dysplasia (33.3%) than in those of lower grades (13.3%). In conclusion, this study shows significant overexpression of the CASK protein in CRC as well as in adenomas with high-grade dysplasia. 
This indicates that CASK is involved in the process of carcinogenesis and functions as a potential trigger of the adenoma-carcinoma cascade. CASK was significantly overexpressed in early-stage and low-grade tumors rather than in tumors of advanced stage and higher histological grade, which suggests that the CASK protein is a good prognostic factor. We propose that CASK affects CRC in two different ways, both derived from its physiology: as a member of the MAGUK family, it can stimulate proliferation; and through its cell-membrane localization and its role as a mediator of cell-cell adhesion, it might contribute to tumor confinement and localization.

Keywords: CASK, colorectal cancer, overexpression, prognosis

Procedia PDF Downloads 255
39 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami

Authors: Alejandra G. Barbón, José Vila, Dania Vazquez

Abstract:

This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographical description through the incorporation of visual interactive designs aimed at enriching the library users’ experience. In an era of heightened awareness about the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an exciting initiative to revitalize the study of Artists’ Books collections by employing digital visual representations to decolonize bibliographic records of some of the most unique materials and foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, trending bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases a groundbreaking endeavor at the University of Miami Libraries that seeks not only to enhance bibliographic practices but also to confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description. 
By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University’s dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.

Keywords: decolonizing bibliographic cataloging frameworks, digital visualizations information architecture platforms, collaborative curation and inclusivity for records management, engagement and accessibility increasing interaction design and user experience

Procedia PDF Downloads 46
38 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid network supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid co-incident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storage of heat within the fabric of dwellings for use some hours, or days, later provides a route to complete decoupling of demand from supply and facilitates the greatly increased use of renewable energy generation in a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires careful evaluation of many competing thermal, physical, and practical considerations such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rate, storage media, space allocation, etc. In this paper, the authors report investigations of thermal storage in building fabric using concrete material and present an evaluation of several factors that impact upon performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties, together with an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m2, compared to 22 W/m2 for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity, relative to medium and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully. 
Layers of 25 mm and 50 mm thickness can be charged in 2 hours, or less, facilitating a fast response that, when aggregated across multiple dwellings, could provide a significant and valuable reduction in demand for grid-generated electricity in expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
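As a rough, back-of-the-envelope sketch of the storage capacities discussed above (using assumed, typical dense-concrete property values rather than figures from the study), the sensible heat stored in a concrete layer follows Q = ρ·c·V·ΔT:

```python
def stored_heat_kwh(thickness_m, area_m2, delta_t_k, rho=2400.0, c_p=880.0):
    """Sensible heat stored in a concrete layer heated by delta_t_k kelvin.

    rho [kg/m3] and c_p [J/(kg K)] default to typical dense-concrete values
    (assumptions for illustration, not the paper's material data).
    """
    q_joules = rho * c_p * thickness_m * area_m2 * delta_t_k
    return q_joules / 3.6e6  # convert J -> kWh

# e.g. a 50 mm layer over a 20 m2 floor area charged by 20 K:
print(round(stored_heat_kwh(0.05, 20.0, 20.0), 1))  # prints 11.7
```

The linear dependence on thickness shows why thinner 25-50 mm layers trade some capacity for the much faster charging times reported above.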

Keywords: fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration

Procedia PDF Downloads 139
37 Reading and Writing of Biscriptal Children with and Without Reading Difficulties in Two Alphabetic Scripts

Authors: Baran Johansson

Abstract:

This PhD dissertation aimed to explore children’s writing and reading in L1 (Persian) and L2 (Swedish). It adds new perspectives to reading and writing studies of bilingual biscriptal children with and without reading and writing difficulties (RWD). The study used standardised tests to examine linguistic and cognitive skills related to word reading and writing fluency in both languages. Furthermore, all participants produced two texts (one descriptive and one narrative) in each language. The writing processes and the writing products of these children were explored using logging methodologies (Eye and Pen) for both languages. Furthermore, this study investigated how two bilingual children with RWD presented themselves through writing across their languages. To my knowledge, studies utilizing standardised tests and logging tools to investigate bilingual children’s word reading and writing fluency across two different alphabetic scripts are scarce. There have been few studies analysing how bilingual children construct meaning in their writing, and none have focused on children who write in two different alphabetic scripts or those with RWD. Therefore, some aspects of the systemic functional linguistics (SFL) perspective were employed to examine how two participants with RWD created meaning in their written texts in each language. The results revealed that children with and without RWD had higher writing fluency in all measures (e.g. text lengths, writing speed) in their L2 compared to their L1. Word reading abilities in both languages were found to influence their writing fluency. The findings also showed that bilingual children without reading difficulties performed 1 standard deviation below the mean when reading words in Persian. However, their reading performance in Swedish aligned with the expected age norms, suggesting greater efficiency in reading Swedish than in Persian. 
Furthermore, the results showed that the level of orthographic depth, consistency between graphemes and phonemes, and orthographic features can probably explain these differences across languages. The analysis of meaning-making indicated that the participants with RWD exhibited varying levels of difficulty, which influenced their knowledge and usage of writing across languages. For example, the participant with poor word recognition (PWR) presented himself similarly across genres, irrespective of the language in which he wrote. He employed the listing technique similarly across his L1 and L2. However, the participant with mixed reading difficulties (MRD) had difficulties with both transcription and text production. He produced spelling errors and frequently paused in both languages. He also struggled with word retrieval and producing coherent texts, consistent with studies of monolingual children with poor comprehension or with developmental language disorder. The results suggest that the mother tongue instruction provided to the participants has not been sufficient for them to become balanced biscriptal readers and writers in both languages. Therefore, increasing the number of hours dedicated to mother tongue instruction and motivating the children to participate in these classes could be potential strategies to address this issue.

Keywords: reading, writing, reading and writing difficulties, bilingual children, biscriptal

Procedia PDF Downloads 37
36 Efficacy of Preimplantation Genetic Screening in Women with a Spontaneous Abortion History with Euploid or Aneuploid Abortus

Authors: Jayeon Kim, Eunjung Yu, Taeki Yoon

Abstract:

Most spontaneous miscarriages are believed to be a consequence of embryo aneuploidies. Transferring euploid embryos selected by PGS is expected to decrease the miscarriage rate. Current PGS indications include advanced maternal age, recurrent pregnancy loss, and repeated implantation failure. Recently, use of PGS for healthy women without the above indications, for the purpose of improving in vitro fertilization (IVF) outcomes, is on the rise. However, the beneficial effect of PGS in this population remains controversial, especially in women with a history of no more than 2 miscarriages or miscarriage of a euploid abortus. This study aimed to investigate whether the karyotyping result of the abortus is a good indicator for preimplantation genetic screening (PGS) in the subsequent IVF cycle in women with a history of spontaneous abortion. A single-center retrospective cohort study was performed. Women who had spontaneous abortion(s) (fewer than 3) followed by dilatation and evacuation, and a subsequent IVF cycle from January 2016 to November 2016, were included. Their medical information was extracted from the charts. Clinical pregnancy was defined as the presence of a gestational sac with fetal heartbeat detected on ultrasound in week 7. Statistical analysis was performed using SPSS software. A total of 234 women were included; 121 of 234 (51.7%) underwent karyotyping of the abortus, and 113 did not have the abortus karyotyped. Embryo biopsy was performed 3 or 5 days after oocyte retrieval, followed by embryo transfer (ET) in a fresh or frozen cycle. The biopsied materials were subjected to microarray comparative genomic hybridization. The clinical pregnancy rate per ET was compared between PGS and non-PGS groups in each study group. 
Patients were grouped by two criteria: karyotype of the abortus from the previous miscarriage (unknown fetal karyotype (n=89, Group 1), euploid abortus (n=36, Group 2), or aneuploid abortus (n=67, Group 3)), and pursuit of PGS in the subsequent IVF cycle (PGS group, n=105; non-PGS group, n=87). The PGS group was significantly older and had higher numbers of retrieved oocytes and prior miscarriages than the non-PGS group. There were no differences in BMI and AMH level between the two groups. In the PGS group, the mean number of transferable (euploid) embryos was 1.3 ± 0.7, 1.5 ± 0.5, and 1.4 ± 0.5, respectively (p = 0.049). In 42 cases, ET was cancelled because all biopsied embryos turned out to be abnormal. In all three groups, clinical pregnancy rates were not statistically different between the PGS and non-PGS groups (Group 1: 48.8% vs. 52.2% (p=0.858), Group 2: 70% vs. 73.1% (p=0.730), Group 3: 42.3% vs. 46.7% (p=0.640), for PGS and non-PGS, respectively). In both groups whose previous miscarriage involved a euploid or an aneuploid abortus, the clinical pregnancy rate did not differ between IVF cycles with and without PGS. Comparing miscarriage and ongoing pregnancy rates, there were also no significant differences between the PGS and non-PGS groups in any of the three groups. Our results show that routine application of PGS in women who have had fewer than 3 miscarriages would not be beneficial, even when the previous miscarriage was caused by fetal aneuploidy.
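The pregnancy-rate comparisons above are tests of proportions. As a minimal sketch of that kind of analysis (with invented counts, not the study's data), a Pearson chi-square statistic for a 2x2 outcome table can be computed directly:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    e.g. rows = PGS vs. non-PGS group, columns = pregnant vs. not pregnant."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 21/43 clinical pregnancies with PGS vs. 24/46 without.
stat = chi2_2x2(21, 22, 24, 22)
# stat is well below 3.84, the critical value at df=1 and alpha=0.05,
# so these (made-up) proportions would not differ significantly.
```

In practice a library routine such as `scipy.stats.chi2_contingency` would also return the p-value (and apply continuity corrections for small samples).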

Keywords: preimplantation genetic diagnosis, miscarriage, karyotyping, in vitro fertilization

Procedia PDF Downloads 153
35 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language

Authors: Zhicheng Han

Abstract:

This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English. This study investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which may be relevant for the discourse around the conceptualization of formulas. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisitional process as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for active participation of interlanguage grammar in the acquisition and use of formulaic language, where formulas are represented in the mind but retain the internal structure built around a lexical core. This would make WM, especially the processing component of WM, an important cognitive factor since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1 Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task where they ranked their preference for formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core in academic contexts. Participants were asked to rank the options in order of how likely they would be to encounter these phrases in the test sentences within academic contexts. Participants’ syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and preference for formulaic expressions in the preference ranking task, while no significant correlation was found for PSTM or syntactic proficiency. The correlational analysis found that WM, PSTM, and the two proficiency test scores have significant covariates. 
However, WM and PSTM have different predictor values for participants’ preference for formulaic language. Both storage and processing components of WM are significantly correlated with the preference for formulaic expressions while PSTM is not. These findings are in favor of the role of interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions. The differing effects of WM and PSTM suggest that selective attention to and processing of the input beyond simple retention play a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring the ungrammatical phrase with the lexical core of the formula over the ones without the lexical core, attesting to learners’ awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by the learners.

Keywords: formulaic language, working memory, phonological short-term memory, academic language

Procedia PDF Downloads 21
34 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts

Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris

Abstract:

The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects of human factors engineering on remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of grafts. In the UK, a high percentage of solid organs arrive at recipient hospitals and are considered injured or improper for transplantation. Digital microscopy adds information on a microscopic level about the grafts (G) in organ transplant (OT) and may lead to a change in their management. Such a method will reduce the possibility that a diseased G will arrive at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM) based on virtual slides, on telemedicine systems (TS), for tele-pathological evaluation (TPE) of grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for microscopic TPE of renal graft (RG), liver graft (LG), and pancreatic graft (PG) tissues was analyzed. This corresponded to the ergonomics of digital microscopy for TPE in OT, applying a virtual slide (VS) system for graft tissue image capture and for remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS) for simulating the integrated VS-based microscopic TPE of RG, LG, and PG. Simulation of DM on TS-based TPE was performed by 2 specialists on a total of 238 human renal graft (RG), 172 liver graft (LG), and 108 pancreatic graft (PG) digital microscopic tissue images, examined for inflammatory and neoplastic lesions on the electronic spaces of the four TS used. 
Results: Statistical analysis of the specialists' answers regarding the ability to accurately diagnose diseased RG, LG, and PG tissues on the electronic space (ES) of the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the ES of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES appear significantly risky for the application of DM in OT (p<.001). Conclusion: To achieve the largest reduction in errors and adverse events affecting the quality of grafts, human factors engineering must be applied to procurement, design, audit, and awareness-raising activities. Consequently, investment will be needed in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG, and PG tissues after retrieval seems feasible and reliable, and depends on the size of the electronic space of the applied TS, for remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital, and for post-grafting and pre-transplant planning.

Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides

Procedia PDF Downloads 253
33 A Comparative Study of Motion Events Encoding in English and Italian

Authors: Alfonsina Buoniconto

Abstract:

The aim of this study is to investigate the degree of cross-linguistic and intra-linguistic variation in the encoding of motion events (MEs) in English and Italian, these being typologically different languages that both show signs of disobedience to their respective types. As a matter of fact, the traditional typological classification of ME encoding distributes languages into two macro-types, based on the preferred locus for the expression of Path, the main ME component (other components being Figure, Ground, and Manner), characterized by conceptual and structural prominence. According to this model, Satellite-framed (SF) languages typically express Path information in verb-dependent items called satellites (e.g. preverbs and verb particles), with main verbs encoding Manner of motion; whereas Verb-framed (VF) languages tend to include Path information within the verbal locus, leaving Manner to adjuncts. Although this dichotomy is broadly valid, languages do not always behave according to their typical classification patterns. English, for example, is usually ascribed to the SF type due to the rich inventory of postverbal particles and phrasal verbs used to express spatial relations (i.e. the cat climbed down the tree); nevertheless, it is not uncommon to find constructions such as the fog descended slowly, which is typical of the VF type. Conversely, Italian is usually described as being VF (cf. Paolo uscì di corsa ‘Paolo went out running’), yet SF constructions like corse via in lacrime ‘She ran away in tears’ are also frequent. This paper will try to demonstrate that such typological overlapping is due to the fact that the semantic units making up MEs are distributed within several loci of the sentence (not only verbs and satellites), thus determining a number of different constructions stemming from convergent factors. 
Indeed, the linguistic expression of motion events depends not only on the typological nature of languages in the traditional sense, but also on a series of morphological, lexical, and syntactic resources, as well as on inferential, discursive, usage-related, and cultural factors that make semantic information more or less accessible, frequent, and easy to process. Hence, rather than describing English and Italian in dichotomic terms, this study focuses on the investigation of cross-linguistic and intra-linguistic variation in the use of all the strategies made available by each linguistic system to express motion. Evidence for these assumptions is provided by parallel corpora analysis. The sample texts are taken from two contemporary Italian novels and their respective English translations. The 400 motion occurrences selected (200 in English and 200 in Italian) were scanned according to the MODEG (an acronym for Motion Decoding Grid) methodology, which grants data comparability through the indexation and retrieval of combined morphosyntactic and semantic information at different levels of detail.
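To illustrate the general idea of indexing and retrieving annotated motion occurrences, here is a much-simplified, hypothetical sketch (the field names and toy annotations are invented for illustration and are not the MODEG schema); each occurrence records where Path and Manner are expressed, so that occurrences can be retrieved by annotation criteria:

```python
# Toy annotated corpus: each occurrence notes the locus of Path and Manner.
occurrences = [
    {"text": "the cat climbed down the tree",
     "language": "en", "path_locus": "satellite", "manner_locus": "verb"},
    {"text": "the fog descended slowly",
     "language": "en", "path_locus": "verb", "manner_locus": "adjunct"},
    {"text": "Paolo uscì di corsa",
     "language": "it", "path_locus": "verb", "manner_locus": "adjunct"},
    {"text": "corse via in lacrime",
     "language": "it", "path_locus": "satellite", "manner_locus": "verb"},
]

def retrieve(corpus, **criteria):
    """Return occurrences whose annotations match every given field."""
    return [occ for occ in corpus
            if all(occ.get(field) == value for field, value in criteria.items())]

# Italian clauses behaving like the satellite-framed type:
sf_italian = retrieve(occurrences, language="it", path_locus="satellite")
```

Queries of this shape make the typological overlap described above directly countable: the same retrieval call, run per language and per locus, yields the cross-linguistic frequency comparisons.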

Keywords: construction typology, motion event encoding, parallel corpora, satellite-framed vs. verb-framed type

Procedia PDF Downloads 231
32 Comparison of Two Home Sleep Monitors Designed for Self-Use

Authors: Emily Wood, James K. Westphal, Itamar Lerner

Abstract:

Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means to measure sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep have become available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake, combined N1/N2, N3, and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination. 
Total sleep time (TST) and the number of awakenings were compared to subjects’ sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch-by-epoch to calculate the agreement between the two devices using Cohen’s Kappa. All analysis was performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the time reported by each device (M= 448 minutes for sleep logs compared to M= 406 and M= 345 minutes for the DREEM and Z-Machine, respectively; both ps<0.05). Linear correlations between the sleep log and each device were higher for the DREEM than the Z-Machine for both TST and the number of awakenings, and, likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p<0.001) and awakenings (p<0.04). There was some indication that these effects were stronger for the second night compared to the first night. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were for detecting N2 and REM sleep, while N3 had a high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
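The two scoring steps described above, counting awakenings as runs of four or more consecutive wake epochs, and computing epoch-by-epoch agreement with Cohen's Kappa, can be sketched as follows (a minimal illustration with a toy hypnogram encoding, not the study's analysis code):

```python
def count_awakenings(hypnogram, min_run=4):
    """Count runs of >= min_run consecutive wake ('W') epochs occurring
    between sleep onset and the final sleep epoch."""
    sleep_idx = [i for i, stage in enumerate(hypnogram) if stage != 'W']
    if not sleep_idx:
        return 0
    core = hypnogram[sleep_idx[0]:sleep_idx[-1] + 1]  # trim pre/post-sleep wake
    count, run = 0, 0
    for stage in core:
        if stage == 'W':
            run += 1
            if run == min_run:  # count each qualifying run exactly once
                count += 1
        else:
            run = 0
    return count

def cohen_kappa(scoring_a, scoring_b):
    """Epoch-by-epoch Cohen's Kappa between two aligned sleep scorings."""
    assert len(scoring_a) == len(scoring_b)
    n = len(scoring_a)
    observed = sum(a == b for a, b in zip(scoring_a, scoring_b)) / n
    labels = set(scoring_a) | set(scoring_b)
    expected = sum((scoring_a.count(l) / n) * (scoring_b.count(l) / n)
                   for l in labels)  # chance agreement
    return (observed - expected) / (1 - expected)
```

For real analyses a library implementation (e.g. `sklearn.metrics.cohen_kappa_score`) would typically be used; the point here is only the definition being applied per 30-second epoch.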

Keywords: DREEM, EEG, sleep monitoring, Z-Machine

Procedia PDF Downloads 77
31 Caring for Children with Intellectual Disabilities in Malawi: Parental Psychological Experiences and Needs

Authors: Charles Masulani Mwale

Abstract:

Background: It is argued that 85% of children with disabilities live in resource-poor countries where few disability services are available. A majority of these children, and their parents, suffer greatly as a result of the disability and its associated stigmatization, leading to a marginalized life. These parents also experience more stress and mental health problems, such as depression, compared with families of typically developing children. There is little research from Africa addressing these issues, especially among parents of intellectually disabled children. WHO encourages research on the impact that children with disabilities have on their families, and on appropriate training and support for the families so that they can promote the child’s development and well-being. This study investigated parenting experiences, mechanisms of coping with these challenges, and psychosocial needs while caring for children with intellectual disabilities in both rural and urban settings of Lilongwe and Mzuzu. Methods: This is part of a larger mixed-methods study aimed at developing a contextualized psychosocial intervention for parents of intellectually disabled children. 16 focus group discussions and four in-depth interviews were conducted with parents in the catchment areas of St John of God and Children of Blessings in Mzuzu and Lilongwe cities, respectively. Ethical clearance was obtained from COMREC. Data were stored in NVivo software for easy retrieval and management. All interviews were tape-recorded, transcribed, and translated into English. Note-taking was performed during all observations. Data from the interviews, notes, and observations were triangulated for validation and reliability. Results: Caring for intellectually disabled children comes with a number of challenges. 
Parents experience stigma and discrimination; fear for the child’s future; harbor self-blame and guilt; are coerced by neighbors to kill the disabled child; and fear violence by and to the child. Their needs include respite relief, improved access to disability services, education on disability management, and financial support. For their emotional stability, parents cope by sharing with others and turning to God, while others use maladaptive coping mechanisms such as alcohol use. Discussion and Recommendation: Apart from neighbors’ coercion to end the child’s life, the findings of this study are similar to those of studies done in other countries such as Kenya and Pakistan. It is recommended that parents be educated on disability, its causes, and its management to allay fears of the unknown. Community education is also crucial to promote community inclusiveness and correct prevailing myths associated with disability. Disability institutions ought to intensify individual as well as group counseling services for these parents. Further studies need to be done to design culturally appropriate and specific psychosocial interventions for the parents to promote their psychological resilience.

Keywords: psychological distress, intellectual disability, psychosocial interventions, mental health, psychological resilience, children

Procedia PDF Downloads 415
30 Fructose-Aided Cross-Linked Enzyme Aggregates of Laccase: An Insight on Its Chemical and Physical Properties

Authors: Bipasa Dey, Varsha Panwar, Tanmay Dutta

Abstract:

Laccase, a multicopper oxidase (EC 1.10.3.2), has been at the forefront as a superior industrial biocatalyst. Laccases are versatile in enabling sustainable and ecological catalytic reactions such as polymerisation, xenobiotic degradation, and bioremediation of phenolic and non-phenolic compounds. Regardless of their wide biotechnological applications, critical limiting factors, viz. reusability, retrieval, and storage stability, still prevail and can impede their applicability. Crosslinked enzyme aggregates (CLEAs) have emerged as a promising technique that rehabilitates these essential facets, albeit at the expense of some enzymatic activity. The carrier-free crosslinking method prevails over carrier-bound immobilisation in conferring high productivity and low production cost, owing to the absence of an additional carrier, and circumvents any non-catalytic ballast that could dilute the volumetric activity. To the best of our knowledge, the ε-amino group of lysyl residues is considered the best choice for forming Schiff’s base with glutaraldehyde. Despite this preference, excess glutaraldehyde can bring about disproportionate and undesirable crosslinking within the catalytic site and hence deliver undesirable catalytic losses. Moreover, the surface distribution of lysine residues in Trametes versicolor laccase is significantly sparse. Thus, to mitigate the adverse effect of glutaraldehyde in conjunction with scaling down the degradation or catalytic loss of the enzyme, crosslinking with inert substances like gelatine, collagen, bovine serum albumin (BSA), or excess lysine is practiced. Analogous to these molecules, sugars are well known as protein stabilisers. They help retain the structural integrity, specifically the secondary structure, of the protein during aggregation by changing the solvent properties, and are understood to avert protein denaturation or enzyme deactivation during precipitation. 
We prepared crosslinked enzyme aggregates (CLEAs) of laccase from T. versicolor with the aid of sugars. The sugar CLEAs were compared with the classic BSA and glutaraldehyde (GA) laccase CLEAs with respect to physico-chemical properties. The activity recovery for the fructose CLEAs was found to be ~20% higher than for the non-sugar CLEA. Moreover, the Kcat/Km values of the CLEAs were two- and three-fold higher than those of BSA-CLEA and GA-CLEA, respectively. The half-life (t1/2) of the sugar-CLEA was higher than the t1/2 of the GA-CLEAs and the free enzyme, indicating greater thermal stability. Besides, it demonstrated remarkably high pH stability, analogous to that of BSA-CLEA. The promising attributes of increased storage stability and recyclability (>80%) give the sugar-CLEAs an edge over conventional CLEAs of their corresponding free enzyme. Thus, sugar-CLEAs furnish the rudimentary properties required of a biocatalyst and hold much prospect.

Keywords: cross-linked enzyme aggregates, laccase immobilization, enzyme reusability, enzyme stability

Procedia PDF Downloads 52
29 Digital Advance Care Planning and Directives: Early Observations of Adoption Statistics and Responses from an All-Digital Consumer-Driven Approach

Authors: Robert L. Fine, Zhiyong Yang, Christy Spivey, Bonnie Boardman, Maureen Courtney

Abstract:

Importance: Barriers to traditional advance care planning (ACP) and advance directive (AD) creation have limited the promise of ACP/AD for individuals and families, the healthcare team, and society. Reengineering ACP by using a web-based, consumer-driven process has recently been suggested. We report early experience with such a process. Objective: Begin to analyze the potential of the creation and use of ACP/ADs as generated by a consumer-friendly, digital process by 1) assessing the likelihood that consumers would create ACP/ADs without structured intervention by medical or legal professionals, and 2) analyzing the responses to determine if the plans can help doctors better understand a person’s goals, preferences, and priorities for their medical treatments and the naming of healthcare agents. Design: The authors chose 900 users of MyDirectives.com, a digital ACP/AD tool, solely based on their state of residence in order to achieve proportional representation of all 50 states by population size and then reviewed their responses, summarizing these through descriptive statistics including treatment preferences, demographics, and revision of preferences. Setting: General United States population. Participants: The 900 participants had an average age of 50.8 years (SD = 16.6); 84.3% of the men and 91% of the women were in self-reported good health when signing their ADs. Main measures: Preferences regarding the use of life-sustaining treatments, where to spend final days, consulting a supportive and palliative care team, attempted cardiopulmonary resuscitation (CPR), autopsy, and organ and tissue donation. Results: Nearly 85% of respondents prefer cessation of life-sustaining treatments during their final days whenever those may be, 76% prefer to spend their final days at home or in a hospice facility, and 94% wanted their future doctors to consult a supportive and palliative care team. 70% would accept attempted CPR in certain limited circumstances. 
Most respondents would want an autopsy under certain conditions, and 62% would like to donate their organs. Conclusions and relevance: Analysis of early experience with an all-digital web-based ACP/AD platform demonstrates that individuals from a wide range of ages and conditions can engage in an interrogatory process about values, goals, preferences, and priorities for their medical treatments by developing advance directives, and can easily make changes to the AD created. Online creation, storage, and retrieval of advance directives have the potential to remove barriers to ACP/AD and, thus, to further improve patient-centered end-of-life care.

Keywords: Advance Care Plan, Advance Decisions, Advance Directives, Consumer, Digital, End of Life Care, Goals, Living Wills, Preferences, Universal Advance Directive, Statements

Procedia PDF Downloads 295
28 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based generalized end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, aiming to find an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process using the MS MARCO dataset, trained on 500K queries, to extract the most relevant text passage and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Question (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be held?" The gold answer given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the Tokyo 2020 Games, this answer was correct at the time. But if the same question is asked in 2022, the answer is "Paris, 2024".
Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
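The timestamp-based metric is not specified in detail in the abstract, but the idea of time-aware answer matching can be sketched as follows; the function, the dated-answer format, and the validity windows are illustrative assumptions, not the paper's actual metric:

```python
from datetime import date

def time_aware_match(predicted_answers, dated_answers, query_date):
    """Score a prediction correct if any of the top-n predicted answers
    matches the gold answer that was valid on the query date.

    dated_answers: list of (valid_from, valid_to, answer) tuples.
    All names and windows here are illustrative, not from the paper.
    """
    # Pick the gold answer whose validity window contains the query date
    gold = next((ans for start, end, ans in dated_answers
                 if start <= query_date <= end), None)
    if gold is None:
        return False
    return any(gold.lower() in p.lower() for p in predicted_answers)

# "Where will the next Olympics be held?" has a time-varying gold answer
olympics = [
    (date(2016, 8, 22), date(2021, 8, 8), "Tokyo"),
    (date(2021, 8, 9), date(2024, 8, 11), "Paris"),
]
print(time_aware_match(["Paris", "Los Angeles"], olympics, date(2022, 6, 1)))  # True
print(time_aware_match(["Tokyo"], olympics, date(2022, 6, 1)))                 # False
```

The same query thus scores differently depending on when it is asked, which is the behaviour a fixed gold-answer set cannot capture.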

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 73
27 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environment modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and land surface emissivity (LSE) measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times (UTC 00:00, 06:00, 12:00, 18:00) per day for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance have a relationship with water vapour content (WVC). With the aid of simulated data, this relationship can be determined under each viewing zenith angle for each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are first removed with the aid of instantaneous WVC, retrieved from the brightness temperatures in SEVIRI channels 5, 9, and 10, and a group of brightness temperatures for surface-leaving radiance (Tg) is acquired.
Subsequently, a group of the six parameters of the DTC model is fitted to these Tg by a Levenberg-Marquardt least-squares algorithm (denoted as DTC model 1). Although the retrieval error of WVC and the approximate relationships between WVC and atmospheric parameters induce some uncertainties, these do not significantly affect the determination of three parameters of the DTC model: td (the time at which Tg reaches its maximum), ts (the starting time of attenuation), and β (the angular frequency). Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With the knowledge of td, ts, and β, a new DTC model (denoted as DTC model 2) is accurately fitted again to the Tg at UTC times 05:57, 11:57, 17:57, and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six parameters of the DTC model is thereby generated, and the Tg at any given time can be acquired. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without additional assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
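A two-part DTC model of this kind can be sketched as below; the exact functional form and the parameter values are assumptions based on common semi-empirical DTC formulations (cosine by day, exponential decay after the attenuation time), and may differ from the form used in the study:

```python
import math

def dtc_model(t, T0, Ta, beta, td, ts, k):
    """Two-part semi-empirical diurnal temperature cycle (DTC) model:
    a cosine during the day and an exponential decay after time ts.
    Parameter names follow the abstract (beta: angular frequency,
    td: time of maximum, ts: start of attenuation); T0, Ta, and k
    (base temperature, amplitude, decay constant) are the remaining
    three of the six parameters in this illustrative form.
    """
    if t < ts:
        return T0 + Ta * math.cos(beta * (t - td))
    # Exponential-decay branch, constructed to be continuous at t = ts
    T_ts = T0 + Ta * math.cos(beta * (ts - td))
    return T0 + (T_ts - T0) * math.exp(-(t - ts) / k)

# Illustrative parameter values (temperatures in K, times in hours)
params = dict(T0=290.0, Ta=12.0, beta=math.pi / 12.0, td=13.0, ts=17.0, k=4.0)
left = dtc_model(params["ts"] - 1e-9, **params)
right = dtc_model(params["ts"], **params)
print(abs(left - right) < 1e-6)  # True: the two branches join continuously
```

In practice the six parameters would be fitted to the Tg series with a Levenberg-Marquardt least-squares routine, as the abstract describes.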

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 245
26 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis

Authors: Mehrnaz Mostafavi

Abstract:

The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. 
The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
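As a simplified illustration of the kind of rule-based sentence classification described (the study's actual SQL patterns and trained classifiers are not reproduced here; the patterns below are illustrative stand-ins), a keyword-pattern classifier for report sentences might look like:

```python
import re

# Illustrative patterns only; the actual rules used in the study are not public.
NODULE_PATTERN = re.compile(r"\b(pulmonary |lung )?nodules?\b", re.IGNORECASE)
CONCERN_PATTERN = re.compile(r"\b(enlarg\w+|increased in size|spiculated|suspicious)\b",
                             re.IGNORECASE)

def classify_report_sentence(sentence):
    """Flag whether a radiology-report sentence mentions a lung nodule,
    and whether it also contains concerning features."""
    mentions_nodule = bool(NODULE_PATTERN.search(sentence))
    concerning = mentions_nodule and bool(CONCERN_PATTERN.search(sentence))
    return {"nodule": mentions_nodule, "concerning": concerning}

print(classify_report_sentence("A 6 mm pulmonary nodule is unchanged."))
# {'nodule': True, 'concerning': False}
print(classify_report_sentence("The right upper lobe nodule has increased in size."))
# {'nodule': True, 'concerning': True}
```

In the study itself, such sentence-level labels would feed machine-learning classifiers rather than being used directly.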

Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans

Procedia PDF Downloads 36
25 Linguistic Analysis of Borderline Personality Disorder: Using Language to Predict Maladaptive Thoughts and Behaviours

Authors: Charlotte Entwistle, Ryan Boyd

Abstract:

Recent developments in information retrieval techniques and natural language processing have allowed for greater exploration of psychological and social processes. Linguistic analysis methods for understanding behaviour have provided useful insights within the field of mental health. One area within mental health that has received little attention, though, is borderline personality disorder (BPD). BPD is a common mental health disorder characterised by instability of interpersonal relationships, self-image, and affect. It also manifests through maladaptive behaviours, such as impulsivity and self-harm. Examination of language patterns associated with BPD could allow for a greater understanding of the disorder and its links to maladaptive thoughts and behaviours. Language analysis methods could also be used in a predictive way, such as by identifying indicators of BPD or predicting maladaptive thoughts, emotions, and behaviours. Additionally, associations uncovered between language and maladaptive thoughts and behaviours could then be applied at a more general level. This study explores linguistic characteristics of BPD, and their links to maladaptive thoughts and behaviours, through the analysis of social media data. Data were collected from a large corpus of posts from the publicly available social media platform Reddit, namely from the 'r/BPD' subreddit, where people identify as having BPD. Data were collected using the Python Reddit API Wrapper and included all users who had posted within the BPD subreddit. All posts were manually inspected to ensure that they were not posted by someone who clearly did not have BPD, such as people posting about a loved one with BPD. These users were then tracked across all other subreddits in which they had posted, and data from these subreddits were also collected. Additionally, data were collected from a random control group of Reddit users.
Disorder-relevant behaviours, such as self-harming or aggression-related behaviours, outlined within Reddit posts were coded by expert raters. All posts and comments were aggregated by user and split by subreddit. Language data were then analysed using the Linguistic Inquiry and Word Count (LIWC) 2015 software. LIWC is a text analysis program that identifies and categorises words based on linguistic and paralinguistic dimensions, psychological constructs, and personal concern categories. Statistical analyses of linguistic features were then conducted. Findings revealed distinct linguistic features associated with BPD, based on Reddit posts, which differentiated these users from the control group. Language patterns were also found to be associated with the occurrence of maladaptive thoughts and behaviours. Thus, this study demonstrates that there are indeed linguistic markers of BPD present on social media. It also implies that language could be predictive of maladaptive thoughts and behaviours associated with BPD. These findings are of importance as they suggest the potential for clinical interventions to be provided based on the language of people with BPD, to try to reduce the likelihood of maladaptive thoughts and behaviours occurring, for example, by social media tracking or by engaging people with BPD in expressive writing therapy. Overall, this study has provided a greater understanding of the disorder and how it manifests through language and behaviour.
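The LIWC-style categorisation step can be illustrated with a toy word counter; the categories and word lists below are illustrative stand-ins, since the actual LIWC 2015 dictionaries are proprietary:

```python
# Toy LIWC-style dictionaries: a negative-emotion category and a
# first-person-pronoun category (illustrative stand-ins, not LIWC 2015).
CATEGORIES = {
    "negemo": {"hate", "hurt", "awful", "angry"},
    "i": {"i", "me", "my", "myself"},
}

def liwc_style_counts(text):
    """Return, per category, the percentage of words falling in it,
    mirroring LIWC's word-rate output format."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    total = len(words) or 1
    return {cat: 100.0 * sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

counts = liwc_style_counts("I hate how much I hurt the people around me.")
print(counts["i"])       # 30.0 (three of ten words are first-person)
print(counts["negemo"])  # 20.0
```

Per-user, per-subreddit rates of this kind are what the statistical comparisons between the BPD and control groups would operate on.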

Keywords: behaviour analysis, borderline personality disorder, natural language processing, social media data

Procedia PDF Downloads 300
24 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear; in the natural environment, linear models reach their limitations when the data are non-linear, and accurate estimation therefore becomes difficult. Artificial Neural Networks (ANNs) have found extensive acceptance in addressing the modeling of the complex, non-linear real world. ANNs have more general and flexible functional forms than traditional statistical methods and can deal with non-linearity effectively. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield estimation. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time wheat chlorophyll estimation. Cloud-free LANDSAT 8 scenes were acquired (Feb-March 2016-17) at the same time as a ground-truthing campaign was performed for chlorophyll estimation using a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v.2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI), and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used. The Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results.
Of the data, 61.7% was used for training the MLP, 28.3% for validation, and the remaining 10% for evaluating and validating the ANN model results. For error evaluation, the sum-of-squares error and relative error were used. The ANN model summary showed a sum-of-squares error of 10.786 and an average overall relative error of 0.099. The MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggest that the use of high-spatial-resolution satellite imagery for the retrieval of crop chlorophyll content with an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
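The band-ratio indices used as ANN inputs are standard formulas; for instance, NDVI and GNDVI can be computed from Landsat 8 band reflectances as follows (the reflectance values below are illustrative, not from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: the green band replaces the red band."""
    return (nir - green) / (nir + green)

# Illustrative surface-reflectance values for a healthy wheat pixel
nir, red, green = 0.45, 0.05, 0.09
print(round(ndvi(nir, red), 3))     # 0.8
print(round(gndvi(nir, green), 3))  # 0.667
```

Index values such as these, paired with SPAD-502 ground readings, would form the input-target pairs on which the MLP is trained.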

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 117
23 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim could be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessing plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), controlling changes, and mapping irrigated areas. Calculating thresholds of soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracies and automate irrigation scheduling. This paper is a review structured around a survey of about 100 recent research studies, analyzing varied approaches in terms of providing high-spatial- and temporal-resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for controlling changes.
The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the sources and magnitudes of error in employing different approaches across these three parts, as reported by recent studies. Finally, the concluding overview attempts to decompose the different approaches into optimized indices, calibration methods for the sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
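The soil-moisture-threshold trigger that underlies such scheduling can be sketched minimally as below; the threshold values and the depletion-fraction rule are illustrative assumptions, not drawn from any of the reviewed studies:

```python
def irrigation_decision(soil_moisture, field_capacity, wilting_point,
                        allowable_depletion=0.5):
    """Trigger irrigation when the available soil water fraction falls
    below a management-allowed depletion level. All threshold values
    are illustrative; real thresholds are crop- and soil-specific."""
    available = (soil_moisture - wilting_point) / (field_capacity - wilting_point)
    trigger = max(0.0, 1.0 - allowable_depletion)
    return available < trigger

# Volumetric water contents (m3/m3), illustrative for a loam soil
print(irrigation_decision(0.18, field_capacity=0.30, wilting_point=0.12))  # True
print(irrigation_decision(0.26, field_capacity=0.30, wilting_point=0.12))  # False
```

In the automated systems surveyed, the soil-moisture input to such a rule would come from in-situ sensors or RS-based retrievals rather than being supplied manually.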

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 39
22 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use cloud services on a 'pay per usage' basis. This technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage, in which security plays a vital role both for authenticating legitimate users and for protecting information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. User-behaviour-based biometrics provide more reliable identification than conventional password authentication. With biometric systems, accounts are accessed only by legitimate users and not by impostors. The biometric templates employed here do not rely on a single trait but on multiple traits, viz. iris and fingerprints. The coordinating stage of the authentication system uses an ensemble Support Vector Machine (SVM): each base SVM of the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA), and the ensemble is then optimized by assembling the weights of the base SVMs. This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from attackers by preventing the recovery of the original text from the ciphertext. The proposed double-cryptographic-key scheme is thus capable of providing better user authentication and better security, distinguishing genuine users from fake ones.
Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. First, feature and texture properties are extracted from the respective fingerprint and iris images. Finally, with the help of a fuzzy neural network and a symmetric cryptography algorithm, a double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by an attacker even if they have already been stolen. The results show that the authentication process is optimal and the stored information is secured.
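One simplified way to derive a reproducible symmetric key from fused biometric features is to quantise them and hash the result; this is a stand-in sketch, not the paper's FNN-based scheme, and the function name, parameters, and feature values are all illustrative assumptions:

```python
import hashlib

def derive_key(iris_features, finger_features, bits=4):
    """Fuse two biometric feature vectors and derive a 256-bit key.
    Quantising the features makes the key stable under small measurement
    noise; a simplified stand-in for the paper's FNN-based key scheme."""
    fused = list(iris_features) + list(finger_features)
    # Map each feature in [0, 1] onto a small integer grid before hashing
    quantised = bytes(int(round(x * (2 ** bits - 1))) & 0xFF for x in fused)
    return hashlib.sha256(quantised).hexdigest()

iris = [0.12, 0.87, 0.45]
finger = [0.33, 0.91]
key1 = derive_key(iris, finger)
# A perturbation below the quantisation step yields the same key
key2 = derive_key([0.121, 0.869, 0.451], finger)
print(key1 == key2)  # True
```

The derived key could then serve as the user-specific secret in a symmetric encryption scheme of the kind the abstract describes.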

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 226
21 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. 
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the arrays of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
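The attractor-dynamics picture can be illustrated with a minimal Hopfield-style network, in which a remembered pattern is a stable fixed point of the update dynamics rather than an explicitly indexed array element; this is a generic textbook sketch, with Hebbian learning as one classic way to shape the energy landscape, not the specific model any of the cited researchers propose:

```python
def train_hopfield(patterns):
    """Hebbian weights for a Hopfield network; the stored patterns become
    stable fixed points (attractors) of the update dynamics."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronously update units until the state settles into an attractor."""
    for _ in range(steps):
        state = [1 if sum(w[i][j] * s for j, s in enumerate(state)) >= 0 else -1
                 for i in range(len(w))]
    return state

pattern = [1, -1, 1, -1, 1, -1]
w = train_hopfield([pattern])
noisy = [1, -1, -1, -1, 1, -1]   # one bit flipped
print(recall(w, noisy) == pattern)  # True: the dynamics restore the pattern
```

Remembering here is the network relaxing into an equilibrium, which is the sense in which such models dispense with a discretely stored, retrieved memory element.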

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 233