Unraveling the Complexity of Hyperacusis: A Metric Dimension of a Graph Concept
Authors: Hassan Ibrahim
Abstract:
The prevalence of hyperacusis, an auditory condition characterized by heightened sensitivity to sound, continues to rise, posing challenges for effective diagnosis and intervention. This work deepens the understanding of hyperacusis etiology by employing graph theory as a novel analytical framework. We constructed a comprehensive graph wherein nodes represent factors associated with hyperacusis, including aging, head or neck trauma, infection/virus, depression, migraines, ear infection, anxiety, and other potential contributors. Relationships between factors are modeled as edges, allowing us to visualize and quantify the interactions within the etiological landscape of hyperacusis. We employ the concept of the metric dimension of a connected graph to identify key nodes (landmarks) that serve as critical influencers in the interconnected web of hyperacusis causes. This approach offers a unique perspective on the relative importance and centrality of different factors, shedding light on the complex interplay between physiological, psychological, and environmental determinants. Visualization techniques were also employed to enhance interpretation and facilitate the identification of the central nodes. This research contributes to the growing body of knowledge surrounding hyperacusis by offering a network-centric perspective on its multifaceted causes. The outcomes hold the potential to inform clinical practice, guiding healthcare professionals in prioritizing interventions and personalized treatment plans based on the identified landmarks within the etiological network. Through the integration of graph theory into hyperacusis research, the complexity of this auditory condition is unraveled, paving the way for more effective approaches to its management.
Keywords: auditory condition, connected graph, hyperacusis, metric dimension
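The landmark idea can be made concrete with a brute-force computation of the metric dimension: find the smallest set of nodes whose distance vectors distinguish every node of the graph. A minimal sketch in Python follows; the factor names and edges are illustrative stand-ins, not the paper's actual data.

```python
from itertools import combinations
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path distance from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def metric_dimension(adj):
    """Smallest k such that some k-subset of nodes (the landmarks)
    gives every node a distinct vector of distances to the landmarks."""
    nodes = sorted(adj)
    dist = {u: bfs_distances(adj, u) for u in nodes}
    for k in range(1, len(nodes) + 1):
        for landmarks in combinations(nodes, k):
            signatures = {tuple(dist[l][v] for l in landmarks) for v in nodes}
            if len(signatures) == len(nodes):  # all distance vectors distinct
                return k, landmarks
    return len(nodes), tuple(nodes)

# Toy factor graph (the edge list is an assumption for illustration).
adj = {
    "aging": ["infection", "migraine"],
    "trauma": ["migraine", "anxiety"],
    "infection": ["aging", "ear_infection"],
    "migraine": ["aging", "trauma", "anxiety"],
    "ear_infection": ["infection"],
    "anxiety": ["trauma", "migraine", "depression"],
    "depression": ["anxiety"],
}
k, landmarks = metric_dimension(adj)
print(k, landmarks)
```

Brute force is exponential in the worst case, which is acceptable for an etiological graph of a dozen factors; larger graphs would need the integer-programming or greedy approximations from the metric-dimension literature.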
Procedia PDF Downloads 26
Expansion of Cord Blood Cells Using a Mix of Neurotrophic Factors
Authors: Francisco Dos Santos, Diogo Fonseca-Pereira, Sílvia Arroz-Madeira, Henrique Veiga-Fernandes
Abstract:
Haematopoiesis is a developmental process that generates all blood cell lineages in health and disease. It relies on quiescent haematopoietic stem cells (HSCs) that are able to differentiate, self-renew and expand upon physiological demand. HSCs are of great interest in regenerative medicine, including haematological malignancies, immunodeficiencies and metabolic disorders. However, the limited yield from existing HSC sources drives the global need for reliable techniques to expand harvested HSCs at high quality and in sufficient quantities. With the extensive use of cord blood progenitors for clinical applications, there is a demand for a safe and efficient expansion protocol that is able to overcome the limitations of cord blood as a source of HSCs. StemCell2MAX™ developed a technology that enhances the survival, proliferation and transplantation efficiency of HSCs, leading the way to a more widespread use of HSCs for research and clinical purposes. StemCell2MAX™ MIX is a solution that improves HSC expansion up to 20x compared to the state of the art, while preserving stemness. In a recent study by a leading cord blood bank, StemCell2MAX MIX was shown to support a selective 100-fold expansion of CD34+ hematopoietic stem and progenitor cells (compared to a 10-fold expansion of total nucleated cells), while maintaining their multipotent differentiation potential as assessed by CFU assays. The technology developed by StemCell2MAX™ opens new horizons for the use of expanded hematopoietic progenitors for both research purposes (including quality and functional assays in cord blood banks) and clinical applications.
Keywords: cord blood, expansion, hematopoietic stem cell, transplantation
Procedia PDF Downloads 267
Traditional Practices and Indigenous Knowledge for Sustainable Food Waste Reduction: A Lesson from Africa
Authors: Gabriel Sunday Ayayia
Abstract:
Food waste has reached alarming levels worldwide, contributing to food insecurity, resource depletion, and environmental degradation. While numerous strategies exist to mitigate this issue, the role of traditional practices and indigenous knowledge remains underexplored. There is a need to investigate how these age-old practices can contribute to sustainable food waste reduction, particularly in the African context. This study explores the potential of traditional practices and indigenous knowledge in Africa to address this challenge sustainably. The study examines traditional African food management practices and indigenous knowledge related to food preservation and utilization; assesses the impact of traditional practices on reducing food waste and its broader implications for sustainable development; and identifies key factors influencing the continued use and effectiveness of traditional practices in contemporary African societies. Thus, the study argues that traditional practices and indigenous knowledge in Africa offer valuable insights and strategies for sustainable food waste reduction that can be adapted and integrated into global initiatives. This research will employ a mixed-methods approach, combining qualitative and quantitative research techniques. Data collection will involve in-depth interviews, surveys, and participant observations in selected African communities. Moreover, a comprehensive review of the literature on traditional food management practices and their impact on food waste reduction will be conducted. The significance of this study lies in its potential to bridge the gap between traditional knowledge and modern sustainability efforts. By uncovering the value of traditional practices in reducing food waste, this research can inform policies, interventions, and awareness campaigns aimed at achieving sustainable food systems worldwide.
Keywords: traditional practices, indigenous knowledge, food waste reduction, sustainability
Procedia PDF Downloads 76
Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach
Authors: Ju-Hong Lee, Yi-Lin Shieh
Abstract:
Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. For subband coding, a 2-D QMF bank is required to have an exactly linear-phase response without magnitude distortion, i.e., the perfect reconstruction (PR) characteristics. The design problem of 2-D QMF banks with PR characteristics has been considered in the literature for many years. This paper presents a transformation approach for designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal and stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This reduces the design problem of the two-channel QMF bank to finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated based on minimax phase approximation for the 1-D DAFs. A novel objective function is then derived for the 1-D minimax phase approximation. As a result, the problem of minimizing the objective function can be solved simply by using the well-known weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The novelty of the proposed design method is that the design procedure is very simple and the designed 2-D QMF bank achieves a perfect magnitude response and a satisfactory phase response. Simulation results show that the proposed design method provides much better design performance and much less design complexity compared with existing techniques.
Keywords: Quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm
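The half-band/PR requirement can be illustrated in 1-D by the power-complementary condition |H0(ω)|² + |H0(ω+π)|² = 2. A minimal numerical check using the two-tap Haar lowpass filter (a textbook example, not the paper's allpass-based design):

```python
import cmath
import math

def freq_response(h, w):
    """DTFT of an FIR filter h evaluated at angular frequency w."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))

h0 = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # Haar lowpass prototype

# Power complementarity: |H0(w)|^2 + |H0(w + pi)|^2 == 2 for all w.
for w in [0.0, 0.3, 1.1, 2.0, math.pi]:
    p = abs(freq_response(h0, w)) ** 2 + abs(freq_response(h0, w + math.pi)) ** 2
    assert abs(p - 2.0) < 1e-12
print("power-complementary check passed")
```

The quadrature-mirror highpass is H1(z) = H0(-z); power complementarity of the pair is exactly what makes the magnitude response of the reconstructed signal distortion-free.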
Procedia PDF Downloads 225
Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Networks: Case of Kinshasa City
Authors: Christian Kapuku, Seung-Young Kho
Abstract:
An accurate representation of the transportation system serving the region is one of the important aspects of transportation modeling. Such representation often requires developing an abstract model of the system elements, which in turn requires a significant amount of data, surveys and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of Internet open source data, especially in mapping technologies, as well as advances in Geographic Information Systems, opportunities to tackle these issues have arisen. Therefore, the objective of this paper is to demonstrate such an application through the practical case of developing the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct the digitized map of Transportation Analysis Zones from available scanned images. Centroids were then dynamically placed at the center of activities using an activity density map. Next, the road network with its characteristics was built using OpenStreetMap data and other official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show a high accuracy between the built network and satellite images, which can mostly be attributed to the use of open source data.
Keywords: geographic information system (GIS), network construction, transportation database, open source data
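The cleanup-and-validate step described above (dropping residential streets, then checking that the network still hangs together before export) can be sketched as a simple attribute filter plus a BFS connectivity check. The link records and highway classes below are hypothetical:

```python
from collections import defaultdict, deque

# Hypothetical link records: (from_node, to_node, highway_class).
links = [
    (1, 2, "primary"), (2, 3, "secondary"), (3, 4, "primary"),
    (2, 5, "residential"), (5, 6, "residential"), (4, 1, "tertiary"),
]

# Step 1: drop residential streets, as in the abstract's cleanup step.
keep = [(a, b, c) for a, b, c in links if c != "residential"]

# Step 2: verify the cleaned network is one connected component, a basic
# sanity check before exporting to a modeling package such as Emme.
adj = defaultdict(set)
for a, b, _ in keep:
    adj[a].add(b)
    adj[b].add(a)
seen, queue = {1}, deque([1])
while queue:
    u = queue.popleft()
    for v in adj[u] - seen:
        seen.add(v)
        queue.append(v)
print(sorted(seen))  # nodes reachable from node 1 in the cleaned network
```

In a real workflow nodes dropped by the filter (here 5 and 6) would be reviewed: a residential link may still be the only access to a zone centroid.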
Procedia PDF Downloads 167
The Implementation of Educational Partnerships for Undergraduate Students at Yogyakarta State University
Authors: Broto Seno
Abstract:
This study aims to describe and examine the implementation of educational partnerships for undergraduate students at Yogyakarta State University (YSU), focusing on educational partnerships abroad. The study used a descriptive qualitative approach. The study subjects consisted of a vice-rector, two educational partnership staff members, four vice-deans, nine undergraduate students and three foreign students. Data were collected through interviews and document review. The validity of the data sources was tested using triangulation. Data were analysed using the Miles and Huberman flow model, namely data reduction, data display, and conclusion drawing. The results show that the implementation of educational partnerships abroad for undergraduate students at YSU meets six of the nine indicators of successful strategic partnerships. The six indicators are: long-term; strategic; mutual trust; sustainable competitive advantage; mutual benefit for all partners; and a separate and positive impact. The indicators not yet achieved are cooperative development, successful, and world class/best practice. These results were obtained from the discussion of the four research questions, namely: 1) Implementation and development of educational partnerships abroad has been running fairly well, but not optimally. 2) The benefits of implementing educational partnerships abroad are providing learning experiences for students, comparative institutional experience for each faculty, and improving YSU's network of educational partnerships toward becoming a World Class University. 3) The sustainability of educational partnerships abroad is pursued through a strategy of improved partnership management. 4) The supporting factors of educational partnerships abroad are the support of YSU, YSU's partners and society.
The inhibiting factor is partnership management that is not yet running optimally.
Keywords: partnership, education, YSU, institutions and faculties
Procedia PDF Downloads 334
Enzymatic Repair Prior to DNA Barcoding: Aspirations and Restraints
Authors: Maxime Merheb, Rachel Matar
Abstract:
The retrieval of ancient DNA sequences, which in turn permits entire-genome sequencing from fossils, has improved extraordinarily in recent years, thanks to sequencing technology and other methodological advances. Nevertheless, the quest for ancient DNA is still obstructed by the damage inflicted on DNA, which accumulates after the death of a living organism. This damage falls into three main categories: (i) physical abnormalities, such as strand breaks, which lead to the presence of short DNA fragments; (ii) modified bases (mainly cytosine deamination), which cause errors in the sequence due to the incorporation of a false nucleotide during DNA amplification; and (iii) DNA modifications referred to as blocking lesions, which halt PCR extension and thereby also affect the amplification and sequencing process. The issues arising from breakage and coding errors have been significantly reduced in recent years: fast sequencing of short DNA fragments was empowered by high-throughput sequencing platforms, and most coding errors were found to be consequences of cytosine deamination, which can easily be removed from the DNA using enzymatic treatment. The methodology for repairing DNA sequences is still in development; it can basically be described as the process of reintroducing cytosine rather than uracil. This technique is thus restricted to amplified DNA molecules. Eliminating every type of damage (particularly the lesions that block PCR) still awaits complete repair methodologies; DNA detection immediately after extraction is therefore highly needed. Before investing resources in extensive, costly and uncertain repair techniques, it is vital to distinguish between two possible hypotheses: (i) the DNA is nonexistent and cannot be amplified to begin with, and is therefore completely un-repairable; (ii) the DNA is refractory to PCR and is worth repairing and amplifying.
Hence, it is extremely important to develop a non-enzymatic technique to detect even the most degraded DNA.
Keywords: ancient DNA, DNA barcoding, enzymatic repair, PCR
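The deamination-and-repair idea (reintroducing cytosine rather than uracil) can be sketched as a toy simulation. The sequence and damage rate are illustrative, and real enzymatic repair is of course far more involved than a string substitution:

```python
import random

def deaminate(seq, rate, rng):
    """Toy damage model: each cytosine deaminates to uracil with
    probability `rate`; other bases are untouched."""
    return "".join(
        "U" if base == "C" and rng.random() < rate else base for base in seq
    )

def repair(seq):
    """Toy enzymatic repair: reintroduce cytosine wherever uracil is found.
    In practice the uracil is excised and the correct base re-synthesized."""
    return seq.replace("U", "C")

rng = random.Random(0)
original = "ATGCCGTACCGGATCC"      # illustrative fragment, not real aDNA
damaged = deaminate(original, rate=0.5, rng=rng)
print(damaged, repair(damaged) == original)
```

Note that without repair the uracils would be read as thymines during amplification, producing exactly the C-to-T miscoding lesions the abstract describes.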
Procedia PDF Downloads 400
Feeding Practices and Malnutrition among Under-Five Children in Communities of Kuje Area Council, Federal Capital Territory Abuja, Nigeria
Authors: Clementina Ebere Okoro, Olumuyiwa Adeyemi Owolabi, Doris Bola James, Aloysius Nwabugo Maduforo, Andrew Lingililani Mbewe, Christopher Osaruwanmwen Isokpunwu
Abstract:
Poor dietary practices and malnutrition, including severe acute malnutrition, among under-five children in Nigeria have remained a great public health concern. This study assessed infant and young child feeding practices and the nutritional status of under-five children to determine the prevalence of malnutrition among under-five children in Kuje area council, Abuja. The study was cross-sectional. Multi-stage sampling techniques were used in selecting the population studied. Probability proportional to size sampling was applied in choosing 30 clusters for the survey using the ENA for SMART software (2011 version). Questionnaires were used to obtain information from the population, while appropriate equipment was used to measure anthropometric parameters. The data were also subjected to statistical analysis, and results were presented in tables and figures. The results showed that 96.7% of the children were breastfed, 30.6% had early initiation of breastfeeding within the first hour of birth and 22.4% were breastfed exclusively up to 6 months; 69.8% fed infants colostrum, while 30.2% discarded it. About half of the respondents (49.1%) introduced complementary feeding before six months and 23.2% introduced it after six months, while 27.7% had age-appropriate, timely introduction of complementary feeding. The anthropometric results showed that the prevalence of global acute malnutrition (GAM) was 12.8%, severe wasting was 5.4%, moderate wasting was 7.4%, underweight was 24.4%, stunting was 40.3% and overweight was 7.0%. The results show a high prevalence of malnutrition among under-five children in Kuje.
Keywords: malnutrition, under-five children, breastfeeding, complementary feeding
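The wasting figures above follow the standard WHO weight-for-height z-score (WHZ) cut-offs: WHZ < -3 for severe and -3 ≤ WHZ < -2 for moderate acute malnutrition, with GAM being their sum. A minimal sketch on hypothetical z-scores (not the survey data):

```python
def classify_whz(z):
    """WHO weight-for-height z-score cut-offs for acute malnutrition."""
    if z < -3:
        return "severe"
    if z < -2:
        return "moderate"
    return "normal"

def prevalence(zscores):
    """Percent prevalence of severe (SAM), moderate (MAM) and global
    acute malnutrition (GAM = SAM + MAM) in a sample of z-scores."""
    labels = [classify_whz(z) for z in zscores]
    n = len(labels)
    sam = labels.count("severe") / n * 100
    mam = labels.count("moderate") / n * 100
    return {"SAM": sam, "MAM": mam, "GAM": sam + mam}

# Hypothetical z-scores for ten children, for illustration only.
sample = [-3.4, -2.5, -1.0, 0.2, -2.1, 1.3, -0.5, -3.1, 0.0, -1.8]
print(prevalence(sample))
```

In SMART-type surveys the same counts are combined with the cluster design to produce confidence intervals, which this sketch omits.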
Procedia PDF Downloads 267
Large Core Silica Few-Mode Optical Fibers with Reduced Differential Mode Delay and Enhanced Mode Effective Area over the 'C'-Band
Authors: Anton V. Bourdine, Vladimir A. Burdin, Oleg R. Delmukhametov
Abstract:
This work presents a fast and simple method for the design of large core silica optical fibers with differential mode delay (DMD) management. Some results are reported concerning refractive index profile optimization for a 42 µm core 16-LP-mode optical fiber for next-generation optical networks. Here, a special refractive index profile form provides total DMD reduction across the entire mode set under the desired enhanced mode effective area. A method is proposed for simulating 'real manufactured' few-mode optical fiber (FMF) core geometry, which differs from the desired optimized structure by non-symmetrical core ellipticity and refractive index profile deviations, including local fluctuations. Results of the subsequent analysis of the optimized FMF with inserted geometry distortions, performed with a previously developed modification of the rigorous mixed finite-element method, showed strong DMD degradation that requires additional higher-order mode management. In addition, this work presents a design method for a mode division multiplexer channel precision spatial positioning scheme at the FMF core end, which provides one potential solution to the described DMD degradation problem caused by 'distorted' core geometry arising from optical fiber manufacturing techniques.
Keywords: differential mode delay, few-mode optical fibers, nonlinear Shannon limit, optical fiber non-circularity, 'real manufactured' optical fiber core geometry simulation, refractive index profile optimization
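For orientation, the textbook alpha-power graded-index profile is the usual baseline from which such profile optimizations start (the paper's optimized form is more elaborate). A sketch evaluating it over the 21 µm core radius implied by the abstract; the index and contrast values are assumptions:

```python
import math

def alpha_profile(r, a, n1, delta, alpha):
    """Alpha-power graded-index profile:
    n(r) = n1 * sqrt(1 - 2*delta*(r/a)**alpha) for r < a,
    constant cladding index n1 * sqrt(1 - 2*delta) for r >= a."""
    if r >= a:
        return n1 * math.sqrt(1 - 2 * delta)
    return n1 * math.sqrt(1 - 2 * delta * (r / a) ** alpha)

a = 21e-6                 # 42 um core diameter -> 21 um radius (from abstract)
n1, delta = 1.468, 0.01   # assumed peak index and index contrast
profile = [alpha_profile(i * a / 10, a, n1, delta, alpha=2.0) for i in range(11)]
assert profile[0] == n1                                   # peak on the axis
assert all(x >= y for x, y in zip(profile, profile[1:]))  # monotone decrease
print([round(n, 5) for n in profile])
```

DMD management amounts to tuning the exponent (and, as in this paper, adding local profile features) so that all guided LP-mode group delays equalize over the band.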
Procedia PDF Downloads 157
COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with optional slice removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on this dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
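The macro F1 score used for patient-level evaluation is the unweighted mean of per-class F1 scores, which keeps a minority class (here, COVID-positive patients) from being drowned out by the majority. A self-contained sketch on hypothetical labels:

```python
def macro_f1(y_true, y_pred, labels):
    """Macro F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 = 2*TP / (2*TP + FP + FN); defined as 0 when TP == 0.
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(labels)

# Hypothetical patient-level labels (1 = COVID-19, 0 = non-COVID).
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(macro_f1(y_true, y_pred, labels=[0, 1]))
```

Unlike plain accuracy, each class contributes equally regardless of its frequency, which is why challenge leaderboards on imbalanced medical datasets report it.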
Procedia PDF Downloads 131
Grapevine Farmers' Adaptation to Climate Change and Its Implication for Human Health: A Case of Dodoma, Tanzania
Authors: Felix Y. Mahenge, Abiud L. Kaswamila, Davis G. Mwamfupe
Abstract:
Grapevine is a drought-resistant crop, although in recent years it has been observed to be affected by climate change. This compelled an investigation of grapevine farmers' adaptation strategies to climate change in Dodoma, Tanzania. A mixed research approach was adopted. Purposive and random sampling techniques were used to select individuals for the study. About 248 grapevine farmers and 64 key informants and members of focus group discussions were involved. Primary data were collected through surveys, discussions, interviews, and observations, while secondary data were collected through documentary reviews. Quantitative data were analysed through descriptive statistics using IBM SPSS software, while qualitative data were analysed through content analysis. The findings indicate that climate change has adversely affected grapevine production, leading to the occurrence of grapevine pests and diseases, drought, which increases irrigation costs, and uncertainties which affect grapevine markets. To lessen grapevine production constraints due to climate change, farmers have been using several adaptation strategies. These include the application of pesticides, the use of scarers to threaten birds, irrigation, timed pruning, manure fertilisers and diversification into other farm or non-farm activities. The use of pesticides and industrial fertilizers was regarded as increasing human health risks in the study area. The researchers recommend that the Tanzanian government strengthen agricultural extension services in the study area so that farmers undertake adaptation strategies with consideration for human health safety.
Keywords: grapevine farmers, adaptation, climate change, human health
Procedia PDF Downloads 91
Development of a Geomechanical Risk Assessment Model for Underground Openings
Authors: Ali Mortazavi
Abstract:
The main objective of this research project is to delve into the multitude of geomechanical risks associated with the various mining methods employed in the underground mining industry. Controlling geotechnical design parameters and operational factors affecting the selection of suitable mining techniques for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of in-situ rock masses, complicated boundary conditions and the operational complexities associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is always a threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major component of all underground mine design and essentially dominates the safety of an underground mine. With regard to the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the in-situ rock mass conditions. The focus of this research is on developing a methodology that enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., in-situ rock properties, design method, and governing boundary conditions such as in-situ stress and groundwater).
Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering
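A common starting point for the kind of risk ranking the abstract proposes is a likelihood-consequence matrix. The hazard names, 1-5 scores and bin thresholds below are illustrative assumptions, not the paper's algorithm:

```python
def risk_rating(likelihood, consequence):
    """Both inputs on a 1-5 scale; the product is binned into
    qualitative classes (thresholds are an assumption)."""
    score = likelihood * consequence
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical geomechanical hazards: (likelihood, consequence).
hazards = {
    "roof_fall": (4, 5),
    "pillar_failure": (2, 5),
    "groundwater_inflow": (3, 3),
    "minor_spalling": (3, 1),
}
ranked = sorted(hazards, key=lambda h: hazards[h][0] * hazards[h][1], reverse=True)
print([(h, risk_rating(*hazards[h])) for h in ranked])
```

A full geomechanical model would replace the subjective 1-5 scores with quantities derived from rock mass classification and numerical stress analysis, but the ranking logic stays the same.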
Procedia PDF Downloads 145
Removal Capacity of Activated Carbon (AC) by Combining AC and Titanium Dioxide (TiO₂) in a Photocatalytically Regenerative Activated Carbon
Authors: Hanane Belayachi, Sarra Bourahla, Amel Belayachi, Fadela Nemchi, Mostefa Belhakem
Abstract:
The most widely used techniques to remove pollutants from wastewater are adsorption onto activated carbon (AC) and oxidation using a photocatalyst slurry. The aim of this work is to eliminate the drawbacks of each by combining AC and titanium dioxide (TiO₂) in a photocatalytically regenerative activated carbon. Anatase titania was deposited on powdered activated carbon made from grape seeds by the impregnation method, and the composite photocatalyst was then employed for the removal of Reactive Black 5, an anionic azo dye, from water. The AGS/TiO₂ composite was characterized by BET, SEM, XRD and optical absorption spectroscopy. The BET surface area and pore structure of the composite photocatalyst (AGS/TiO₂) and the activated grape seeds (AGS) were evaluated from nitrogen adsorption data at 77 K in relation to process conditions. Our results indicate that the photocatalytic activity of AGS/TiO₂ was much higher than that of single-phase titania. The adsorption equilibrium of Reactive Black 5 from aqueous solutions on the examined materials was investigated. Langmuir, Freundlich, and Redlich-Peterson models were fitted to the experimental equilibrium data and their goodness of fit compared. The degradation kinetics fitted well to the Langmuir-Hinshelwood pseudo-first-order rate law. Chemical oxygen demand (COD) removal was measured at regular intervals to quantify the mineralization of the dye; above 96% mineralization was observed. These results suggest that UV-irradiated TiO₂ immobilized on activated carbon may be considered an adequate process for the treatment of diluted colored textile wastewater.
Keywords: activated carbon, pollutant, catalysis, TiO₂
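The Langmuir fit mentioned above can be reproduced on synthetic data via the standard linearization Ce/qe = Ce/q_max + 1/(K_L * q_max), whose slope and intercept recover the isotherm parameters. The parameter values below are made up for the sketch, not the paper's measurements:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return slope, ybar - slope * xbar

# Synthetic equilibrium data generated from a known Langmuir model
# (q_max = 50 mg/g, K_L = 0.2 L/mg): q = q_max*K_L*Ce / (1 + K_L*Ce).
q_max, K_L = 50.0, 0.2
Ce = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]   # equilibrium concentrations, mg/L
qe = [q_max * K_L * c / (1 + K_L * c) for c in Ce]

# Linearized Langmuir: Ce/qe = Ce/q_max + 1/(K_L * q_max).
slope, intercept = linear_fit(Ce, [c / q for c, q in zip(Ce, qe)])
q_max_fit = 1 / slope
K_L_fit = slope / intercept
print(round(q_max_fit, 3), round(K_L_fit, 3))
```

On noise-free synthetic data the fit returns the generating parameters exactly; with real isotherm data the same procedure yields the reported goodness-of-fit comparison against Freundlich and Redlich-Peterson.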
Procedia PDF Downloads 50
Mammographic Multi-View Cancer Identification Using Siamese Neural Networks
Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin
Abstract:
Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many of the current techniques used for mammogram analysis focus on a single view (mediolateral or craniocaudal), while in clinical practice, radiologists consider multiple views of mammograms from both breasts to make a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes mammographic images from three views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of paying attention to a patient's entire examination rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted on the recently released Vietnamese digital mammography dataset (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the model offers a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help alleviate the burden on radiologists and serve as an additional layer of verification.
Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network
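The per-image AUC reported above can be computed with the Mann-Whitney formulation: the probability that a randomly chosen positive image outscores a randomly chosen negative one (ties count half). The labels and scores below are hypothetical, not VinDr-Mammo outputs:

```python
def auc(y_true, scores):
    """AUC as the probability a positive scores above a negative,
    counting ties as half (the Mann-Whitney U formulation)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-image malignancy scores for eight images.
y_true = [1, 0, 1, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.8, 0.4, 0.6, 0.7, 0.1, 0.5]
print(auc(y_true, scores))
```

The pairwise count is O(P*N), which is fine at test-set scale; rank-based implementations reduce it to O(n log n) for large evaluations.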
Procedia PDF Downloads 138
Simultaneous Bilateral Patella Tendon Rupture: A Systematic Review
Authors: André Rui Coelho Fernandes, Mariana Rufino, Divakar Hamal, Amr Sousa, Emma Fossett, Kamalpreet Cheema
Abstract:
Aim: A single patella tendon rupture is relatively uncommon, but a simultaneous bilateral event is rare and has been scarcely reviewed in the literature. This review was carried out to analyse the existing literature on this injury, with the aim of proposing a standardised approach to its diagnosis and management. Methods: A systematic review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Three independent reviewers conducted searches in PubMed, OvidSP (for Medline and Embase), and the Cochrane Library using the same search strategy. From a total of 183 studies, 45 were included, i.e., 90 patellas. Results: 46 patellas (51%) had a Type 1 rupture, with Type 3 the least common, sustained by only 7 patellas. The mean Insall-Salvati ratio for each knee was 1.62 (right) and 1.60 (left). Direct primary repair was the most common surgical technique compared to tendon reconstruction, with end-to-end and transosseous techniques split almost equally. Brace immobilisation was preferred over cast, with a mean start to weight-bearing of 3.23 weeks post-operatively. Conclusions: Bilateral patellar tendon rupture is a rare injury that should be considered in patients with disruption of the knee extensor mechanism. The key limitation of this study was the low number of patients encompassed by the eligible literature. There is room for a higher level of evidence study, specifically regarding the choice and methods of surgical treatment, as well as post-operative management, which could potentially improve outcomes in the management of this injury.
Keywords: trauma and orthopaedic surgery, bilateral patella, tendon rupture, trauma
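The Insall-Salvati ratio quoted above is simply the patellar tendon length divided by the patellar length, with values above roughly 1.2 conventionally read as patella alta (consistent with the post-rupture means of 1.62 and 1.60). A sketch with hypothetical measurements:

```python
def insall_salvati(tendon_length_mm, patella_length_mm):
    """Insall-Salvati ratio: patellar tendon length / patellar length,
    both measured on a lateral radiograph in ~30 degrees of flexion."""
    return tendon_length_mm / patella_length_mm

# Hypothetical lateral-radiograph measurements, not review data.
ratio = insall_salvati(tendon_length_mm=68.0, patella_length_mm=42.0)
print(round(ratio, 2), "patella alta" if ratio > 1.2 else "within normal range")
```

After a tendon rupture the unopposed quadriceps pull displaces the patella proximally, which is why an elevated ratio is a radiographic clue to the diagnosis.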
Procedia PDF Downloads 136
White Light Emitting Carbon Dots: Surface Modification of Carbon Dots Using Auxochromes
Authors: Manasa Perikala, Asha Bhardwaj
Abstract:
Fluorescent carbon dots (CDs), a young member of the carbon nanomaterial family, have gained a lot of research attention across the globe due to their highly luminescent and stable emission properties, non-toxic behavior, and zero re-absorption loss. These dots have the potential to replace traditional semiconductor quantum dots in light-emitting devices (LEDs, fiber lasers) and other photonic devices (temperature sensors, UV detectors). However, one major drawback of carbon dots is that, to date, the actual mechanism of photoluminescence (PL) in carbon dots is still an open topic of discussion among researchers across the globe. PL mechanisms for CDs based on wide particle size distributions, the effect of surface groups, hybridization in carbon, and charge transfer have been proposed. Although these mechanisms explain the PL of CDs to an extent, no universally accepted mechanism that explains the complete PL behavior of these dots has been put forth. In our work, we report the parameters affecting the size and surface of CDs, such as reaction time, synthesis temperature and precursor concentration, and their effects on the optical properties of the carbon dots. The effect of auxochromes on the emission properties, and the re-modification of the carbon surface using an external surface functionalizing agent, are discussed in detail. All the explanations are supported by UV-visible absorption and emission spectroscopy, Fourier transform infrared spectroscopy, transmission electron microscopy and X-ray diffraction techniques. Once the origin of PL in CDs is understood, the parameters affecting PL centers can be modified to tailor the optical properties of these dots, enhancing their applications in the fabrication of LEDs and other photonic devices.
Keywords: carbon dots, photoluminescence, size effects on emission in CDs, surface modification of carbon dots
Procedia PDF Downloads 135
2171 Biophysical Characterization of Archaeal Cyclophilin Like Chaperone Protein
Authors: Vineeta Kaushik, Manisha Goel
Abstract:
Chaperones are proteins that help other proteins fold correctly and are found in all domains of life, i.e., prokaryotes, eukaryotes, and archaea. Various comparative genomic studies have suggested that the archaeal protein-folding machinery is highly similar to that found in eukaryotes. In protein folding, the slow rotation of the peptidyl-prolyl imide bond is often the rate-limiting step. Formation of this bond during the folding of a protein requires the assistance of other proteins, termed peptidyl-prolyl cis-trans isomerases (PPIases). Cyclophilins constitute a class of peptidyl-prolyl isomerases with a wide range of biological functions, including protein folding, signaling, and chaperoning. Most cyclophilins exhibit PPIase enzymatic activity and play an active role in substrate protein folding, which classifies them as molecular chaperones. To date, little data on archaeal cyclophilins are available in the literature. We aim to compare the structural and biochemical features of cyclophilin proteins across the three domains to elucidate the features affecting their stability and enzyme activity. In the present study, we carry out an in-silico analysis of the cyclophilin proteins to predict their conserved residues and sites under positive selection, and we compare these proteins to their bacterial and eukaryotic counterparts to predict functional divergence. We also aim to clone and express these proteins in a heterologous system and study their biophysical characteristics in detail using techniques such as CD and fluorescence spectroscopy. Overall, we aim to understand the features contributing to the folding, stability, and dynamics of archaeal cyclophilin proteins. Keywords: biophysical characterization, x-ray crystallography, chaperone-like activity, cyclophilin, PPIase activity
Procedia PDF Downloads 213
2170 Cold Tomato Paste as an Alternative Therapy for Elderly Clients with Exacerbation of Arthritis
Authors: Mary Therese G. Caluna, Mark Justin B. Campanero, Erlin Maris T. Cantiller, Claudine Mae A. Cantillo, Nerissa L. Caño
Abstract:
Objective: The study determined the effectiveness of cold tomato paste in relieving pain caused by exacerbation of arthritis in the elderly, specifically in clients 60 years old and above. The study focused on alternative, cost-effective, non-pharmacological techniques for relieving pain experienced by older people with osteoarthritis and rheumatoid arthritis. Methods: Using purposive non-probability sampling, the researchers gathered a total of 40 subjects who passed the inclusion criteria provided by the researchers. The subjects were divided into two groups: an experimental group (20 subjects) and a control group (20 subjects). The 11-point Numeric Rating Scale (NRS-11) was utilized to assess each subject's pain level before and after the application of the treatment. Key findings: There is a significant difference in the pain levels of the experimental group before and after the application of cold tomato paste, indicating that the application of cold tomato paste alleviates the pain experienced by elderly clients with exacerbation of arthritis. Conclusion: The effectiveness of cold tomato paste in relieving pain experienced by elderly clients with exacerbation of arthritis was supported by the evidence. Cold tomato paste application has a significant impact on the field of nursing and can therefore be used in both clinical trials and practice. Its effectiveness promotes innovation in the field of nursing, encouraging further research into other uses of tomato and other herbal interventions for relieving the pain caused by osteoarthritis and rheumatoid arthritis. Keywords: alternative therapy, arthritis, cold tomato paste, elderly clients, exacerbation
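The pre/post comparison on the NRS-11 described above is typically assessed with a paired t-test. The sketch below shows that computation; the scores are hypothetical illustrations, not the study's data.

```python
import math

def paired_t(before, after):
    """Paired t-statistic and degrees of freedom for pre/post scores."""
    assert len(before) == len(after)
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Hypothetical NRS-11 pain scores, for illustration only
before = [7, 6, 8, 5, 7, 6, 8, 7]
after = [4, 3, 5, 4, 4, 3, 5, 4]
t, df = paired_t(before, after)
```

A large positive t with df = n - 1 would support the reported significant reduction in pain.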
Procedia PDF Downloads 424
2169 Hybrid Velocity Control Approach for Tethered Aerial Vehicle
Authors: Lovesh Goyal, Pushkar Dave, Prajyot Jadhav, GonnaYaswanth, Sakshi Giri, Sahil Dharme, Rushika Joshi, Rishabh Verma, Shital Chiddarwar
Abstract:
With the rising need for human-robot interaction, researchers have proposed and tested multiple models with varying degrees of success. A few of these models operate on aerial platforms and are commonly known as tethered aerial systems. Such aerial vehicles can be powered continuously through a tether cable, which addresses the short battery life of quadcopters. The system finds applications in industrial, medical, agricultural, and service settings, where it can reduce human effort. However, a significant challenge in employing such systems is attaining smooth and secure robot-human interaction while ensuring that the forces from the tether remain within a range comfortable for humans. To tackle this problem, a hybrid control method is implemented that switches between two control techniques: a constant control input and the steady-state solution. The constant control approach is applied when the person is far from the target location and the error can be treated as approximately constant. The controller switches to the steady-state approach when the person comes within a specified range of the goal position. Both strategies take human velocity feedback into account. This hybrid technique improves the outcome by guiding the person to the desired location while reducing unwanted disturbance to the human throughout the process, thereby keeping the interaction between the robot and the subject smooth. Keywords: unmanned aerial vehicle, tethered system, physical human-robot interaction, hybrid control
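The switching law described above can be sketched as follows. The gains, the switch radius, and the exact form of the velocity-feedback term are illustrative assumptions, not values from the paper.

```python
def hybrid_velocity_command(error, human_vel, switch_radius=1.0,
                            u_const=0.8, k_p=0.8, k_v=0.3):
    """Hybrid switching sketch: constant-magnitude command far from the
    goal, proportional (steady-state) command inside the switch radius.
    Both branches subtract a human-velocity feedback term.
    All gains are hypothetical."""
    sign = 1.0 if error >= 0 else -1.0
    if abs(error) > switch_radius:
        u = u_const * sign      # constant control input
    else:
        u = k_p * error         # steady-state (proportional) solution
    return u - k_v * human_vel  # human velocity feedback

far = hybrid_velocity_command(5.0, 0.0)    # far from goal: constant command
near = hybrid_velocity_command(0.5, 0.0)   # near goal: proportional command
```

Choosing k_p * switch_radius = u_const makes the command continuous at the switching boundary, which helps keep the interaction smooth.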
Procedia PDF Downloads 98
2168 [Keynote Talk]: Uptake of Co(II) Ions from Aqueous Solutions by Low-Cost Biopolymers and Their Hybrid
Authors: Kateryna Zhdanova, Evelyn Szeinbaum, Michelle Lo, Yeonjae Jo, Abel E. Navarro
Abstract:
Alginate hydrogel beads (AB), spent peppermint leaf (PM), and a hybrid adsorbent of these two materials (ABPM) were studied as potential biosorbents of cobalt(II) ions from aqueous solutions. The cobalt ion is a commonly underestimated pollutant that is responsible for several health problems. Discontinuous batch experiments were conducted at room temperature to evaluate the effects of solution acidity and adsorbent mass on the adsorption of Co(II) ions. The interfering effects of salinity and of the presence of surfactants, an organic dye, and Pb(II) ions were also studied to resemble the application of these adsorbents in real wastewater. Equilibrium results indicate that Co(II) uptake is maximized at pH values higher than 5, with adsorbent doses of 200 mg, 200 mg, and 120 mg for AB, PM, and ABPM, respectively. Co(II) adsorption followed the trend AB > ABPM > PM, with adsorption percentages of 77%, 71%, and 64%, respectively. Salts had a strong negative effect on the adsorption due to the increase in ionic strength and the competition for adsorption sites. The presence of Pb(II) ions, surfactant, and the dye BY57 had a slightly negative effect on the adsorption, apparently due to their interaction with different adsorption sites that do not interfere with the removal of Co(II). A polar-electrostatic adsorption mechanism is proposed based on the experimental results. Scanning electron microscopy indicates that the adsorbents have appropriate morphological and textural properties and that ABPM encapsulated most of the PM inside the hydrogel beads. These experimental results reveal that AB, PM, and ABPM are promising adsorbents for the elimination of Co(II) ions from aqueous solutions under different experimental conditions. These biopolymers are proposed as eco-friendly alternatives for the removal of heavy metal ions at lower costs than the conventional techniques. Keywords: adsorption, Co(II) ions, alginate hydrogel beads, spent peppermint leaf, pH
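The adsorption percentages reported above follow from the standard batch-adsorption relations. The sketch below uses hypothetical concentrations and volumes (chosen to reproduce the 77% figure for AB); the actual experimental values are not stated in the abstract.

```python
def removal_percent(c0, ce):
    """Percent removal from initial (c0) and equilibrium (ce)
    concentrations, both in mg/L."""
    return (c0 - ce) / c0 * 100.0

def uptake_mg_per_g(c0, ce, volume_l, mass_g):
    """Adsorption capacity q = (c0 - ce) * V / m, in mg of metal
    per g of adsorbent."""
    return (c0 - ce) * volume_l / mass_g

# Hypothetical batch: 100 mg/L Co(II), 50 mL solution, 200 mg of AB
pct = removal_percent(100.0, 23.0)          # mirrors the 77% AB result
q = uptake_mg_per_g(100.0, 23.0, 0.05, 0.2)
```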
Procedia PDF Downloads 128
2167 Modulation of Tamoxifen-Induced Cytotoxicity in Breast Cancer Cell Lines by 3-Bromopyruvate
Authors: Yasmin M. Attia, Hanan S. El-Abhar, Mahmoud M. Al Marzabani, Samia A. Shouman
Abstract:
Background: Tamoxifen (TAM) is the most commonly used hormone therapy for the treatment of early and metastatic breast cancer. Although it significantly decreases the tumor recurrence rate and provides an overall benefit, as many as 20–30% of women still relapse during or after long-term therapy. 3-Bromopyruvate (3-BP) is a promising agent with impressive antitumor effects in several animal tumor models and cell lines. Aim: This study was designed to investigate the combined effect of TAM and 3-BP in breast cancer cells and to explore their molecular interaction via assessment of apoptotic, angiogenic, and metastatic markers. Methods: An in vitro cytotoxicity study was carried out for both compounds to determine the combination regimen producing a synergistic effect, and the mechanistic pathways were studied using RT-PCR and western blotting techniques. Moreover, the antitumor and anti-angiogenic potentials were assessed in mice bearing a solid Ehrlich carcinoma (SEC). Results: The combined treatment significantly increased the expression and protein levels of caspases 7, 9, and 3 and decreased those of the angiogenic markers VEGF, HIF-1α, and HK2 compared to cells treated with either drug individually. However, there were no significant changes in MMP-2 and MMP-9 protein levels. Interestingly, the in vivo results supported the in vitro findings: there was a decrease in tumor volume and in VEGF, assessed by immunohistochemistry, in the combination-treated groups compared to those treated with either TAM or 3-BP alone. Conclusion: 3-BP synergizes with the cytotoxic effect of TAM by increasing apoptosis and decreasing angiogenesis, which makes this combination a promising regimen for clinical application. Keywords: tamoxifen, 3-bromopyruvate, breast cancer, cytotoxicity, angiogenesis
Procedia PDF Downloads 227
2166 Fuzzy Logic for Control and Automatic Operation of Natural Ventilation in Buildings
Authors: Ekpeti Bukola Grace, Mahmoudi Sabar Esmail, Chaer Issa
Abstract:
Global energy consumption has been increasing steadily over the last half-century, and this trend is projected to continue. As energy demand rises in many countries throughout the world due to population growth, natural ventilation in buildings has been identified as a viable option for lowering demand, saving costs, and reducing CO2 emissions. However, natural ventilation is driven by forces that are generally unpredictable in nature; the resulting airflow must therefore be managed to maintain pleasant indoor conditions, making it a complex system that necessitates specific control approaches. Among intelligent systems, fuzzy logic is one of the best ways to bridge this gap, as its control dynamics relate closely to human reasoning and linguistic description. This article reviews the existing literature and presents practical solutions by applying fuzzy logic control with optimized techniques, selected input parameters, and expert rules to design a more effective control system. The controller uses indoor temperature, outdoor temperature, carbon dioxide level, wind velocity, and rain as input variables, while the output variable remains the degree of window opening. This is achieved using the Fuzzy Logic Toolbox in MATLAB, with simulations run in Simulink to validate the effectiveness of the proposed system. A comparative analysis via simulation was carried out, and the data obtained showed an improvement in control actions and energy savings. Keywords: fuzzy logic, intelligent control systems, natural ventilation, optimization
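A minimal, single-input version of the Mamdani-style inference such a controller performs can be sketched as below. The paper's controller uses five inputs and an expert rule base in MATLAB/Simulink; the membership breakpoints and rules here are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def window_opening(indoor_temp):
    """Toy Mamdani controller: IF temp is cold THEN window closed;
    IF temp is hot THEN window open. Max-min inference, centroid
    defuzzification over a 0-100% opening universe."""
    cold = tri(indoor_temp, -10.0, 5.0, 22.0)   # assumed breakpoints
    hot = tri(indoor_temp, 18.0, 35.0, 50.0)
    num = den = 0.0
    for y in range(0, 101):                      # opening universe (%)
        closed_mu = tri(y, -60.0, 0.0, 60.0)
        open_mu = tri(y, 40.0, 100.0, 160.0)
        mu = max(min(cold, closed_mu), min(hot, open_mu))
        num += y * mu
        den += mu
    return num / den if den else 50.0            # centroid

cool_day = window_opening(12.0)   # small opening expected
hot_day = window_opening(32.0)    # large opening expected
```

The full controller would add rules over CO2 level, wind, and rain in the same fashion.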
Procedia PDF Downloads 130
2165 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents a validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model's performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The model showed maximum average percent errors of approximately 53% and 37% for wind incident from the north and northwest, respectively; good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. The CFD results are in very good agreement with the field measurements. Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow
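The three statistical measures named above have commonly used definitions in the dispersion-modeling literature; a sketch of how they might be computed from paired observed/predicted concentrations follows. The sample values are hypothetical, not the study's data.

```python
import math

def fractional_bias(obs, pred):
    """FB = 2*(mean_obs - mean_pred) / (mean_obs + mean_pred); 0 is ideal."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def geometric_mean_bias(obs, pred):
    """MG = exp(mean(ln obs) - mean(ln pred)); 1 is ideal."""
    return math.exp(sum(math.log(o) for o in obs) / len(obs)
                    - sum(math.log(p) for p in pred) / len(pred))

def nmse(obs, pred):
    """NMSE = mean((obs - pred)^2) / (mean_obs * mean_pred); 0 is ideal."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return mse / (mo * mp)

# Hypothetical concentration pairs for illustration
obs = [1.2, 0.8, 2.5, 1.9]
pred = [1.0, 0.9, 2.2, 2.1]
fb, mg, q = fractional_bias(obs, pred), geometric_mean_bias(obs, pred), nmse(obs, pred)
```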
Procedia PDF Downloads 136
2164 Analysis of Kilistra (Gokyurt) Settlement within the Context of Traditional Residential Architecture
Authors: Esra Yaldız, Tugba Bulbul Bahtiyar, Dicle Aydın
Abstract:
Humans meet their need for shelter through housing, which they shape in line with their habits and necessities. In housing culture, the traditional dwelling plays an important role as a social and cultural transmitter. It provides concrete data by being planned in parallel with users' lifestyles and habits, having its own dynamics and components, and being designed in harmony with nature, the environment, and the context in which it exists. Traditional residential textures create a healthy and comfortable living environment through adaptation to natural conditions, topography, climate, and context; the use of construction materials found nearby and of traditional techniques and forms; and the natural insulation provided by the materials used. One example of a traditional settlement in Anatolia is Kilistra (Gökyurt) in Konya province. Having been among the important centers of Christianity in the past, with its distinctive architecture, culture, natural features, and geographical characteristics (climate, geological structure, materials), Kilistra can also be identified as a traditional settlement comprising family, religious, and economic structures as well as cultural interaction. The foundation of this study is the traditional residential texture of Kilistra with its unique features. The objective is to assess the conformity of this texture with the present topography, climatic data, and geographical values in terms of human-scale construction, use of green space, indigenous construction materials, construction form, building envelope, and spatial organization of the housing. Keywords: traditional residential architecture, Kilistra, Anatolia, Konya
Procedia PDF Downloads 412
2163 Voting Representation in Social Networks Using Rough Set Techniques
Authors: Yasser F. Hassan
Abstract:
Social networking involves the use of an online platform or website that enables people to communicate, usually for a social purpose, through a variety of services, most of which are web-based and offer opportunities for people to interact over the internet, e.g., via e-mail and instant messaging. This work analyzes the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model in which elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person, one decision, one vote), a new voting system is designed in which agents with opposing preferences are endowed with a given number of votes to distribute freely among some issues. The paper uses ideas from machine learning, artificial intelligence, and soft computing to provide a model of the development of the voting system's response in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning, and representation development. The main value of the model is that it illustrates how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences, and we are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices
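The rough set machinery the title refers to rests on lower and upper approximations of a target set under an indiscernibility relation. The sketch below applies those definitions to a hypothetical voter table; the attribute names and values are invented for the example.

```python
def partition(universe, attrs):
    """Group objects into equivalence classes by identical attribute values."""
    classes = {}
    for obj in universe:
        classes.setdefault(tuple(attrs[obj]), set()).add(obj)
    return list(classes.values())

def approximations(universe, attrs, target):
    """Rough-set lower/upper approximations of `target` under the
    indiscernibility relation induced by `attrs`."""
    lower, upper = set(), set()
    for eq in partition(universe, attrs):
        if eq <= target:      # class entirely inside target -> certain
            lower |= eq
        if eq & target:       # class overlaps target -> possible
            upper |= eq
    return lower, upper

# Hypothetical voter table: attribute tuple = (age_band, region)
attrs = {"v1": ("young", "north"), "v2": ("young", "north"),
         "v3": ("old", "south"), "v4": ("old", "north")}
voted_yes = {"v1", "v3"}
lower, upper = approximations(attrs.keys(), attrs, voted_yes)
```

Voters v1 and v2 are indiscernible on these attributes, so v1's "yes" vote lands only in the upper approximation: the boundary region captures voters whose behavior the attributes cannot resolve.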
Procedia PDF Downloads 394
2162 Developments in Corporate Governance and Economic Growth in Sub-Saharan Africa
Authors: Martha Matashu
Abstract:
This study examined corporate governance and economic growth trends in Sub-Saharan African (SSA) countries. The need for corporate governance arises from the fact that the day-to-day running of a business is done by managers who, in accordance with neoclassical theory and agency theory, have inherent tendencies to use the resources of the company to their own advantage. This prevails against a background in which endogenous growth theory holds that economic growth is an outcome of the overall performance of all companies within an economy, suggesting that firm-level corporate governance determines economic growth through its impact on overall performance. Nevertheless, the literature suggests that efforts to promote corporate governance in countries across SSA since the 1980s have not yet yielded the desired outcomes. Board responsibilities, shareholder rights, disclosure and transparency, protection of minority shareholders, and liability of directors were used as proxies for corporate governance, as these mechanisms are believed to enhance company performance through their effect on accountability and transparency. Using panel data techniques, corporate governance and economic growth data for 29 SSA countries over the period 2008 to 2019 were analysed. The findings revealed a declining economic growth trend despite increases in corporate governance aspects such as director liability, shareholder rights, and protection of minority shareholders in SSA countries. These findings contradict widely held theoretical principles of economic growth and corporate governance. The study reached the conclusion that a nonlinear relationship exists between corporate governance and economic growth within the selected SSA countries during the period under investigation.
This study thus recommends that measures be taken to create conditions for corporate governance that would make significant positive contributions to economic growth in the region. Keywords: corporate governance, economic growth, sub-Saharan Africa, agency theory, endogenous theory
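The panel data techniques mentioned above are commonly implemented as a fixed-effects (within) estimator, which removes country-level effects by demeaning before regression. The abstract does not disclose the exact estimator used, so the sketch below, on synthetic data with a known coefficient, is illustrative only.

```python
def within_estimator(panel):
    """Fixed-effects (within) estimate of beta in y_it = a_i + beta * x_it.

    `panel` maps entity -> list of (x, y) observations. Entity means are
    removed, which cancels the fixed effects a_i, then beta is estimated
    by OLS on the demeaned data."""
    xs, ys = [], []
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        xs += [x - mx for x, _ in obs]
        ys += [y - my for _, y in obs]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic noiseless panel: y = 2*x + country fixed effect (+5 for A, -3 for B)
panel = {
    "A": [(1.0, 7.0), (2.0, 9.0), (3.0, 11.0)],
    "B": [(1.0, -1.0), (4.0, 5.0), (2.0, 1.0)],
}
beta = within_estimator(panel)
```

Because the data are noiseless, the estimator recovers the true coefficient of 2 exactly; real governance-growth panels would of course include noise, multiple regressors, and time effects.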
Procedia PDF Downloads 149
2161 Geophysical Methods of Mapping Groundwater Aquifer System: Perspectives and Inferences From Lisana Area, Western Margin of the Central Main Ethiopian Rift
Authors: Esubalew Yehualaw Melaku, Tigistu Haile Eritro
Abstract:
In this study, two basic geophysical methods were applied to map the groundwater aquifer system in the Lisana area along the Guder River, northeast of Hosanna town, near the western margin of the Central Main Ethiopian Rift. The main target of the study is to map the potential aquifer zone and investigate the groundwater potential for current and future development of the resource in the Gode area. The geophysical methods employed include vertical electrical sounding (VES) and magnetic survey techniques. Electrical sounding was used to examine and map the depth to the potential aquifer zone and its distribution over the area, while the magnetic survey was used to delineate contacts between lithologic units and geological structures. The 2D magnetic modeling and the geoelectric sections were used to identify weak zones, which control the groundwater flow and storage system. The survey comprises twelve VES readings collected using a Schlumberger array along six profile lines and more than four hundred (400) magnetic readings at about 10 m station intervals along four profiles and 20 m intervals along three random profiles. The results revealed that the potential aquifer in the area lies at depths ranging from 45 m to 92 m. This corresponds to the highly weathered/fractured ignimbrite and pumice layer with sandy soil, which is the main water-bearing horizon. Overall, the neighborhoods of four VES points, VES-2, VES-3, VES-10, and VES-11, show good water-bearing zones in the study area. Keywords: vertical electrical sounding, magnetic survey, aquifer, groundwater potential
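Each VES reading in a Schlumberger sounding is reduced to an apparent resistivity through the array's geometric factor. The sketch below shows that standard reduction; the electrode spacings and instrument readings are hypothetical, not values from this survey.

```python
import math

def schlumberger_rho_a(ab_half, mn, delta_v, current):
    """Apparent resistivity (ohm*m) for a Schlumberger array.

    ab_half = AB/2, the half current-electrode spacing (m);
    mn = potential-electrode spacing MN (m).
    Geometric factor K = pi * ((AB/2)^2 - (MN/2)^2) / MN,
    and rho_a = K * (delta_v / current)."""
    k = math.pi * (ab_half ** 2 - (mn / 2.0) ** 2) / mn
    return k * delta_v / current

# Hypothetical reading: AB/2 = 10 m, MN = 2 m, 50 mV at 100 mA
rho = schlumberger_rho_a(ab_half=10.0, mn=2.0, delta_v=0.05, current=0.1)
```

Plotting rho_a against AB/2 on log-log axes gives the sounding curve that is inverted into the layered geoelectric sections described above.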
Procedia PDF Downloads 79
2160 A Review of Research on Pre-training Technology for Natural Language Processing
Authors: Moquan Gong
Abstract:
In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field long relied on word-vector methods such as Word2Vec to encode text; these can be regarded as static pre-training techniques. However, such context-free text representations bring very limited improvement to downstream natural language processing tasks and cannot resolve word polysemy. ELMo proposed a context-sensitive text representation method that can effectively handle polysemy. Since then, pre-trained language models such as GPT and BERT have been proposed one after another. Among them, the BERT model significantly improved performance on many typical downstream tasks, greatly promoting technological development in the field and ushering natural language processing into the era of dynamic pre-training. A large number of pre-trained language models based on BERT and XLNet have since continued to emerge, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history. It then introduces in detail the classic pre-training techniques in natural language processing, including early static pre-training and classic dynamic pre-training, and briefly surveys a series of follow-up techniques, including improved models based on BERT and XLNet. On this basis, it analyzes the problems faced by current pre-training research and, finally, looks ahead to future development trends of pre-training technology. Keywords: natural language processing, pre-training, language model, word vectors
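As a toy illustration of the static pre-training family discussed above, the sketch below derives fixed, context-free word vectors from a co-occurrence matrix via PPMI and truncated SVD. Word2Vec itself trains by stochastic gradient descent, so this is a simpler stand-in for the same idea, not its algorithm; the corpus is invented.

```python
import numpy as np

def static_embeddings(sentences, dim=2, window=2):
    """Static word vectors: count co-occurrences in a symmetric window,
    reweight with positive PMI, then factor with truncated SVD.
    Each word gets ONE vector regardless of context (hence 'static')."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    cooc = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    cooc[idx[w], idx[s[j]]] += 1.0
    total = cooc.sum()
    pw = cooc.sum(axis=1, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((cooc / total) / (pw * pw.T))
    ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)
    u, s, _ = np.linalg.svd(ppmi)
    return {w: (u[:, :dim] * s[:dim])[idx[w]] for w in vocab}

corpus = [["cat", "sat", "on", "the", "mat"],
          ["dog", "sat", "on", "the", "mat"],
          ["stocks", "fell", "on", "the", "news"]]
vecs = static_embeddings(corpus)
```

The polysemy limitation is visible by construction: a word like "bank" would receive a single vector averaged over all its senses, which is exactly what contextual models such as ELMo and BERT fix.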
Procedia PDF Downloads 57
2159 Artificial Intelligence Based Predictive Models for Short Term Global Horizontal Irradiation Prediction
Authors: Kudzanayi Chiteka, Wellington Makondo
Abstract:
The whole world is on a drive to go green owing to the negative effects of burning fossil fuels; therefore, there is an immediate need to identify and utilise alternative renewable energy sources. Among these, solar energy is one of the most dominant in Zimbabwe. Solar power plants used to generate electricity are entirely dependent on solar radiation. For planning purposes, solar radiation values should be known in advance so that necessary arrangements can be made to minimise the negative effects of the absence of solar radiation due to cloud cover and other naturally occurring phenomena. This research focused on the prediction of global horizontal irradiation (GHI) values for the sixth day given values for the past five days. Artificial intelligence techniques were used: three models were developed, based on support vector machines, radial basis function networks, and a feed-forward back-propagation artificial neural network. Results revealed that the support vector machine gives the best results, with a mean absolute percentage error (MAPE) of 2%, a mean absolute error (MAE) of 0.05 kWh/m²/day, a root mean square error (RMSE) of 0.15 kWh/m²/day, and a coefficient of determination of 0.990. The other predictive models had MAPEs of 4.5% and 6% for the radial basis function and feed-forward back-propagation networks, respectively, with coefficients of determination of 0.975 and 0.970. It was found that prediction of GHI values for future days is possible using artificial intelligence-based predictive models. Keywords: solar energy, global horizontal irradiation, artificial intelligence, predictive models
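The error metrics used to compare the three models above (MAPE, MAE, RMSE, R²) have standard definitions; the sketch below computes them on hypothetical GHI values, not the paper's data.

```python
import math

def mape(actual, pred):
    """Mean absolute percentage error (%)."""
    return 100.0 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, pred))

def mae(actual, pred):
    """Mean absolute error, same units as the data (kWh/m^2/day here)."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean square error, same units as the data."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def r_squared(actual, pred):
    """Coefficient of determination; 1.0 means a perfect fit."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical daily GHI values in kWh/m^2/day
actual = [5.1, 6.3, 4.8, 5.9, 6.1]
pred = [5.0, 6.4, 4.9, 5.7, 6.2]
scores = (mape(actual, pred), mae(actual, pred), rmse(actual, pred), r_squared(actual, pred))
```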
Procedia PDF Downloads 274
2158 A Tool for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation of an institutional risk profile for endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to set up risk factors with just the values that matter most to a particular organisation. The risk profile then employs fuzzy models and associated configurations for the file-format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level of file formats. Our goal is to make use of a domain expert knowledge base, aggregated from a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation and analysis of risk factors along a required dimension. The proposed methods improve the visibility of risk factor information and the quality of the digital preservation process. The presented approach is meant to facilitate decision-making for the preservation of digital content in libraries and archives, using domain expert knowledge and file-format metadata automatically aggregated from linked open data sources. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector, with the goal of visualising particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section. Keywords: digital information management, file format, endangerment analysis, fuzzy models
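One way to read "aggregated information presented as a multidimensional vector" is a weighted aggregation of normalised risk-factor dimensions into a single endangerment level. The sketch below is an assumption-laden illustration: the dimension names, weights, and thresholds are invented for the example and are not the paper's values.

```python
def endangerment_level(risk_vector, weights):
    """Collapse a multidimensional risk-factor vector (each dimension
    normalised to [0, 1]) into a weighted score and a coarse level.
    Weights and thresholds are hypothetical."""
    score = sum(risk_vector[k] * weights[k] for k in weights) / sum(weights.values())
    if score < 0.33:
        return score, "low"
    if score < 0.66:
        return score, "medium"
    return score, "high"

# Hypothetical risk factors for a file format, per institutional profile
risk_vector = {"software_support": 0.8, "rendering_issues": 0.6,
               "community_adoption": 0.3, "documentation": 0.2}
weights = {"software_support": 3.0, "rendering_issues": 2.0,
           "community_adoption": 1.0, "documentation": 1.0}
score, level = endangerment_level(risk_vector, weights)
```

An institution would tune the weights to reflect which dimensions its experts consider most important, which is exactly the per-organisation customisation the profile is meant to capture.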
Procedia PDF Downloads 404