Search results for: focus group research
1812 The Applicability of General Catholic Canon Law during the Ongoing Migration Crisis in Hungary
Authors: Lorand Ujhazi
Abstract:
The vast majority of existing canonical studies about migration focus on examining the general pastoral and legal regulations of the Catholic Church. The weakness of this approach is that it ignores a number of important factors, such as the financial, legal and personal circumstances of a particular church or the canonical position of certain organizations which actually look after the immigrants. This paper is a case study, which analyses the current and historical migration-related policies and activities of the Catholic Church in Hungary. To achieve this goal the study uses canon law, historical publications, various instructions and communications issued by church superiors, Hungarian and foreign media reports and the relevant Hungarian legislation. The paper first examines how the Hungarian Catholic Church assisted migrants such as Armenians fleeing from the Ottoman Empire, Poles escaping during the Second World War, East German and Romanian citizens in the 1980s and refugees from the former Yugoslavia in the 1990s. These events underline the importance of past historical experience in the development of the contemporary pastoral and humanitarian policy of the Catholic Church in Hungary. Then the paper turns to the events of the ongoing crisis by describing the unique challenges faced by churches in transit countries like Hungary. The research then contrasts these findings with the typical responsibilities of churches in countries which are popular destinations for immigrants. The next part of the case study focuses on the changes to the pre-crisis legal and canonical framework which influenced the actions of hierarchical and charity organizations in Hungary. Afterwards, the paper illustrates the dangers of operating in an unclear legal environment, where some charitable activities of the church, like a fundraising campaign, may be interpreted as a national security risk by state authorities. Then the paper presents the reactions of Hungarian academics to the current migration crisis, and finally it offers some proposals on how to improve the parts of Canon Law which govern immigration. The conclusion of the paper is that during the formulation of the central refugee policy of the Catholic Church, decision makers must take into consideration the peculiar circumstances of its particular churches. This approach may prevent disharmony between the existing central regulations, the policy of the Vatican and the operations of the local church organizations.
Keywords: canon law, Catholic Church, civil law, Hungary, immigration, national security
Procedia PDF Downloads 309
1811 Equivalences and Contrasts in the Morphological Formation of Echo Words in Two Indo-Aryan Languages: Bengali and Odia
Authors: Subhanan Mandal, Bidisha Hore
Abstract:
The linguistic process whereby all or part of the base word is repeated, with or without internal change, before or after the base itself is regarded as reduplication. The reduplicated morphological construction carries with it a new grammatical category and meaning. Reduplication is a very frequent and abundant phenomenon in the eastern Indian languages from the states of West Bengal and Odisha, i.e. Bengali and Odia respectively. Bengali, an Indo-Aryan language and a part of the Indo-European language family, is one of the most widely spoken languages in India and is the national language of Bangladesh. Despite this classification, Bengali has certain influences in terms of vocabulary and grammar due to its geographical proximity to Tibeto-Burman and Austro-Asiatic language-speaking communities. Bengali and Odia once belonged to a single linguistic branch. But with time and gradual linguistic change due to various factors, Odia was the first to break away and develop as a separate, distinct language. However, fewer contrasts and more similarities still exist between these languages linguistically, leaving aside the script. This paper deals with the procedure of echo word formation in Bengali and Odia. The morphological research of the two languages concerning reduplication reveals several linguistic processes. The findings are based on information elicited from native speakers and on the analysis of echo words found in discourse and conversational patterns. For the purpose of partial reduplication analysis, prefixed-class and suffixed-class word formations are taken into consideration, which show specific rule-based changes. For example, in the suffixed-class categorization, both consonant and vowel alterations are found, following the rules: i) CVx → tVx, ii) CVCV → CVCi. Further classifications were also found in sentential studies of both languages, which revealed complete-reduplication complexities in forming echo words where the head word loses its original meaning. Complexities based on onomatopoetic/phonetic imitation of natural phenomena, not governed by any rule-based pattern, were also found. Taking these aspects into consideration, which are very prevalent in both languages, inferences are drawn from the study which bring out many similarities between the two languages in this area in spite of their having diverged from each other long ago.
Keywords: consonant alteration, onomatopoetic, partial reduplication and complete reduplication, reduplication, vowel alteration
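As an illustration of the suffixed-class rule CVx → tVx cited in this abstract, a minimal sketch follows; the transliterated Bengali forms and the fixed /t/ onset are illustrative assumptions, not the authors' elicited data.

```python
# A minimal sketch of the suffixed-class partial reduplication rule
# CVx -> tVx described in the abstract: the echo word copies the base
# but replaces its initial consonant (cluster) with /t/. Transliterated
# example forms are illustrative, not elicited data.
import re

def echo_word(base):
    """Form a Bengali/Odia-style echo word by swapping the onset for 't'."""
    # The onset is everything before the first vowel (may be empty)
    onset = re.match(r"^[^aeiou]*", base).group(0)
    return base + "-" + "t" + base[len(onset):]

print(echo_word("jol"))  # jol-tol ('water and such')
print(echo_word("boi"))  # boi-toi ('books and the like')
```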
Procedia PDF Downloads 243
1810 The Removal of Common Used Pesticides from Wastewater Using Golden Activated Charcoal
Authors: Saad Mohamed Elsaid Onaizah
Abstract:
One of the reasons for the intensive use of pesticides is to protect agricultural crops and orchards from pests or agricultural worms. The period of time that pesticides stay inside the soil is estimated at about 2 to 12 weeks. Perhaps the most important cause of groundwater pollution is the easy leakage of these harmful pesticides from the soil into the aquifers. This research aims to find the best ways to use activated charcoal treated with gold nitrate solution for the purpose of removing deadly pesticides from aqueous solution by the adsorption phenomenon. The most used pesticides in Egypt were selected: Malathion, Methomyl, Abamectin, and Thiamethoxam. Activated charcoal doped with gold ions was prepared by applying chemical and thermal treatments to activated charcoal using gold nitrate solution. Adsorption of the studied pesticides onto activated carbon/Au proceeded mainly by chemical adsorption, forming complexes with the gold metal immobilised on the activated carbon surfaces. The gold atom was also considered a catalyst for cracking the pesticide molecule. Gold-activated charcoal is a low-cost material due to the use of very low concentrations of gold nitrate solution. A great ability of the activated charcoal to remove the selected pesticides was noticed, owing to the positive charge of the gold ion in addition to other active groups such as oxygen functional groups and lignocellulose. The presence of pores of different sizes on the surface of the activated charcoal is the driving force for the good adsorption efficiency in removing the pesticides under study. The surface area of the prepared char as well as the active groups were determined using infrared spectroscopy and scanning electron microscopy. Some factors affecting the capacity of the activated charcoal were varied in order to reach the highest adsorption capacity, such as the weight of the charcoal, the concentration of the pesticide solution, the contact time, and the pH. Experiments showed that the maximum adsorption revealed by the batch adsorption study for the selected insecticides occurred at a contact time of 80 minutes and pH 7.70. These promising results were confirmed and, by establishing the practical application of the developed system, the effects of various operating factors were examined through equilibrium, kinetic and thermodynamic studies; the Langmuir model described the effectiveness of the adsorbent, with adsorption capacities higher than most other adsorbents.
Keywords: waste water, pesticides pollution, adsorption, activated carbon
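For reference, the Langmuir isotherm invoked at the end of this abstract has the standard form below (a textbook expression; the symbols are not parameters reported by the authors):

```latex
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
```

where $q_e$ is the amount adsorbed at equilibrium, $C_e$ the equilibrium pesticide concentration, $q_{\max}$ the monolayer capacity, and $K_L$ the Langmuir constant.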
Procedia PDF Downloads 83
1809 NFTs, between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case
Authors: Andrea Ando
Abstract:
The development of the blockchain has been a major innovation in the technology field. It opened the door to the creation of novel cyberassets and currencies. In more recent times, non-fungible tokens have come to the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest development in the world of distributed ledger technologies and cryptocurrencies. It seems more and more likely that NFTs will play an important role in our online interactions. They are indeed increasingly present in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but it is very likely that there will be a turning point in the world of digital assets. There are some examples of their peculiar behaviour and effect in our contemporary tech market: the former CEO of the famous social media site Twitter sold an NFT of his first tweet for around £2.1 million ($2.5 million), and the National Basketball Association has created a platform to sell unique moments and memorabilia from the history of basketball through non-fungible token technology. Their growth, as imaginable, paved the way for civil disputes, mostly regarding their position under the current intellectual property law in each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered property. The judge indeed concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court does not provide enough answers to the dilemma of whether minting an NFT violates intellectual property and/or property rights. Indeed, if, on the one hand, the technology follows the framework set by the case law (e.g., the four criteria of Ainsworth), on the other hand, the question that arises is what is effectively protected and owned by both the creator and the purchaser. The question that follows is whether a person has ownership of the cryptographic code, which is indeed definable, identifiable, intangible, distinct, and has a degree of permanence, or of what is attached to this blockchain, which may even be a physical object or piece of art. Indeed, a simple code would have no financial importance if it were not attached to something that is widely recognised as valuable. This was demonstrated first through an analysis of the expectations of intellectual property law. Then, after having laid the foundation, the paper examined the OpenSea case, and finally, it analysed whether those expectations were met.
Keywords: technology, technology law, digital law, cryptoassets, NFTs, NFT, property law, intellectual property law, copyright law
Procedia PDF Downloads 91
1808 Laser Powder Bed Fusion Awareness for Engineering Students in France and Qatar
Authors: Hiba Naccache, Rima Hleiss
Abstract:
Additive manufacturing (AM), or 3D printing, is one of the pillars of Industry 4.0. Compared to traditional manufacturing, AM provides a prototype before production in order to optimize the design and avoid excess stock, and it uses strictly the necessary material, which can be recyclable, to the benefit of local production, saving money, time and resources. Different types of AM exist, and it has a broad range of applications across several industries like aerospace, automotive, medicine, education and more. Laser Powder Bed Fusion (LPBF) is a metal AM technique that uses a laser to melt metal powder, layer by layer, to build a three-dimensional (3D) object. In Industry 4.0, and aligned with Goals 9 (Industry, Innovation and Infrastructure) and 12 (Responsible Production and Consumption) of the Sustainable Development Goals of the UN 2030 Agenda, AM manufacturers are committed to minimizing the environmental impact by being sustainable in every production. LPBF has several environmental advantages, like reduced waste production, lower energy consumption, and greater flexibility in creating components with lightweight and complex geometries. However, LPBF also has environmental drawbacks, like energy consumption, gas consumption and emissions. It is critical to recognize the environmental impacts of LPBF in order to mitigate them. To increase awareness and promote sustainable practices regarding LPBF, the researchers use the Elaboration Likelihood Model (ELM) theory, whereby people from multiple universities in France and Qatar process information in two ways: peripherally and centrally. Peripheral campaigns use superficial cues to get attention, and central campaigns provide clear and concise information. The authors created a seminar including a video showing LPBF production and a website with educational resources. The data were collected using a questionnaire testing public attitudes and awareness before and after the seminar. The results reflected a great shift in awareness of LPBF and its impact on the environment. With no similar research present, to the best of our knowledge, this study will add to the literature on the sustainability of the LPBF production technique.
Keywords: additive manufacturing, laser powder bed fusion, elaboration likelihood model theory, sustainable development goals, education-awareness, France, Qatar, specific energy consumption, environmental impact, lightweight components
Procedia PDF Downloads 91
1807 Preserving Wetlands: Legal and Ecological Challenges in the Face of Degradation: The Case Study of Miankaleh, Iran
Authors: Setareh Orak
Abstract:
Wetlands are essential guardians of global ecosystems, yet they remain vulnerable to increasing human interference and environmental stress. The Miankaleh wetland in northern Iran, designated as a Ramsar Convention site, represents a critical habitat known for its rich biodiversity and essential ecological functions. Despite the existence of national and international environmental laws aimed at preserving such critical ecosystems, the regulatory frameworks in place often fall short in terms of enforcement, monitoring, and overall effectiveness. Unfortunately, this wetland is undergoing severe degradation due to overexploitation, industrial contamination, unsustainable tourism, and land-use alterations. This study aims to assess the strengths and limitations of these regulations and examine their practical impacts on Miankaleh’s ecological health. Adopting a multi-method research approach, this study relies on a combination of case study analysis, legal and literature reviews, environmental data examination, stakeholder interviews, and comparative assessments. Through these methodologies, we scrutinize current national policies, international conventions, and their enforcement mechanisms, revealing the primary areas where they fail to protect Miankaleh effectively. The analysis is supported by two satellite maps linked to our tables, offering detailed visual representations of changes in land use, vegetation, and pollution sources over recent years. By connecting these visual data with quantitative measures, the study provides a comprehensive perspective on how human activities and regulatory shortcomings are contributing to environmental degradation. In conclusion, this study’s insights into the limitations of current environmental legislation and its recommendations for enhancing both policy and public engagement underscore the urgent need for integrated, multi-level efforts in conserving the Miankaleh wetland. Through strengthened legal frameworks, better enforcement, increased public awareness, and international cooperation, the hope is to establish a model of conservation that not only preserves Miankaleh but also serves as a template for protecting similar ecologically sensitive areas worldwide.
Keywords: wetlands, tourism, industrial pollution, land use changes, Ramsar convention
Procedia PDF Downloads 15
1806 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English and to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, old English, parallel corpus
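A minimal sketch of the dictionary-based lookup strategy this abstract attributes to Norna follows, matching textual forms against stems and inflectional endings; the tiny lexicon and ending list are invented for illustration and are not Nerthus data.

```python
# A minimal sketch of dictionary-based lemmatisation as described for
# Norna: strip candidate inflectional endings and look the stem up in a
# lemma dictionary. The mini-lexicon and ending list are illustrative
# assumptions, not the Nerthus database.
LEMMA_BY_STEM = {"cyning": "cyning", "stan": "stan", "luf": "lufian"}
ENDINGS = ["ode", "as", "es", "um", "e", "a", ""]  # try longest first

def lemmatise(form):
    """Return the lemma for an attested form, or None if no stem matches."""
    for ending in sorted(ENDINGS, key=len, reverse=True):
        if form.endswith(ending):
            stem = form[: len(form) - len(ending)] if ending else form
            if stem in LEMMA_BY_STEM:
                return LEMMA_BY_STEM[stem]
    return None

print(lemmatise("cyningas"))  # cyning ('kings' -> 'king')
print(lemmatise("lufode"))    # lufian ('loved' -> 'to love')
```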
Procedia PDF Downloads 214
1805 Anti-Leishmanial Compounds from the Seaweed Padina pavonica
Authors: Nahal Najafi, Afsaneh Yegdaneh, Sedigheh Saberi
Abstract:
Introduction: Leishmaniasis poses a substantial global risk, affecting millions and resulting in thousands of cases each year in endemic regions. Challenges in current leishmaniasis treatments include drug resistance, high toxicity, and pancreatitis. Marine compounds, particularly those of brown algae, serve as a valuable source of inspiration for discovering treatments against Leishmania. Materials and methods: Padina pavonica was collected from the Persian Gulf. The seaweeds were dried and extracted with methanol:ethyl acetate (1:1). The extract was partitioned into hexane (Hex), dichloromethane (DCM), butanol, and water by the Kupchan partitioning method. The Hex partition was fractionated by silica gel column chromatography into 10 fractions (Fr. 1-10). Fr. 6 was further separated by normal-phase HPLC to yield compounds 1-3. The structures of the isolated compounds were elucidated by NMR, mass spectrometry, and other spectroscopic methods. The Hex and DCM partitions, Fr. 6, and compounds 1-3 were tested for leishmanicidal activity. RAW cell lines were cultured in enriched RPMI (10% FBS, 1% pen-strep) in a 37°C, 5% CO2 incubator, while promastigote cells were initially cultured in NNN medium and subsequently transferred to the aforementioned medium. Cytotoxicity was assessed using MTT tests, anti-promastigote activity was evaluated through promastigote counting in a hemocytometer chamber, and the impact on amastigotes was determined by counting amastigotes within 100 macrophages. Results: NMR and mass spectrometry identified the isolated compounds as fucosterol and two sulfoquinovosyldiacylglycerols (SQDG). Among the samples tested, Fr. 6 exhibited the highest cytotoxicity (CC50=60.24), while compound 2 showed the lowest cytotoxicity (CC50=21984). Compound 1 and the dichloromethane fraction demonstrated the highest and lowest anti-promastigote activity (IC50=115.7 and IC50=16.42, respectively), and compound 1 and the hexane fraction exhibited the highest and lowest anti-amastigote activity (IC50=7.874 and IC50=40.18, respectively). Conclusion: All six samples, including the Hex and DCM partitions, Fr. 6, and compounds 1-3, demonstrate a noteworthy dependence on concentration and time, with a statistically significant P-value of ≤0.05. Considering the higher selectivity index of compound 2 compared to the others, it can be inferred that the presence of sulfur groups and unsaturated chains potentially contributes to these effects by inhibiting DNA polymerase, although this requires further research.
Keywords: Padina, leishmania, sulfoquinovosyldiacylglycerol, cytotoxicity
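The selectivity index weighed in the conclusion is conventionally defined as the ratio below (a standard definition, not a formula given by the authors):

```latex
SI = \frac{CC_{50}}{IC_{50}}
```

so a larger SI indicates activity against the parasite at concentrations well below those toxic to host cells.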
Procedia PDF Downloads 24
1804 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap
Authors: Nikolai N. Bogolubov, Andrey V. Soldatov
Abstract:
Terahertz radiation occupies a range of frequencies from 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known under the name of the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-power, easily controlled, continuously radiating terahertz sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest of suitable physical systems as the major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This assumption contradicts the conventional assumption, routinely made in quantum optics, that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible ways towards experimental observation and practical implementation of the predicted effect are discussed too.
Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot
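To make the key assumption concrete in standard two-level notation (a sketch using conventional symbols, not the authors' exact operator), the dipole moment operator in the energy eigenbasis reads:

```latex
\hat{d} = \begin{pmatrix} d_{11} & d_{12} \\ d_{12}^{*} & d_{22} \end{pmatrix}, \qquad d_{11} \neq d_{22},
```

whereas conventional quantum optics takes $d_{11} = d_{22} = 0$; it is the permanent, unequal diagonal elements that the abstract credits with the continuous low-frequency emission.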
Procedia PDF Downloads 272
1803 Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana
Authors: Jemima C. A. Sumboh
Abstract:
Given the urgency of, and consensus for, countries moving towards Universal Health Coverage (UHC), health financing systems need to be accurately and consistently monitored to provide valuable data to inform policy and practice. Most of the indicators for monitoring UHC, particularly catastrophe and impoverishment, are established based on the impact of out-of-pocket health payments (OOPHP) on households' living standards, collected through varied household surveys. These surveys, however, vary substantially in survey methods, such as the length of the recall period, the number of items included in the survey questionnaire, or the framing of questions, potentially influencing the measured level of OOPHP. Using different survey instruments can produce inaccurate, inconsistent, and misleading estimates of UHC, subsequently leading to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study intends to explore the potential implications of using surveys with varied levels of disaggregation of OOPHP data on estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measuring instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed by benchmarking the existing Classification of Individual Consumption by Purpose (COICOP) OOPHP questions in household surveys) and Version III (existing questions used to measure OOPHP in health surveys, integrated into household budget surveys; for this, the Demographic and Health Survey (DHS) was used). Versions I, II and III contained 11, 44, and 56 health items, respectively. The choice of recall periods, however, was held constant across versions. The sample sizes for Versions I, II and III were 930, 1032 and 1068 households, respectively. Financial risk protection will be measured based on the catastrophic and impoverishment methodologies using Stata 15 and ADePT software for each version. It is expected that findings from this study will present valuable contributions to the repository of knowledge on standardizing survey instruments to obtain estimates of financial risk protection that are valid and consistent.
Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage
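For context, the catastrophic-payment methodology cited here conventionally flags a household when its OOPHP exceeds a fixed share of its capacity to pay (a standard definition; the notation and threshold values are common conventions, not the study's choices):

```latex
\text{catastrophic}_h = \mathbf{1}\!\left[\frac{\mathrm{OOPHP}_h}{\mathrm{CTP}_h} > z\right]
```

where $\mathrm{CTP}_h$ is household capacity to pay (often total or non-food expenditure) and $z$ a threshold such as 10% or 40%.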
Procedia PDF Downloads 141
1802 Carotenoid Bioaccessibility: Effects of Food Matrix and Excipient Foods
Authors: Birgul Hizlar, Sibel Karakaya
Abstract:
Recently, increasing attention has been given to carotenoid bioaccessibility and bioavailability in the field of nutrition research. As a consequence of their lipophilic nature and their specific localization in plant tissues, carotenoid bioaccessibility and bioavailability are generally quite low in raw fruits and vegetables, since carotenoids need to be released from the cellular matrix and incorporated into the lipid fraction during digestion before being absorbed. Today's approach to improving bioaccessibility is to design the food matrix. Recently, the newest approach, excipient food, has been introduced to improve the bioavailability of orally administered bioactive compounds. The main idea is combining a food with another food (the excipient food) whose composition and/or structure is specifically designed to improve health benefits. In this study, the effects of food processing, food matrix and the addition of excipient foods on the carotenoid bioaccessibility of carrots were determined. Different excipient foods (olive oil, lemon juice and whey curd) and different food matrices (grating, boiling and mashing) were used. The total carotenoid contents of the grated, boiled and mashed carrots were 57.23, 51.11 and 62.10 μg/g, respectively. The absence of significant differences among these values indicated that these treatments had no effect on the release of carotenoids from the food matrix. In contrast, changes in the food matrix, especially mashing, caused a significant increase in carotenoid bioaccessibility. While the carotenoid bioaccessibility was 10.76% in grated carrots, it was 18.19% in mashed carrots (p<0.05). Addition of olive oil and lemon juice as excipients to the grated carrots caused 1.23-fold and 1.67-fold increases in the carotenoid content and the carotenoid bioaccessibility, respectively. However, addition of the excipient foods to the boiled carrot samples did not influence the release of carotenoids from the food matrix, whereas up to a 1.9-fold increase in the carotenoid bioaccessibility was determined on adding the excipient foods to the boiled carrots. The bioaccessibility increased from 14.20% to 27.12% with the addition of olive oil, lemon juice and whey curd. The highest carotenoid content among the mashed carrots was found in those incorporating olive oil and lemon juice. This combination also caused a significant increase in the carotenoid bioaccessibility, from 18.19% to 29.94% (p<0.05). Comparing the effects of the treatments on the carotenoid bioaccessibility, mashed carrots containing olive oil, lemon juice and whey curd had the highest carotenoid bioaccessibility. The increase in bioaccessibility was approximately 81% when compared to the grated and mashed samples containing olive oil, lemon juice and whey curd. In conclusion, these results demonstrated that the food matrix and the addition of excipient foods had a significant effect on the carotenoid content and the carotenoid bioaccessibility.
Keywords: carrot, carotenoids, excipient foods, food matrix
Procedia PDF Downloads 464
1801 Synthesis of Porphyrin-Functionalized Beads for Flow Cytometry
Authors: William E. Bauta, Jennifer Rebeles, Reggie Jacob
Abstract:
Porphyrins are noteworthy in biomedical science for their accumulation in cancer tissue and their photophysical properties. The preferential accumulation of some porphyrins in cancerous tissue has been known for many years. This, combined with their characteristic photophysical and photochemical properties, including their strong fluorescence and their ability to generate reactive oxygen species in vivo upon laser irradiation, has led to much research into the application of porphyrins as cancer diagnostic and therapeutic agents. Porphyrins have been used as dyes to detect cancer cells both in vivo and, less commonly, in vitro. In one example, human sputum samples from lung cancer patients and patients without the disease were dissociated and stained with the porphyrin TCPP (5,10,15,20-tetrakis-(4-carboxyphenyl)-porphine). Cells were analyzed by flow cytometry. Cancer samples were identified by their higher TCPP fluorescence intensity relative to the no-cancer controls. However, quantitative analysis of fluorescence in cell suspensions stained with multiple fluorophores requires particles stained with each of the individual fluorophores as controls. Fluorescent control particles must be compatible in size with flow cytometer fluidics and have favorable hydrodynamic properties in suspension. They must also display fluorescence comparable to the cells of interest and be stable upon storage. Amine-functionalized spherical polystyrene beads in the 5 to 20 micron diameter range were reacted with TCPP and EDC in aqueous pH 6 buffer overnight to form amide bonds. Beads were isolated by centrifugation and tested by flow cytometry. The 10-micron amine-functionalized beads displayed the best combination of fluorescence intensity and hydrodynamic properties, such as lack of clumping and remaining in suspension during the experiment. These beads were further optimized by varying the stoichiometry of EDC and TCPP relative to the amine. The reaction was accompanied by the formation of a TCPP-related particulate, which was removed, after bead centrifugation, using a microfiltration process. The resultant TCPP-functionalized beads were compatible with flow cytometry conditions and displayed fluorescence comparable to that of stained cells, which allowed their use as fluorescence standards. The beads were stable in refrigerated storage in the dark for more than eight months. This work demonstrates the first preparation of porphyrin-functionalized flow cytometry control beads.
Keywords: tetraaryl porphyrin, polystyrene beads, flow cytometry, peptide coupling
Procedia PDF Downloads 94
1800 Persistence of Ready Mix (Chlorpyriphos 50% + Cypermethrin 5%), Cypermethrin and Chlorpyriphos in Soil under Okra Fruits
Authors: Samriti Wadhwa, Beena Kumari
Abstract:
Background and Significance: Residue levels of a ready mix (chlorpyriphos 50% and cypermethrin 5%) and of cypermethrin and chlorpyriphos individually in sandy loam soil under okra fruits (variety Varsha Uphar) were determined; a field experiment was conducted at the Research Farm of the Department of Entomology, Chaudhary Charan Singh Haryana Agriculture University, Hisar, Haryana, India. The persistence behavior of cypermethrin and chlorpyriphos was studied following application of a pre-mix formulation of insecticides, viz. Action-505EC, chlorpyriphos (Radar 20 EC) and cypermethrin (Cyperkill 10 EC), at the recommended dose and double the recommended dose, along with a control, at the fruiting stage. Pesticide application also leads to a decline in soil acarine fauna, which is instrumental in the breakdown of litter and the consequent release of minerals into the soil; this study therefore allows an evaluation of the safety of these pesticides for soil health. Methodology: Action-505EC (chlorpyriphos 50% and cypermethrin 5%) at 275 g a.i. ha⁻¹ (single dose) and 550 g a.i. ha⁻¹ (double dose), chlorpyriphos (Radar 20 EC) at 200 g a.i. ha⁻¹ (single dose) and 400 g a.i. ha⁻¹ (double dose) and cypermethrin (Cyperkill 10 EC) at 50 g a.i. ha⁻¹ (single dose) and 100 g a.i. ha⁻¹ (double dose) were applied at the fruiting stage of the okra crop. Soil samples from the okra field were collected periodically at 0 (1 h after spray), 1, 3, 5, 7, 10 and 15 days after application and at harvest, along with control soil samples. After air drying, adsorbing through Florisil and activated charcoal and eluting with hexane:acetone (9:1), residues in the soils were estimated by a gas chromatograph equipped with a capillary column and electron capture detector. Results: No persistence of cypermethrin from the ready mix was observed in soil under okra fruits at either the single or double dose. For chlorpyriphos in the ready mix, average initial deposits on day 0 (1 h after treatment) were 0.015 mg kg⁻¹ and 0.036 mg kg⁻¹, which persisted up to 5 days and up to 7 days for the single and double dose, respectively; thereafter residues fell below the detectable level of 0.010 mg kg⁻¹. Experiments on cypermethrin applied individually revealed average initial deposits of 0.008 mg kg⁻¹ and 0.012 mg kg⁻¹, which persisted up to 3 days and 5 days for the single and double dose, respectively, after which residues fell below the detectable level. The initial deposits of chlorpyriphos applied individually in soil were 0.055 mg kg⁻¹ and 0.113 mg kg⁻¹, which persisted up to 7 days and 10 days at the lower and higher dose, respectively, after which residues fell below the determination level. Conclusion: In soil under the okra crop, only individually applied cypermethrin persisted at both doses, whereas no persistence of cypermethrin from the ready mix was observed. The persistence of chlorpyriphos applied individually is greater than that of chlorpyriphos in the ready mix at both doses. Overall, the persistence of chlorpyriphos in soil under the okra crop is greater than that of cypermethrin.
Keywords: chlorpyriphos, cypermethrin, okra, ready mix, soil
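Persistence data of this kind are commonly summarized with first-order dissipation kinetics (a standard model for soil residue decline, not a fit reported by the authors):

```latex
C_t = C_0\, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k},
```

where $C_0$ is the initial deposit, $k$ the dissipation rate constant, and $t_{1/2}$ the soil half-life.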
Procedia PDF Downloads 166
1799 Antigen Stasis can Predispose Primary Ciliary Dyskinesia (PCD) Patients to Asthma
Authors: Nadzeya Marozkina, Joe Zein, Benjamin Gaston
Abstract:
Introduction: We have observed that many patients with Primary Ciliary Dyskinesia (PCD) benefit from asthma medications. In healthy airways, ciliary function is normal. Antigens and irritants are rapidly cleared, and NO enters the gas phase normally to be exhaled. In the PCD airways, however, antigens such as Dermatophagoides are not as well cleared. This defect leads to oxidative stress, marked by increased DUOX1 expression and decreased superoxide dismutase [SOD] activity (manuscript under revision). H₂O₂, in high concentrations in the PCD airway, injures the airway. NO is oxidized rather than being exhaled, forming cytotoxic peroxynitrous acid. Thus, antigen stasis on the PCD airway epithelium leads to airway injury and may predispose PCD patients to asthma. Indeed, recent population genetics suggest that PCD genes may be associated with asthma. We therefore hypothesized that PCD patients would be predisposed to having asthma. Methods: We analyzed our database of 18 million individual electronic medical records (EMRs) in the Indiana Network for Patient Care research database (INPCR). There is no ICD10 code for PCD itself; code Q34.8 is most commonly used clinically. To validate analysis of this code, we queried patients who had an ICD10 code for both bronchiectasis and situs inversus totalis in INPCR. We also studied a validation cohort using the IBM Explorys® database (over 80 million individuals). Analyses were adjusted for age, sex and race using a 1 PCD : 3 controls matching method in INPCR and multivariable logistic regression in the IBM Explorys® database. Results: The prevalence of asthma ICD10 codes in subjects with code Q34.8 was 67% vs 19% in controls (P < 0.0001) (Regenstrief Institute). Similarly, in IBM Explorys®, the OR [95% CI] for having asthma if a patient also had ICD10 code Q34.8, relative to controls, was 4.04 [3.99; 4.09]. For situs inversus alone, the OR [95% CI] was 4.42 [4.14; 4.71]; for bronchiectasis alone, the OR [95% CI] was 10.68 [10.56; 10.79]. For both bronchiectasis and situs inversus together, the OR [95% CI] was 28.80 [23.17; 35.81]. Conclusions: PCD causes antigen stasis in the human airway (under review), likely predisposing to asthma in addition to oxidative and nitrosative stress and airway injury. Here, we show that, by several different population-based metrics, and using two large databases, patients with PCD appear to have between a three- and 28-fold increased risk of having asthma. These data suggest that additional studies should be undertaken to understand the role of ciliary dysfunction in the pathogenesis and genetics of asthma. Decreased antigen clearance caused by ciliary dysfunction may be a risk factor for asthma development.
Keywords: antigen, PCD, asthma, nitric oxide
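For readers unfamiliar with the metric, the odds ratios reported above reduce, in the unadjusted two-by-two case, to the standard form (a textbook definition; the study's values come from matched and adjusted models):

```latex
\mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc},
```

where $a$ and $b$ are the asthma and no-asthma counts among PCD-coded patients and $c$ and $d$ the same counts among controls.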
Procedia PDF Downloads 108
1798 Synthesis and Characterization of pH-Responsive Nanocarriers Based on POEOMA-b-PDPA Block Copolymers for RNA Delivery
Authors: Bruno Baptista, Andreia S. R. Oliveira, Patricia V. Mendonca, Jorge F. J. Coelho, Fani Sousa
Abstract:
Drug delivery systems are designed to allow adequate protection and controlled delivery of drugs to specific locations. These systems aim to reduce side effects and control the biodistribution profile of drugs, thus improving therapeutic efficacy. This study involved the synthesis of polymeric nanoparticles based on amphiphilic diblock copolymers comprising biocompatible poly(oligo(ethylene oxide) methyl ether methacrylate) (POEOMA) as the hydrophilic segment and a pH-sensitive block, poly(2-(diisopropylamino)ethyl methacrylate) (PDPA). The objective of this work was the development of polymeric pH-responsive nanoparticles to encapsulate and carry small RNAs as a model, to further develop non-coding RNA delivery systems with therapeutic value. The pH-responsiveness of PDPA allows the electrostatic interaction of these copolymers with nucleic acids at acidic pH, as a result of the protonation of the tertiary amine groups of this polymer at pH values below its pKa (around 6.2). Initially, the molecular weight parameters and chemical structure of the block copolymers were determined by size exclusion chromatography (SEC) and nuclear magnetic resonance (1H-NMR) spectroscopy, respectively. Then, complexation with small RNAs was verified, generating polyplexes with sizes ranging from 300 to 600 nm and with encapsulation efficiencies around 80%, depending on the molecular weight of the polymers, their composition, and the concentration used. The effect of pH on the morphology of the nanoparticles was evaluated by scanning electron microscopy (SEM), and it was verified that at higher pH values particles tend to lose their spherical shape. Since this work aims to develop systems for the delivery of non-coding RNAs, studies on RNA protection (contact with RNase, FBS, and trypsin) and cell viability were also carried out. It was found that the nanoparticles confer some protection against constituents of the cellular environment and have no cellular toxicity. In summary, this research work contributes to the development of pH-sensitive polymers capable of protecting and encapsulating RNA in a relatively simple and efficient manner, for further application in drug delivery to specific sites where pH may play a critical role, as can occur in several cancer environments.
Keywords: drug delivery systems, pH-responsive polymers, POEOMA-b-PDPA, small RNAs
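The pH-switching described here follows the standard Henderson-Hasselbalch relation for a weak base (a textbook expression; only the pKa of about 6.2 comes from the abstract): the protonated fraction of the PDPA amines is

```latex
f_{\mathrm{BH^{+}}} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}},
```

so at endosomal pH ≈ 5 roughly 94% of the amines are charged and bind RNA, while at physiological pH 7.4 the block is largely neutral and hydrophobic.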
Procedia PDF Downloads 261
1797 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects
Authors: Karan Sharma, Ajay Kumar
Abstract:
An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. About one percent of the total world population experiences epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the EEG signals on the display. Detection of epileptic seizures by conventional methods is time-consuming, and many methods have evolved that detect them automatically. The initial part of this paper provides a review of techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns. For better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g. approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review paper is to present the variations in the EEG signals at both stages: (i) interictal (recording between epileptic seizure attacks) and (ii) ictal (recording during an epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. This research paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended for different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session and post intervention to bring about effective epileptic seizure control or its elimination altogether.
Keywords: EEG signal, Reiki, time consuming, epileptic seizure
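A minimal sketch of one of the discriminating features listed above, sample entropy, follows; m = 2 and r = 0.2 × standard deviation are the usual defaults, and this is an illustrative implementation rather than any pipeline from the reviewed papers.

```python
# A minimal sketch of sample entropy, one of the features used to
# discriminate interictal from ictal EEG. O(N^2) brute force; fine for
# short illustrative signals. m=2 and r=0.2*std are common defaults.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(length):
        # All template vectors of the given length, Chebyshev distance
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if np.max(np.abs(templates[i] - templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -np.log(a / b)     # lower value = more regular signal

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(200)))         # noise: higher entropy
print(sample_entropy(np.sin(np.linspace(0, 20, 200))))  # regular: lower entropy
```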
Procedia PDF Downloads 407
1796 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast
Authors: Fernando M. Soto, Gaetano Di Mino
Abstract:
The design of an unmodified bituminous mixture and of three rubber-aggregate mixtures produced by a dry process (RUMAC) was evaluated, using an empirical-analytical approach based on experimental findings obtained in the laboratory with volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and bitumen at 4% of the total weight of the mix) and three rubberized mixtures by the dry process (1.5 to 3% rubber by total weight and 5-7% binder) were used, applying the Superpave mix design for level 3 (high-traffic) rail lines. The railway trackbed section analyzed was a compacted granular layer of 19 cm, while a thickness of 12 cm was used for the sub-ballast. In order to evaluate the effect of increasing the specimen density (as a percent of its theoretical maximum specific gravity), this article illustrates the results obtained from comparative analyses of the influence of varying the binder-rubber percentages in the sub-ballast layer mix design. This work demonstrates that rubberized blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least as well as, or better than, conventional asphalt materials. Using the same methodology of volumetric compaction, the densification curves resulting from each mixture were studied. The purpose is to obtain an optimum empirical parameter, a multiplier of the number of gyrations, necessary to reach the same compaction energy as in conventional mixtures. Some experimental parameters are provided by adopting an empirical-analytical method and evaluating the results obtained from the gyratory compaction of bituminous mixtures with an HMA and rubber-aggregate blends. Extensive integrated research has been carried out to assess the suitability of rubber-modified hot-mix asphalt mixtures as a sub-ballast layer in railway underlayment trackbed. Design optimization was conducted for each mixture, and the volumetric properties were analyzed. An improved and complete manufacturing, compaction and curing process for these blends is also provided. By adopting this compaction increase parameter, called the 'beta' factor, rubber-modified mixtures are obtained with densification and workability as uniform as in the conventional mixtures. It is found that, considering the usual bearing capacity requirements in rail track, the optimal rubber content is 2% (by weight) or 3.95% (by volumetric substitution), with a binder content of 6%.
Keywords: empirical approach, rubber-asphalt, sub-ballast, superpave mix-design
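On a plain reading of the abstract, the 'beta' factor scales the gyration count a rubberized mixture needs to reach the compaction energy of the conventional mix; in the notation assumed here (not the authors'):

```latex
N_{\text{rubber}} = \beta\, N_{\text{conventional}}, \qquad \beta > 1.
```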
Procedia PDF Downloads 370
1795 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy
Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren
Abstract:
Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding ethanol treatment effects on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method is essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy, and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment
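A minimal sketch of the regression step described above (a support vector machine mapping selected NIR variables to gel strength) follows; synthetic data stand in for the spectra, and scikit-learn's SVR is an assumption, not necessarily the authors' implementation.

```python
# A minimal sketch of the UVE-SVM-style prediction step: an SVM
# regressor on selected NIR variables, reporting Rp^2 and RMSEP on a
# held-out prediction set. Synthetic data replace the real spectra.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 30))  # 120 samples x 30 selected wavelengths
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=120)  # toy gel strength

X_cal, X_pred, y_cal, y_true = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_cal, y_cal)

y_hat = model.predict(X_pred)
print("Rp^2 :", r2_score(y_true, y_hat))
print("RMSEP:", np.sqrt(mean_squared_error(y_true, y_hat)))
```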
Procedia PDF Downloads 42
1794 What Are the Problems in the Case of Analysis of Selenium by Inductively Coupled Plasma Mass Spectrometry in Food and Food Raw Materials?
Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Dávid Andrási
Abstract:
For the analysis of elements in different food, feed and food raw material samples, a flame atomic absorption spectrometer (FAAS), a graphite furnace atomic absorption spectrometer (GF-AAS), an inductively coupled plasma optical emission spectrometer (ICP-OES) and an inductively coupled plasma mass spectrometer (ICP-MS) are generally applied. All these analytical instruments suffer from different physical and chemical interfering effects when analysing food and food raw material samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays, it is very important to analyse increasingly smaller concentrations of elements. Of the above analytical instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations of elements. The applied ICP-MS instrument also has Collision Cell Technology (CCT). Using CCT mode, certain elements have detection limits better by 1-3 orders of magnitude compared to a normal ICP-MS analytical method. The CCT mode gives better detection limits mainly for the analysis of selenium (as well as arsenic, germanium, vanadium, and chromium). To elaborate an analytical method for selenium with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) isobaric elemental, 2) isobaric molecular, and 3) physical interferences. When analysing food and food raw material samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization effectiveness, as well as different carbon contents, in food, feed and food raw material samples. In our research work, the effects of different water-soluble compounds and of various carbon contents (as the sample matrix) on changes in the intensity of selenium were examined. In this way, we could finally find 'opportunities' to decrease the error of selenium analysis. To analyse selenium in food, feed and food raw material samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying the collision cell technique (CCT). The extent of the interfering effect of the carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of Se, which can be corrected using an internal standard (arsenic or tellurium).
Keywords: selenium, ICP-MS, food, food raw material
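Internal standard correction of the kind mentioned in the closing sentence is conventionally applied as an intensity ratio (a standard form with notation assumed here, not the authors' exact procedure):

```latex
I_{\mathrm{Se}}^{\mathrm{corr}} = I_{\mathrm{Se}}^{\mathrm{meas}} \times \frac{I_{\mathrm{IS}}^{\mathrm{expected}}}{I_{\mathrm{IS}}^{\mathrm{meas}}},
```

where the internal standard (IS, here As or Te) is spiked at a known level, so any matrix-induced suppression of its signal rescales the selenium intensity.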
Procedia PDF Downloads 509
1793 Choice Analysis of Ground Access to São Paulo/Guarulhos International Airport Using Adaptive Choice-Based Conjoint Analysis (ACBC)
Authors: Carolina Silva Ansélmo
Abstract:
Airports are demand-generating poles that affect the flow of traffic around them. The airport access system must be fast, convenient, and adequately planned, considering its potential users. An airport with good ground access conditions can provide the user with a more satisfactory access experience. When several transport options are available, service providers must understand users' preferences and the expected quality of service. The present study focuses on airport access in a comparative scenario among bus, private vehicle, subway, taxi and urban mobility transport applications for access to São Paulo/Guarulhos International Airport. The objectives are (i) to identify the factors that influence the choice, (ii) to measure Willingness to Pay (WTP), and (iii) to estimate the market share for each mode. The method applied was the Adaptive Choice-Based Conjoint (ACBC) technique, using Sawtooth Software. Conjoint analysis, rooted in utility theory, is a survey technique that quantifies the customer's perceived utility when choosing among alternatives. Assessing user preferences provides insights into their priorities for product or service attributes. An additional advantage of conjoint analysis is its requirement for a smaller sample size compared to other methods. Furthermore, ACBC provides valuable insights into consumers' preferences, willingness to pay, and market dynamics, aiding strategic decision-making on customer experience, pricing, and market segmentation. In the present research, the ACBC questionnaire had the following variables: (i) access time to the boarding point, (ii) comfort in the vehicle, (iii) number of travelers together, (iv) price, (v) supply power, and (vi) type of vehicle. The case study questionnaire obtained 213 valid responses for the scenario of access from the São Paulo city center to São Paulo/Guarulhos International Airport. As a result, price and the number of travelers are the most relevant attributes for the sample when choosing airport access. The estimated market share is led by urban mobility transport applications, followed by buses, private vehicles, taxis and the subway.
Keywords: adaptive choice-based conjoint analysis, ground access to airport, market share, willingness to pay
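In conjoint studies of this kind, WTP for an attribute level is commonly derived from the ratio of estimated utility coefficients (a standard derivation under a linear utility specification, not figures reported by the authors):

```latex
\mathrm{WTP}_k = -\frac{\beta_k}{\beta_{\text{price}}},
```

where $\beta_k$ is the part-worth utility of attribute $k$ and $\beta_{\text{price}}$ the price coefficient.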
Procedia PDF Downloads 79
1792 Early Modern Controversies of Mobility within the Spanish Empire: Francisco De Vitoria and the Peaceful Right to Travel
Authors: Beatriz Salamanca
Abstract:
In his public lecture 'On the American Indians', given at the University of Salamanca in 1538-39, Francisco de Vitoria presented an unsettling defense of freedom of movement, arguing that the Spanish had the right to travel and dwell in the New World, since it was considered part of the law of nations [ius gentium] that men enjoyed free mutual intercourse anywhere they went. The principle of freedom of movement brought hopeful expectations, promising to bring mankind together and strengthen the ties of fraternity. However, it led to polemical situations when those whose mobility was in question represented a harmful threat or their presence was for some reason undesired. In this context, Vitoria's argument has been seen on multiple occasions as a justification of the expansion of the Spanish empire. In order to examine the meaning of Vitoria's defense of free mobility, a more detailed look at Vitoria's text is required, together with the study of some of his earliest works, among them his commentaries on Thomas Aquinas's Summa Theologiae, where he presented relevant insights on the idea of the law of nations. In addition, it is necessary to place Vitoria's work in the context of the intellectual tradition he belonged to and the responses he obtained from some of his contemporaries who were concerned with similar issues. The claim of this research is that the Spanish right to travel advocated by Vitoria was not intended to be interpreted in absolute terms, for it had to serve the purpose of bringing peace and unity among men and could not contradict natural law. In addition, Vitoria explicitly observed that the right to travel was only valid if the Spaniards caused no harm, a condition that has been underestimated by his critics. Therefore, Vitoria's legacy is of enormous value, as it initiated a long-lasting discussion regarding the grounds on which human mobility could be restricted. Again, under Vitoria's argument it was clear that this freedom was not absolute, but the controversial nature of his defense of Spanish mobility demonstrates how difficult it was, and still is, to address the issue of the circulation of peoples across frontiers, and shows the significance of this discussion in today's globalized world, where the rights and wrongs of notions like immigration, international trade or foreign intervention still lack sufficient consensus. This inquiry into Vitoria's defense of the principle of freedom of movement is placed against the background of the history of political thought, political theory, international law, and international relations, following the methodological framework of the contextual history of the 'Cambridge School'.
Keywords: Francisco de Vitoria, freedom of movement, law of nations, ius gentium, Spanish empire
Procedia PDF Downloads 367
1791 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automating building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate the platform's capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. Our platform is built on an adaptive and flexible architecture, rendering it generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
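A minimal sketch of the acquisition-and-control cycle such a platform runs is shown below: read zone state from a web service, evaluate a control policy, and push a setpoint back to an HVAC instrument. The endpoint URLs, payload fields, and the naive dead-band rule are hypothetical stand-ins; the ORNL platform's actual API and its model predictive controllers are not described in the abstract.

```python
import time
import requests

BASE = "http://localhost:8000/api"      # assumed digital-twin web service

def control_step(zone_id: str) -> None:
    # (a) automated data acquisition from an IoT-connected instrument
    state = requests.get(f"{BASE}/zones/{zone_id}/state", timeout=5).json()
    temp = state["temperature_c"]       # assumed payload field
    # Stand-in for a model predictive controller: simple dead-band rule
    if temp > 24.0:
        setpoint = 22.0
    elif temp < 20.0:
        setpoint = 23.0
    else:
        return                          # within comfort band: no actuation
    # (e) cyber-physical control: push the new setpoint back
    requests.post(f"{BASE}/zones/{zone_id}/hvac/setpoint",
                  json={"setpoint_c": setpoint}, timeout=5)

# Run a few acquisition/control cycles (a deployment would loop indefinitely)
for _ in range(3):
    control_step("zone-101")
    time.sleep(60)
```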
Procedia PDF Downloads 113
1790 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field nevertheless suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image. It is trained on 8,577 images and validated on a validation split of 20%. Both models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
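A minimal sketch of the transfer-learning classifier described above is given below, using a frozen pre-trained DenseNet201 backbone with a small dense head predicting the three classes. The abstract does not specify the autoencoder component, input size, head layers, or training settings, so those choices are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained DenseNet201 backbone, frozen to act as a feature extractor
base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # assumed head size
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```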
Procedia PDF Downloads 132
1789 Resilience and Urban Transformation: A Review of Recent Interventions in Europe and Turkey
Authors: Bilge Ozel
Abstract:
Cities are highly complex living organisms, subject to continuous transformations produced by the stress of changing conditions. Today, metropolises are seen as the 'development engines' of their countries, and accordingly they become centres of better living conditions, encouraging the demographic growth that constitutes the main driver of change. Indeed, the potential for economic advancement of cities directly represents the economic status of their countries. The term 'resilience', which treats change as a natural process and denotes the flexibility and adaptability of systems in the face of changing conditions, has become a key concept for the development of urban transformation policies. It derives from the Latin word 'resilire', meaning 'to bounce' or 'to jump back', and refers to the ability of a system to withstand shocks and still maintain its basic characteristics. A resilient system does not merely survive potential risks and threats; it also takes advantage of the positive outcomes of perturbations and adapts to new external conditions. Taken into the urban context - 'urban resilience' - the term delineates the capacity of cities to anticipate upcoming shocks and changes without undergoing major alterations in their functional, physical, and socio-economic systems. Undoubtedly, coordinating urban systems in a 'resilient' form is a multidisciplinary and complex process, as cities are multi-layered and dynamic structures. The concept of 'urban transformation' was first launched in Europe just after World War II. It has been applied through different methods, such as renovation, revitalization, improvement, and gentrification, and these methods have continuously advanced, acquiring new meanings and trends over the years. With the effects of neoliberal policies in the 1980s, urban transformation became associated with economic objectives; this understanding subsequently evolved towards new orientations, such as providing more social justice and environmental sustainability. The aim of this research is to identify the urban transformation methods most applied in Turkey and the main reasons for their selection, and to investigate the shortcomings and limitations of Turkish urban transformation policies in the context of 'urban resilience', in comparison with European interventions. Emblematic examples, which mark the breaking points in the recent evolution of urban transformation concepts in Europe and Turkey, are selected and critically reviewed.
Keywords: resilience, urban dynamics, urban resilience, urban transformation
Procedia PDF Downloads 266
1788 A Fine String between Weaving the Text and Patching It: Reading beyond the Hidden Symbols and Antithetical Relationships in the Classical and Modern Arabic Poetry
Authors: Rima Abu Jaber-Bransi, Rawya Jarjoura Burbara
Abstract:
This study reveals the extension and continuity between classical and modern Arabic poetry through an investigation of their ambiguity, symbolism, and antithetical relationships. The significance of this study lies in its exploration and discovery of a new method of reading classical and modern Arabic poetry. The study deals with Fatimid poetry and discovers a new method of reading it. It also deals with the relationship between the apparent and the hidden meanings of words by focusing on how paradoxical antithetical relationships change the meaning of the whole poem and give it a different dimension through the use of oxymorons. In our research on the oxymoron, we found that words in modern Arabic poetry are used in unusual combinations that convey both apparent and hidden meanings. In some cases, the poet introduces an image with a symbol of a certain thing, but the reader soon discovers that the symbol includes its opposite, too. The question is: how does the reader find that hidden harmony in apparent disharmony? The first and most important conclusion of this study is that Fatimid poetry was written for two types of readers: religious readers, who know the religious symbols and the hidden secret meanings behind the words, and ordinary readers, who understand the apparent literal meaning of the words. Consequently, the interpretation of the poem is subject to the type of reading. In Fatimid poetry, we found that the hunting journey is a journey of hidden esoteric knowledge; the horse is al-Naqib, a religious rank of the investigator and missionary; and the lion is Ali Ibn Abi Talib. The words black and white, day and night, and bird, death, and murder carry different meanings and indications. Our study points out the importance of reading certain poems in certain periods in two different ways: the first depends on a doctrinal interpretation that transforms the external apparent (ẓāher) meanings into internal hidden esoteric (bāṭen) ones; the second depends on the interpretation of antithetical relationships between words in order to reveal meanings that the poet hid for a reader who participates in the process of creativity. The second conclusion is that the classical poem employed symbols, oxymora, and antonymous and antithetical forms to create two poetic texts in one mold and form. We can conclude that this study is pioneering in showing the constant paradoxical relationship between the apparent and the hidden meanings in classical and modern Arabic poetry.
Keywords: apparent, symbol, hidden, antithetical, oxymoron, Sufism, Fatimid poetry
Procedia PDF Downloads 263
1787 Strategy and Mechanism for Intercepting Unpredictable Moving Targets in the Blue-Tailed Damselfly (Ischnura elegans)
Authors: Ziv Kassner, Gal Ribak
Abstract:
Members of the order Odonata (dragonflies and damselflies) stand out for their maneuverability and superb flight control, which allow them to catch flying prey in the air. These outstanding aerial abilities were fine-tuned during millions of years of an evolutionary arms race between Odonata and their prey, providing an attractive research model for studying the relationship between sensory input and aerodynamic output in a flying insect. The ability to catch a maneuvering target in the air is interesting not just for insect behavioral ecology and neuroethology but also for designing small and efficient robotic air vehicles. While aerial prey interception by dragonflies (suborder: Anisoptera) has been studied before, little is known about how damselflies (suborder: Zygoptera) intercept prey. Here, high-speed cameras (filming at 1000 frames per second) were used to explore how damselflies catch unpredictable targets that move through the air. Blue-tailed damselflies, Ischnura elegans (family: Coenagrionidae), were introduced into a flight arena and filmed while landing on moving targets that were oscillated harmonically. The insects succeeded in capturing targets moved with an amplitude of 6 cm at frequencies of 0-2.5 Hz (fastest mean target speed of 0.3 m s⁻¹), as well as targets moved at 1 Hz (an average speed of 0.3 m s⁻¹) but with an amplitude of 15 cm. To land on stationary or slow targets, damselflies either flew directly to the target or flew sideways up to a point at which the target was fixed in the center of the field of view, followed by a direct flight path towards the target. As the targets moved at higher frequencies, damselflies demonstrated an ability to track them while flying sideways and minimizing changes in body direction about the yaw axis. This was likely an attempt to keep the targets at the center of the visual field while minimizing rotational optic flow of the surrounding visual panorama; stabilizing rotational optic flow helps in estimating the velocity and distance of the target. These results illustrate how dynamic visual information is used by damselflies to guide them towards a maneuvering target, enabling the superb aerial hunting abilities of these insects. They also exemplify the plasticity of the damselfly flight apparatus, which enables flight in any direction, irrespective of the direction of the body.
Keywords: bio-mechanics, insect flight, target fixation, tracking and interception
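For reference, the quoted mean target speeds are consistent with simple harmonic kinematics if the stated amplitudes are read as peak-to-peak excursions; this is a hedged reconstruction, since the abstract does not state its amplitude convention.

```latex
% Sinusoidal target with peak-to-peak amplitude A_pp and frequency f:
\[
x(t) = \tfrac{1}{2} A_{\mathrm{pp}} \sin(2\pi f t),
\qquad
\bar{v} = 2 A_{\mathrm{pp}} f ,
\]
% since the target covers a distance of 2 A_pp per cycle. Then
% A_pp = 0.06 m, f = 2.5 Hz  gives  v-bar = 0.3 m/s, and
% A_pp = 0.15 m, f = 1.0 Hz  gives  v-bar = 0.3 m/s,
% matching the speeds quoted in the abstract.
```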
Procedia PDF Downloads 158
1786 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour
Authors: Hayder H. Kareem, Shunqi Pan
Abstract:
The geological environment in which groundwater collects is the most important element affecting the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters that affect this source must be known accurately so that the conceptualized mathematical models are acceptable over the broadest possible range of conditions. Groundwater models have therefore recently become an effective and efficient tool for investigating groundwater aquifer behaviour. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes are geological formations that force modellers to include them within conceptualized groundwater models, while interfaces are commonly omitted from the conceptualization process because modellers believe that an interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer, called Dibdibba, located in Al-Najaf City, Iraq, where the Euphrates River passes through the eastern part of the city. The Dibdibba groundwater aquifer consists of two soil layers separated by an interface soil layer. A groundwater model was built for Al-Najaf City to explore the impact of this interface, and calibration was performed using the PEST (Parameter ESTimation) approach to obtain the best Dibdibba groundwater model. When the soil interface is conceptualized, results show that the groundwater tables are significantly affected by the interface, with dry areas of 56.24 km² and 6.16 km² appearing in the upper and lower layers of the aquifer, respectively, and the Euphrates River leaking 7,359 m³/day into the aquifer. When the soil interface is neglected, these results change: the dry area becomes 0.16 km² and the river leakage 6,334 m³/day. In addition, the two conceptualized models (with and without the interface) respond differently to changes in the recharge rates applied to the aquifer in the uncertainty analysis. The Dibdibba aquifer shows a slight deficit in the amount of water supplied under the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that the simulated current and predicted future behaviours are reliable enough for planning purposes.
Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW
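The sketch below shows, in FloPy, the kind of conceptualization choice at stake: a thin low-conductivity middle layer stands in for the interface between the two aquifer layers, and deleting it (or restoring its conductivities to aquifer values) corresponds to "neglecting the interface". This is not the authors' Visual MODFLOW model; the grid, hydraulic parameters, river cells, and recharge rate are illustrative assumptions, and a MODFLOW-2005 executable is assumed to be on the PATH.

```python
import flopy

mf = flopy.modflow.Modflow(modelname="dibdibba_sketch", exe_name="mf2005")

# Grid: upper aquifer / thin interface / lower aquifer (elevations assumed)
dis = flopy.modflow.ModflowDis(
    mf, nlay=3, nrow=50, ncol=50, delr=200.0, delc=200.0,
    top=30.0, botm=[10.0, 8.0, -40.0])
bas = flopy.modflow.ModflowBas(mf, ibound=1, strt=25.0)

# The interface (layer 2) is orders of magnitude less permeable;
# removing this contrast is equivalent to omitting the interface
lpf = flopy.modflow.ModflowLpf(mf, hk=[15.0, 0.01, 8.0],
                               vka=[1.5, 0.001, 0.8])

# River package standing in for Euphrates leakage along one boundary:
# [layer, row, col, stage, conductance, river-bottom elevation]
riv_cells = [[0, r, 0, 26.0, 100.0, 24.0] for r in range(50)]
riv = flopy.modflow.ModflowRiv(mf, stress_period_data={0: riv_cells})

rch = flopy.modflow.ModflowRch(mf, rech=1e-4)   # assumed areal recharge
pcg = flopy.modflow.ModflowPcg(mf)
oc = flopy.modflow.ModflowOc(mf)

mf.write_input()
success, _ = mf.run_model(silent=True)
print("converged:", success)
```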
Procedia PDF Downloads 186
1785 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of outbreak control efforts. This study uses environmental data from two U.S. Federal Government agencies: the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, including handling outliers and missing values, to ensure the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select the optimization strategy with the lowest error and cost and the greatest productivity or potential results; optimization is widely employed in a variety of industries, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method for input feature selection, based on Harmony Search integrated with a Genetic Algorithm, is introduced and shows a marked improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
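A minimal sketch of genetic-algorithm feature selection wrapped around a Huber regressor, in the spirit of the approach described above, is given below. The synthetic data, GA parameters, and cross-validated RMSE fitness are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))          # stand-in environmental features
y = 2 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=300)  # stand-in cases

def fitness(mask):
    """Negative cross-validated RMSE of a Huber model on the masked features."""
    if not mask.any():
        return -np.inf
    scores = cross_val_score(HuberRegressor(), X[:, mask], y,
                             scoring="neg_root_mean_squared_error", cv=5)
    return scores.mean()                # higher (less negative) is better

# Binary population: each individual is an include/exclude mask
pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for gen in range(15):
    fits = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fits)[::-1][:10]]       # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05         # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```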
Procedia PDF Downloads 68
1784 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes
Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi
Abstract:
Concave Surface Slider devices have been used more and more in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out to investigate the lateral response of this typology of device, and a reasonably high level of knowledge has been reached. If a radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software is nowadays available for the adjustment of natural accelerograms, leading to higher-quality spectrum-compatibility and smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices, and different case study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements were adopted and different direction angles of the lateral forces were studied. From these results, a linear elastic Finite Element Model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events. The spectrum-compatibility of the bidirectional earthquakes was studied by considering different combinations of the single components and adjusting single records: thanks to the proposed procedure, the results show small dispersion and good agreement with the assumed design values.
Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation
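The bi-axial interaction mentioned above can be made concrete with a short sketch: at each time step the sliding friction force is projected along the instantaneous velocity direction, while the restoring force stays radial, so the two are collinear only for radial motion. The friction coefficient, axial load, and effective radius below are illustrative values, not device data.

```python
import numpy as np

mu, N, R = 0.06, 1000.0, 3.0   # friction coeff., axial load [kN], radius [m]

def css_force(disp, vel, eps=1e-9):
    """Restoring + friction force of a Concave Surface Slider for 2D vectors."""
    restoring = -(N / R) * disp                 # always radial
    speed = np.linalg.norm(vel)
    if speed < eps:                             # stick phase: no sliding friction
        return restoring
    friction = -mu * N * vel / speed            # aligned with the trajectory
    return restoring + friction

# Radial motion: friction and restoring force are collinear
print(css_force(np.array([0.1, 0.0]), np.array([0.2, 0.0])))
# Bidirectional motion: friction rotates with the velocity direction
print(css_force(np.array([0.1, 0.0]), np.array([0.1, 0.2])))
```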
Procedia PDF Downloads 293
1783 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process
Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian
Abstract:
The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, resulting in many cases in a negative impact on limited mineral resources. To realize the algal bioenergy opportunity, it is therefore crucial to promote processes that recover nutrients and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising technology well suited to generating biofuels from microalgae. Besides the facts that water is used as a 'green' reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the aqueous HT effluent by growing microalgae, thereby producing renewable algal biomass. As demonstrated in previous work by the authors, the HT aqueous product, besides containing N, P, and other important nutrients, presents a small fraction of rarely studied organic compounds. The heteroaromatic compounds extracted from the HT effluent were therefore the target of the present research; they were profiled using GC-MS and LC-MS-MS. The results indicate the presence of cyclic amides, piperazinediones, amines, and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, namely 2-pyrrolidinone and β-phenylethylamine (β-PEA), were carefully examined for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50, and 150 ppm) for a toxicity bioassay using three microalgae strains: Phaeodactylum tricornutum, Chlorella sorokiniana, and Scenedesmus vacuolatus. The confirmed IC50 was in all cases ca. 75 ppm. Experimental conditions for the growth of microalgae in the aqueous phase were then set by adjusting the nitrogen concentration (the key nutrient for algae) to match that established for a known commercial medium. This adjustment lowered the specific NOCs to 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam, and 0.5 mg/L β-PEA. Growth in the diluted HT solution remained constant, with no evidence of inhibition. An additional ongoing test is addressing the possibility of applying an integrated water cleanup step making use of the existing hydrothermal catalytic facility.
Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass
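The dilution arithmetic behind such a cleanup check can be sketched as follows: find the dilution factor that brings every inhibitory NOC below a chosen safety fraction of the ca. 75 ppm IC50. The raw effluent concentrations and the safety fraction below are hypothetical placeholders; the abstract reports only the final, lowered values.

```python
ic50_ppm = 75.0
safety_fraction = 0.2                   # assumed safety margin
limit = ic50_ppm * safety_fraction      # 15 ppm target ceiling

raw_ppm = {"2-pyrrolidinone": 85.0,     # hypothetical raw concentrations
           "delta-valerolactam": 10.0,
           "beta-PEA": 5.0}

factor = max(c / limit for c in raw_ppm.values())
print(f"required dilution factor: {factor:.1f}x")
for name, c in raw_ppm.items():
    print(f"{name:18s} {c / factor:6.2f} ppm after dilution")
```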
Procedia PDF Downloads 412