Search results for: industrial system
1050 Bioinformatic Strategies for the Production of Glycoproteins in Algae
Authors: Fadi Saleh, Çiğdem Sezer Zhmurov
Abstract:
Biopharmaceuticals represent one of the fastest-developing fields within biotechnology, and the biological macromolecules produced inside cells have a variety of therapeutic applications. In the past, mammalian cells, especially CHO cells, have been employed in the production of biopharmaceuticals because these cells can achieve human-like post-translational modifications (PTMs). These systems, however, carry apparent disadvantages such as high production costs, vulnerability to contamination, and limitations in scalability. This research focuses on the utilization of microalgae as a bioreactor system for the synthesis of biopharmaceutical glycoproteins, with particular attention to PTMs and especially N-glycosylation. The research points to a growing interest in microalgae as a potential substitute for more conventional expression systems. Microalgae offer a number of advantages, including rapid growth rates, the absence of common human pathogens, controlled scalability in bioreactors, and the capacity to perform some PTMs. Thus, the potential of microalgae to produce recombinant proteins with favorable characteristics makes them a promising platform for producing biopharmaceuticals. The study focuses on the examination of N-glycosylation pathways across different species of microalgae. This investigation is important because N-glycosylation, the process by which carbohydrate groups are linked to proteins, profoundly influences the stability, activity, and general performance of glycoproteins. Additionally, bioinformatics methodologies are employed to elucidate the genetic pathways implicated in N-glycosylation within microalgae, with the intention of modifying these organisms to produce glycoproteins suitable for human use.
In this way, the present comparative analysis of the N-glycosylation pathway in humans and microalgae can be used to bridge both systems in order to produce biopharmaceuticals with humanized glycosylation profiles in microalgal hosts. The results of the research underline microalgae's potential to overcome some of the limitations associated with traditional biopharmaceutical production systems. The study may contribute to a cost-effective and scalable means of producing high-quality biopharmaceuticals by genetically modifying microalgae to produce glycoproteins with human-compatible N-glycosylation. Such a novel, green, and efficient expression platform would benefit biopharmaceutical production and the biopharmaceutical sector. This thesis, therefore, is a thorough investigation of the viability of microalgae as an efficient platform for producing biopharmaceutical glycoproteins. Based on an in-depth bioinformatic analysis of microalgal N-glycosylation pathways, a platform for engineering these organisms to produce human-compatible glycoproteins is set out in this work. The findings obtained in this research will have significant implications for the biopharmaceutical industry by opening up a new way of developing safer, more efficient, and economically more feasible biopharmaceutical manufacturing platforms.
Keywords: microalgae, glycoproteins, post-translational modification, genome
Procedia PDF Downloads 23
1049 Autophagy in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Larvae Exposed to Various Cadmium Concentration - 6-Generational Exposure
Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska
Abstract:
Autophagy is a form of cell remodeling in which organelles are internalized into vacuoles called autophagosomes. Autophagosomes are the targets of lysosomes, thus causing digestion of cytoplasmic components. Eventually, this can lead to the death of the entire cell. However, in response to several stress factors, e.g., starvation or heavy metals (e.g., cadmium), autophagy can also act as a pro-survival process, protecting the cell against death. The main aim of our studies was to check whether the autophagy that can appear in the midgut epithelium after Cd treatment becomes fixed over the following generations of insects. As a model animal, we chose the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), a well-known polyphagous pest of many vegetable crops. We analyzed specimens at the final (5th) larval stage because of its hyperphagia, which results in the assimilation of a large amount of cadmium. The culture consisted of two strains, both maintained for 146 generations: a control strain (K) fed a standard diet, and a cadmium strain (Cd) fed the standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food). In addition, control insects were transferred to Cd-supplemented diets (5, 10, 20, or 44 mg Cd per kg of dry weight of food), yielding the Cd1, Cd2, Cd3 and KCd experimental groups. Autophagy was examined using transmission electron microscopy. During this process, degenerated organelles are surrounded by a membranous phagophore and enclosed in an autophagosome; after the autophagosome fuses with a lysosome, an autolysosome is formed and digestion of the organelles begins. During the 1st year of the experiment, we analyzed specimens of 6 generations in all the lines.
The intensity of autophagy depended significantly on the generation, tissue, and cadmium concentration in the insect rearing medium. In the 1st through 6th generations, the intensity of autophagy in the midguts of the cadmium-exposed strains decreased gradually in the following order of strains: Cd1, Cd2, Cd3 and KCd. The highest proportion of cells with autophagy was observed in Cd1 and Cd2; however, in all exposed groups it remained higher than the percentage of cells with autophagy in the same tissues of insects from the control and multigenerational cadmium strains. This may indicate that during the 6-generational exposure to various Cd concentrations, a preserved tolerance to cadmium was not maintained. The study has been financed by the National Science Centre Poland, grant no 2016/21/B/NZ8/00831.
Keywords: autophagy, cell death, digestive system, ultrastructure
Procedia PDF Downloads 232
1048 Architectural Design as Knowledge Production: A Comparative Science and Technology Study of Design Teaching and Research at Different Architecture Schools
Authors: Kim Norgaard Helmersen, Jan Silberberger
Abstract:
Questions of style and reproducibility in relation to architectural design are not only continuously debated; the very concepts can seem quite provocative to architects, who like to think of architectural design as depending on intuition, ideas, and individual personalities. This standpoint, dominant in architectural discourse, is challenged in the present paper, which presents early findings from a comparative STS-inspired research study of architectural design teaching and research at different architecture schools in varying national contexts. Within a philosophy-of-science framework, the paper reflects on empirical observations of design teaching at the Royal Academy of Fine Arts in Copenhagen and presents a tentative theoretical framework for the ongoing research project. The framework suggests that architecture, as a field of knowledge production, is mainly dominated by three epistemological positions, which will be presented and discussed. Besides serving as a loosely structured framework for future data analysis, the proposed framework brings forth the argument that architecture can be roughly divided into different schools of thought, like the traditional science disciplines. Without reducing the complexity of the discipline, describing its main intellectual positions should prove fruitful for the future development of architecture as a theoretical discipline, moving architectural critique beyond discussions of taste preferences. Unlike traditional science disciplines, architecture lacks a community-wide, shared pool of codified references; architects instead reference art projects, buildings, and famous architects when positioning their standpoints. While these inscriptions work as an architectural reference system, comparable to the codified theories cited in traditional academic research, they are not used systematically in the same way.
As a result, architectural critique is often reduced to discussions of taste and subjectivity rather than epistemological positioning. Architects are often criticized as judges of taste and accused of rooting their rationality in culturally relative aesthetic concepts of taste closely linked to questions of style, but arguably their supposedly subjective reasoning in fact forms part of larger systems of thought. Putting architectural ‘styles’ under a loupe, and tracing their philosophical roots, can potentially open up a black box in architectural theory. Besides ascertaining and recognizing the existence of specific ‘styles’, and thereby schools of thought, in current architectural discourse, the study could potentially also point at some mutations of the conventional, something actually ‘new’, of potentially high value for architectural design education.
Keywords: architectural theory, design research, science and technology studies (STS), sociology of architecture
Procedia PDF Downloads 126
1047 Pulsed-Wave Doppler Ultrasonographic Assessment of the Maximum Blood Velocity in Common Carotid Artery in Horses after Administration of Ketamine and Acepromazine
Authors: Saman Ahani, Aboozar Dehghan, Roham Vali, Hamid Salehian, Amin Ebrahimi
Abstract:
Pulsed-wave (PW) Doppler ultrasonography is a non-invasive, relatively accurate imaging technique for measuring blood flow velocity. Measurements can be obtained from the common carotid artery, one of the main vessels supplying blood to vital organs. Horses are particularly vulnerable to changes in blood velocity because of their susceptibility to cardiovascular depression and their large muscle mass. One of the most important factors causing blood velocity changes is the administration of anesthetic drugs, including ketamine and acepromazine. Thus, in this study, the pulsed-wave Doppler technique was used to assess the maximum blood velocity in the common carotid artery following administration of ketamine and acepromazine. Six male and six female healthy Kurdish horses weighing 351 ± 46 kg (mean ± SD) and aged 9.2 ± 1.7 years (mean ± SD) were housed under animal welfare guidelines. After fasting for six hours, the normal blood flow velocity in the common carotid artery was measured using a pulsed-wave Doppler ultrasonography machine (BK Medical, Denmark) and a high-frequency linear transducer (12 MHz), without applying any sedative drugs, as a control condition. The same procedure was repeated after each individual received the following medications: 1.1 and 2.2 mg/kg ketamine (Pfizer, USA), and 0.5 and 1 mg/kg acepromazine (RACEHORSE MEDS, Ukraine), with an interval of 21 days between the administration of each dose and/or drug. The ultrasonographic study was done five (T5) and fifteen (T15) minutes after injecting each dose intravenously. Lastly, statistical analysis was performed using SPSS software version 22 for Windows, and a P value less than 0.05 was considered statistically significant.
Five minutes after administration of ketamine (1.1 and 2.2 mg/kg), the blood velocity decreased to 38.44 and 34.53 cm/s in males and 39.06 and 34.10 cm/s in females, compared with the control values (39.59 and 40.39 cm/s in males and females, respectively), while administration of 0.5 mg/kg acepromazine led to a significant rise (73.15 and 55.80 cm/s in males and females, respectively) (p<0.05). Thus, regardless of sex, the most drastic change in blood velocity was produced by the latter drug/dose. For both medications and both sexes, the higher dose led to a lower blood velocity than the lower dose of the same drug. In all experiments in this study, the blood velocity approached its normal value by T15. In another study comparing blood velocity changes induced by ketamine and acepromazine in the femoral arteries, the most drastic changes were attributed to ketamine; in the present experiment, however, the maximum blood velocity was observed following administration of acepromazine in the common carotid artery. Therefore, further experiments with the same medications are suggested, using pulsed-wave Doppler to measure blood velocity changes in the femoral and common carotid arteries simultaneously.
Keywords: acepromazine, common carotid artery, horse, ketamine, pulsed-wave Doppler ultrasonography
Procedia PDF Downloads 126
1046 Nonlinear Response of Tall Reinforced Concrete Shear Wall Buildings under Wind Loads
Authors: Mahtab Abdollahi Sarvi, Siamak Epackachi, Ali Imanpour
Abstract:
Reinforced concrete shear walls are commonly used as the lateral load-resisting system of mid- to high-rise office and residential buildings around the world. The design of such systems is often governed by wind rather than seismic effects, particularly in low-to-moderate seismic regions. The current design philosophy in the majority of building codes requires elastic response of lateral load-resisting systems, including reinforced concrete shear walls, when subjected to the rare design wind load, resulting in significantly large wall sections needed to meet strength requirements and drift limits. The latter can strongly influence the design of upper stories due to the stringent drift limits specified by building codes, leading to substantial added construction costs for the wall. However, such walls may offer limited to moderate over-strength and ductility owing to their large reserve capacity, provided that they are designed and detailed to appropriately develop such over-strength and ductility under extreme wind loads. Exploiting this capacity would significantly contribute to reducing construction time and costs, while maintaining structural integrity under gravity loads and both frequent and less frequent wind events. This paper aims to investigate the over-strength and ductility capacity of several hypothetical office buildings located in Edmonton, Canada, with a glance at earthquake design philosophy. The selected models are 10- to 25-story buildings with three types of reinforced concrete shear wall configurations: rectangular, barbell, and flanged. The buildings are designed according to the National Building Code of Canada. Fiber-based numerical models of the walls are then developed in Perform 3D, and the lateral nonlinear behavior of the walls is evaluated by conducting nonlinear static (pushover) analysis. Ductility and over-strength of the structures are obtained from the results of the pushover analyses.
The results confirmed the moderate nonlinear capacity of reinforced concrete shear walls under extreme wind loads, even as the lateral displacements of the walls exceed the serviceability limit states defined in the ASCE Prestandard for Performance-Based Wind Design. The results indicate that the limited nonlinear response observed in reinforced concrete shear walls can be exploited to economize the design of such systems under wind loads.
Keywords: concrete shear wall, high-rise buildings, nonlinear static analysis, response modification factor, wind load
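The over-strength and ductility quantities extracted from a pushover analysis reduce to simple ratios of base shear and roof displacement. The following sketch illustrates the arithmetic; all numeric values are invented placeholders, not results from the study:

```python
# Over-strength and ductility from an idealized pushover curve.
# All numeric values below are hypothetical placeholders, not study results.

def overstrength_factor(v_max: float, v_design: float) -> float:
    """Over-strength: peak base shear divided by design base shear."""
    return v_max / v_design

def ductility_ratio(delta_ultimate: float, delta_yield: float) -> float:
    """Displacement ductility: ultimate displacement over effective yield displacement."""
    return delta_ultimate / delta_yield

# Hypothetical pushover results for one shear wall model
v_design = 4500.0        # design base shear (kN)
v_max = 6300.0           # peak base shear from the pushover analysis (kN)
delta_yield = 120.0      # effective yield roof displacement (mm)
delta_ultimate = 300.0   # roof displacement at significant strength loss (mm)

print(f"over-strength = {overstrength_factor(v_max, v_design):.2f}")          # 1.40
print(f"ductility     = {ductility_ratio(delta_ultimate, delta_yield):.2f}")  # 2.50
```

In practice, both quantities are read off an idealized (e.g., bilinear) fit of the pushover curve rather than raw analysis output, but the ratios themselves are as above.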
Procedia PDF Downloads 105
1045 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks
Authors: Nafisa Mahbub, Hajo Ribberink
Abstract:
Electrification of long-haul trucks has been discussed as a potential strategy for decarbonization. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the ‘electrified road’ (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for the charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging.
The investigated materials for the charging technologies and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study assumed trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery under the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger
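A bottom-up model of this kind sums infrastructure materials and fleet battery materials per scenario. The sketch below illustrates the accounting structure; all per-unit quantities and the fleet size are invented placeholders, not the study's data — only the battery sizes (800 kWh vs 200 kWh) follow the abstract:

```python
# Bottom-up material-use comparison for three truck-charging scenarios.
# Per-unit material intensities and fleet size are hypothetical placeholders.
from collections import Counter

# Hypothetical battery material intensity (tonnes per kWh of pack capacity)
BATTERY_T_PER_KWH = {"Cu": 0.0001, "Fe": 0.0002, "Al": 0.00015, "Li": 0.00008}

# Hypothetical charging-infrastructure materials for a 500 km corridor (tonnes)
INFRASTRUCTURE = {
    "plug_in":   {"Cu": 40.0,  "Fe": 120.0,  "Al": 25.0},
    "catenary":  {"Cu": 900.0, "Fe": 2500.0, "Al": 300.0},
    "inductive": {"Cu": 600.0, "Fe": 800.0,  "Al": 200.0},
}

# Battery capacity per truck under each scenario (kWh), per the study's assumption
BATTERY_KWH = {"plug_in": 800, "catenary": 200, "inductive": 200}
N_TRUCKS = 5000  # hypothetical fleet served by the corridor

def total_materials(scenario: str) -> dict:
    """Infrastructure materials plus fleet battery materials, in tonnes."""
    totals = Counter(INFRASTRUCTURE[scenario])
    for metal, intensity in BATTERY_T_PER_KWH.items():
        totals[metal] += intensity * BATTERY_KWH[scenario] * N_TRUCKS
    return dict(totals)

for name in INFRASTRUCTURE:
    print(name, total_materials(name))
```

With any such placeholder inputs, the structure makes the trade-off explicit: eroad scenarios carry heavier infrastructure terms but a 4x smaller per-truck battery term, so the ranking depends on fleet size.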
Procedia PDF Downloads 50
1044 Collateral Impact of Water Resources Development in an Arsenic Affected Village of Patna District
Authors: Asrarul H. Jeelani
Abstract:
Arsenic contamination of groundwater and its health implications in the lower Gangetic plain of Indian states were first reported in the 1980s. The same period was declared the first water decade (1981-1990), with the aim of achieving ‘water for all.’ To fulfill this aim, the Indian government, with the support of international agencies, installed millions of hand-pumps through water resources development programs. The hand-pumps improve the accessibility of groundwater, but over-extraction increases the chance of drawing in trivalent arsenic, which is more toxic than the pentavalent arsenic of dug well water in the Gangetic plain and has different physical manifestations. Now, after three decades, Bihar (middle Gangetic plain) is also facing arsenic contamination of groundwater and its health implications. Objective: This interdisciplinary research attempts to understand the health and social implications of arsenicosis among different castes in Haldi Chhapra village and to find the association of these ramifications with water resources development. Methodology: The study used a concurrent quantitative-dominant mixed method (QUAN+qual). The researcher employed a household survey, social mapping, interviews, and participatory interactions, and used secondary data for a retrospective analysis of hand-pumps and the implications of arsenicosis. Findings: The study found that 88.5% (115) of households have hand-pumps as a source of water, while 13.8% use purified bottled water and 3.6% use combinations of hand-pump, bottled water, and dug well water for drinking purposes. Among the population, 3.65% of individuals have arsenicosis, and 2.72% of children between the ages of 5 and 15 years are affected. The caste variable has also emerged through quantitative as well as geophysical location analysis: 5.44% of individuals with manifested arsenicosis belong to scheduled castes (SC), 3.89% to extremely backward castes (EBC), 2.57% to backward castes (BC), and 3% to others.
Among the three clusters of arsenic-poisoned locations, two belong to SC and EBC. The village, being arsenic-affected, faces discrimination, and affected individuals likewise face discrimination, isolation, stigma, and difficulties in getting married. The forceful intervention to install hand-pumps in the first water decade, and the later restructuring of the dug wells, destroyed a conventional method of dug well cleaning. Conclusion: The common manifestation of arsenicosis has increased by 1.3% within a six-year span in the village. This raises the need to set up a proper surveillance system in the village. It is imperative to consider the social structure in arsenic mitigation programs, as this research reveals caste to be a significant factor. The health and social implications found in the study are retrospectively analyzed as the collateral impact of water resource development programs in the village.
Keywords: arsenicosis, caste, collateral impact, water resources
Procedia PDF Downloads 108
1043 Convention Refugees in New Zealand: Being Trapped in Immigration Limbo without the Right to Obtain a Visa
Authors: Saska Alexandria Hayes
Abstract:
Multiple Convention Refugees in New Zealand are stuck in a state of immigration limbo due to a lack of defined immigration policies. The Refugee Convention of 1951 does not give the right to be issued a permanent right to live and work in the country of asylum. A gap in New Zealand's immigration law and policy has left Convention Refugees without the right to obtain a resident or temporary entry visa. The significant lack of literature on this topic suggests that the lack of visa options for Convention Refugees in New Zealand is a widely unknown or unacknowledged issue. Refugees in New Zealand enjoy the right of non-refoulement contained in Article 33 of the Refugee Convention 1951, whether lawful or unlawful. However, a number of rights contained in the Refugee Convention 1951, such as the right to gainful employment and social security, are limited to refugees who maintain lawful immigration status. If a Convention Refugee is denied a resident visa, the only temporary entry visa a Convention Refugee can apply for in New Zealand is discretionary. The appeal cases heard at the Immigration Protection Tribunal establish that Immigration New Zealand has declined resident and discretionary temporary entry visa applications by Convention Refugees for failing to meet the health or character immigration instructions. The inability of a Convention Refugee to gain residency in New Zealand creates a dependence on the issue of discretionary temporary entry visas to maintain lawful status. The appeal cases record that this reliance has led to Convention Refugees' lawful immigration status being in question, temporarily depriving them of the rights contained in the Refugee Convention 1951 of lawful refugees. In one case, the process of applying for a discretionary temporary entry visa led to a lawful Convention Refugee being temporarily deprived of the right to social security, breaching Article 24 of the Refugee Convention 1951. 
The judiciary has stated that constant reliance on the issue of discretionary temporary entry visas for Convention Refugees can lead to a breach of New Zealand's international obligations under Article 7 of the International Covenant on Civil and Political Rights. The appeal cases suggest that, despite successful judicial proceedings, at least three persons have been made to rely on the issue of discretionary temporary entry visas potentially indefinitely. The appeal cases establish that a Convention Refugee can be denied a discretionary temporary entry visa and become unlawful. Unlawful status could ultimately breach New Zealand's obligations under Article 33 of the Refugee Convention 1951, as it would procedurally deny Convention Refugees asylum. It would force them to choose between the right of non-refoulement and leaving New Zealand to seek the ability to access all the human rights contained in the Universal Declaration of Human Rights elsewhere. This paper discusses how the current system has given rise to these breaches and emphasizes the need to create a designated temporary entry visa category for Convention Refugees.
Keywords: domestic policy, immigration, migration, New Zealand
Procedia PDF Downloads 101
1042 Life Cycle Assessment to Study the Acidification and Eutrophication Impacts of Sweet Cherry Production
Authors: G. Bravo, D. Lopez, A. Iriarte
Abstract:
Several organizations and governments have created a demand for information about the environmental impacts of agricultural products. Today, the export-oriented fruit sector in Chile is being challenged to quantify and reduce its environmental impacts. Chile is the largest southern hemisphere producer and exporter of sweet cherry fruit. Chilean sweet cherry production reached a volume of 80,000 tons in 2012. The main destination market for the Chilean cherry in 2012 was Asia (including Hong Kong and China), taking in 69% of exported volume. Another important market was the United States with 16% participation, followed by Latin America (7%) and Europe (6%). Concerning geographical distribution, Chilean conventional cherry production is concentrated in the center-south area, between the regions of Maule and O’Higgins; together these regions represent 81% of the planted surface. Life Cycle Assessment (LCA) is widely accepted as one of the major methodologies for assessing the environmental impacts of products or services. LCA identifies the material, energy, and waste flows of a product or service and their impact on the environment. There are scant studies examining the impacts of sweet cherry cultivation, such as acidification and eutrophication. Within this context, the main objective of this study is to evaluate, using LCA, the acidification and eutrophication impacts of sweet cherry production in Chile. An additional objective is to identify the agricultural inputs that contribute significantly to the impacts of this fruit. The system under study included all the life cycle stages from the cradle to the farm gate (harvested sweet cherry). The data on sweet cherry production correspond to nationwide representative practices and are based on technical-economic studies and field information obtained in several face-to-face interviews.
The study takes into account the following agricultural inputs: fertilizers, pesticides, diesel consumption for agricultural operations, machinery, and electricity for irrigation. The results indicated that mineral fertilizers are the most important contributors to the acidification and eutrophication impacts of sweet cherry cultivation. Improvement options are suggested for this hotspot in order to reduce the environmental impacts. The results allow fruit companies, policymakers, and other stakeholders to plan and promote low-impact procedures. In this context, this study is one of the first assessments of the environmental impacts of sweet cherry production. New field data or evaluation of other life cycle stages could further improve knowledge of the impacts of this fruit. This study may contribute environmental information to other countries with similar sweet cherry production.
Keywords: acidification, eutrophication, life cycle assessment, sweet cherry production
Procedia PDF Downloads 269
1041 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany
Authors: Ilyas Khan, Liliane Pintelon, Harry Martin, Michael Shömig
Abstract:
The increasing costs of healthcare on the one hand, and the rise in the aging population and associated chronic diseases on the other, are putting an increasing burden on the current health care systems of many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) makes up a very significant share of total health care costs. However, recent advancements in healthcare technology provide the opportunity to treat patients at home in their own comfort. It is evident that home healthcare offers numerous advantages, most apparently low costs and high patient quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, in particular in Germany. Many factors account for the low uptake of HHD. However, this paper focuses on the patient safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify risk-related factors, if any exist. A case study has been conducted in a dialysis organization consisting of four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of which four patients are on HHD. The centers have 126 staff, including six nephrologists and 120 other staff, i.e., nurses and administration. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum; however, the first version has just recently been developed. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any standard available for home assessment and installation. The home assessment is done by a third party (i.e., the machine and equipment provider), which may not consider the hygienic quality of the patient’s home.
The type of machine provided to patients at home is similar to the one in the center. This model may not be suitable for home use because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home, although telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers are lacking medical staff, including nephrologists and renal nurses.
Keywords: home hemodialysis, home hemodialysis practices, patients’ related risks in the current home hemodialysis practices, patient safety in home hemodialysis
Procedia PDF Downloads 117
1040 Socioeconomic Burden of Life Long Disease: A Case of Diabetes Care in Bangladesh
Authors: Samira Humaira Habib
Abstract:
Diabetes has profound effects on individuals and their families. If diabetes is not well monitored and managed, it leads to long-term complications and a large and growing cost to the health care system. Regarding this cost burden, the prevalence and socioeconomic burden of diabetes, and the relative return on investment for eliminating or reducing that burden, are of central importance. The socioeconomic cost burden of diabetes has been well explored in developed countries but remains almost unstudied in developing countries like Bangladesh. The main objective of the study is to estimate the total socioeconomic burden of diabetes. It is a prospective, analytical, longitudinal follow-up study. Primary and secondary data were collected from patients undergoing treatment for diabetes at the out-patient department of the Bangladesh Institute of Research & Rehabilitation in Diabetes, Endocrine & Metabolic Disorders (BIRDEM). Of the 2115 diabetic subjects, females constitute around 50.35% of the study subjects, and the rest are male (49.65%). Among the subjects, 1323 have controlled and 792 have uncontrolled diabetes. Cost analysis of the 2115 diabetic patients shows that the total cost of diabetes management and treatment is US$ 903,018, with an average of US$ 426.95 per patient. Among direct costs, investigations and medical treatment at the hospital constitute most of the cost of diabetes. The average hospital cost is US$ 311.79, which represents an alarming burden for diabetic patients. Among the indirect cost items, the cost of productivity loss (US$ 51,110.1) is the highest; all items together constitute a total indirect cost of US$ 69,215.7. The incremental cost of intensive management of uncontrolled diabetes is US$ 101.54 per patient, the event-free time gained in this group is 0.55 years, and the life years gained are 1.19 years. The incremental cost per event-free year gained is US$ 198.12.
The incremental cost of intensive management of the controlled group is US$ 89.54 per patient and event-free time gained is 0.68 years, and the life year gain is 1.12 years. The incremental cost per event-free year gained is US$ 223.34. The EuroQoL difference between the groups is found to be 64.04. The cost-effective ratio is found to be US$ 1.64 cost per effect in case of controlled diabetes and US$ 1.69 cost per effect in case of uncontrolled diabetes. So management of diabetes is much more cost-effective. Cost of young type 1 diabetic patient showed upper socioeconomic class, and with the increase of the duration of diabetes, the cost increased also. The dietary pattern showed macronutrients intake and cost are significantly higher in the uncontrolled group than their counterparts. Proper management and control of diabetes can decrease the cost of care for the long term.Keywords: cost, cost-effective, chronic diseases, diabetes care, burden, Bangladesh
Procedia PDF Downloads 146
1039 Methylphenidate Use by Canadian Children and Adolescents and the Associated Adverse Reactions
Authors: Ming-Dong Wang, Abigail F. Ruby, Michelle E. Ross
Abstract:
Methylphenidate is a first-line treatment for attention deficit hyperactivity disorder (ADHD), a common mental health disorder in children and adolescents. Over the last several decades, the rate of ADHD medication use among children and adolescents has been increasing in many countries. A recent study found that the prevalence of ADHD medication use among children aged 3-18 years increased in 13 different world regions between 2001 and 2015, with absolute increases ranging from 0.02 to 0.26% per year. The goal of this study was to examine the use of methylphenidate in Canadian children and its associated adverse reactions. Methylphenidate use information among young Canadians aged 0-14 years was extracted from IQVIA data on prescriptions dispensed by pharmacies between April 2014 and June 2020. Adverse reaction information associated with methylphenidate use was extracted from the Canada Vigilance database for the same period. Methylphenidate use trends were analyzed by sex, age group (0-4 years, 5-9 years, and 10-14 years), and geographical location (province). The common classes of adverse reactions associated with methylphenidate use were sorted, and the relative risks associated with methylphenidate use, as compared with two second-line amphetamine medications for ADHD, were estimated. This study revealed that among Canadians aged 0-14 years, every 100 people used about 25 prescriptions (or 23,000 mg) of methylphenidate per year during the study period, and use increased with time. Boys used almost three times more methylphenidate than girls. The amount of drug used increased with age: Canadians aged 10-14 years used nearly three times as much as those aged 5-9 years. Seasonal methylphenidate use patterns were apparent among young Canadians, but the seasonal trends differed among the three age groups.
Methylphenidate use varied from region to region; the highest use was observed in Quebec, where it was at least double that of any other province. During the study period, Health Canada received 304 adverse reaction reports associated with the use of methylphenidate in Canadians aged 0-14 years. The number of adverse reaction reports received for boys was 3.5 times higher than that for girls. The three most common adverse reaction classes were psychiatric disorders; nervous system disorders; and injury, poisoning and procedural complications. The most commonly reported adverse reaction for boys was aggression (11.2%), while for girls it was tremor (9.6%). The safety profile in terms of adverse reaction classes associated with methylphenidate use was similar to that of the selected control products. Methylphenidate is a commonly used pharmaceutical product in young Canadians, particularly in the province of Quebec. Boys used approximately three times more of this product than girls. Future investigation is needed to determine what factors are associated with the observed geographic variations in Canada. Keywords: adverse reaction risk, methylphenidate, prescription trend, use variation
Procedia PDF Downloads 159
1038 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud
Authors: Mokopane Charles Marakalala
Abstract:
Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions caused a sharp increase in cybercrime since the beginning of the lockdown, resulting in a huge loss to the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting mobile fraud at SABRIC. SABRIC faced the challenge of successful mobile fraud: cybercriminals could hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals constantly scour the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to do their regular bank-related transactions quickly and conveniently. SABRIC has regularly highlighted incidents of mobile fraud, corruption, and maladministration; customers who fail to secure their online banking are vulnerable to falling prey to scams such as mobile fraud. Criminals have made use of digital platforms since the development of technology. In 2017, 13 438 incidents involving banking apps, internet banking, and mobile banking caused the sector to suffer gross losses of more than R250,000,000, with the parties involved forced to point fingers at one another while the fraudster makes off with the money. Non-probability (purposive) sampling was used in selecting participants, and data were collected through telephone calls and virtual interviews.
The results indicate that there is a relationship between remote online banking and the increase in money laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the significance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money laundering. The researcher recommends that awareness strategies for bank staff be harnessed through the provision of requisite and adequate training. Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting
Procedia PDF Downloads 99
1037 A Research on the Improvement of Small and Medium-Sized City in Early-Modern China (1895-1927): Taking Southern Jiangsu as an Example
Authors: Xiaoqiang Fu, Baihao Li
Abstract:
In 1895, China's defeat in the Sino-Japanese War prompted a trend of comprehensive and systematic study of Western patterns in China. In urban planning and construction, an urban reform movement slowly sprang up, aimed at renovating and reconstructing traditional cities into modern cities similar to the concessions. During the movement, Chinese traditional cities began a process of modern urban planning toward their modernization. Meanwhile, the traditional planning morphology and system started to disintegrate, while Western forms and technology became the paradigm. The improvement of existing cities thus became the prototype of urban planning in early modern China. Current research on the movement mainly concentrates on large cities, concessions, railway hub cities, and similar special cities; systematic research on the large number of traditional small and medium-sized cities remains a blank. This paper takes the improvement constructions of small and medium-sized cities in the southern region of Jiangsu Province as its research object. First, the criteria for small and medium-sized cities are based on the administrative levels of general office and cities at the county level. Second, Southern Jiangsu is well suited as a research object. The southern area of Jiangsu Province, called Southern Jiangsu for short, was the most economically developed region in Jiangsu, and one of the most economically developed and most urbanized regions in China. As one of the most developed agricultural areas in ancient China, Southern Jiangsu formed a large number of traditional small and medium-sized cities. In early modern times, with the help of Shanghai's economic radiation, its geographical advantage, and a powerful economic foundation, Southern Jiangsu became an important birthplace of Chinese national industry.
Furthermore, the strong business atmosphere promoted widespread urban improvement practices unmatched by other regions. Meanwhile, Shanghai, Zhenjiang, Suzhou, and other port cities served as models for the improvement of small and medium-sized cities in Southern Jiangsu. This paper analyzes the reform movement of the small and medium-sized cities in Southern Jiangsu (1895-1927), including the subjects, objects, laws, technologies, and the influencing factors of politics and society. Finally, it reveals the formation mechanism and characteristics of the urban improvement movement in early modern China. According to the paper, the improvement of small and medium-sized cities was a kind of gestation of local city planning culture in early modern China, combining external introduction with endogenous development. Keywords: early modern China, improvement of small-medium city, southern region of Jiangsu province, urban planning history of China
Procedia PDF Downloads 259
1036 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model
Authors: A. Shakoor, M. Arshad
Abstract:
The utilization of groundwater resources for irrigation has significantly increased during the last two decades due to constrained canal water supplies. More than 70% of the farmers in the Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and this unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, on the spatiotemporal variation in groundwater level and quality. The Processing MODFLOW for Windows (PMWIN) and MT3D (solute transport) models were used for prediction of existing and future groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in the PMWIN model development. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R2) and model efficiency (MEF) for the calibration and validation periods were calculated as 0.89 and 0.98, respectively, indicating a high level of correlation between the calculated and measured data. For the solute transport model (MT3D), values of advection and dispersion parameters were used. The model was then run for future scenarios up to 2030, assuming no substantial change in climate and a gradually increasing groundwater abstraction rate. The model predicted that the groundwater level would decline at rates of 0.0131 to 1.68 m/year from 2013 to 2030, with the maximum decline on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level might increase tubewell installation and pumping costs.
Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase at rates of 6.88 to 69.88 mg/L per year from 2013 to 2030, with the maximum increase on the lower side. It was found that by 2030, good-quality water would be reduced by 21.4%, while marginal- and hazardous-quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorated with the depth of the water table, i.e., TDS increased with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (aquifer storage and recovery) wells, etc., be integrated to improve the management of groundwater for higher crop production in salt-affected soils. Keywords: groundwater quality, groundwater management, PMWIN, MT3D model
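The calibration and validation statistics reported above (R2 and model efficiency, MEF) can be computed from paired observed and simulated groundwater levels. A minimal sketch, assuming MEF is the Nash-Sutcliffe model efficiency commonly used in groundwater model evaluation (the function and variable names are illustrative, not taken from the study):

```python
import numpy as np

def calibration_stats(observed, simulated):
    """Return (R2, MEF) for simulated vs. observed groundwater levels.

    R2 is the squared Pearson correlation coefficient; MEF is taken
    here as the Nash-Sutcliffe model efficiency:
    1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    """
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]          # Pearson correlation
    sse = np.sum((obs - sim) ** 2)           # sum of squared errors
    sst = np.sum((obs - obs.mean()) ** 2)    # spread of observations
    return r ** 2, 1.0 - sse / sst
```

Note that the two statistics answer different questions: a biased model can still score R2 = 1 (perfect correlation) while MEF penalizes the systematic offset, which is why both are reported together.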
Procedia PDF Downloads 375
1035 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate engineering science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering specifically cite Calculus as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our Software Engineering students, Calculus 1 at Universidad ORT Uruguay focuses on competencies such as capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE guidelines). Every semester we reflect on our practice and try to answer the following research question: What kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students merely to transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using GeoGebra (interactive geometry and computer algebra system (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found.
The weekly hours of mathematics were excessive for students, and, as the course was non-compulsory, attendance decreased with time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure at working with this methodology. This technology-oriented teaching approach strengthens the math competencies students need for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate these preliminary results. Keywords: calculus, engineering education, precalculus, summer program
Procedia PDF Downloads 289
1034 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several attractive features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor only captures pixels whose intensity changes, so there is no signal in areas without any intensity change. In other words, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data are removed at the source. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include an intensity such as an RGB value. Therefore, existing algorithms cannot be used straightforwardly, and a new processing algorithm must be designed to cope with DVS data. To overcome these format differences, most prior art constructs frame data and feeds it to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition. However, even when the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity in place of an RGB pixel value, polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning.
Concretely, we first construct frame data divided by a certain time period, then assign an intensity value according to the timestamp of each signal within the frame; for example, a higher value is given to a more recent signal. We expected this data representation to capture the features of moving objects in particular, because the timestamp reflects movement direction and speed. Using the proposed method, we built our own dataset with a DVS mounted on a parked car to develop a surveillance application that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which constructs frames in the same way but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark. Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
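The timestamp-based frame construction described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and the exact recency normalization are assumptions, with the latest event at each pixel setting an intensity proportional to how recent it is within the frame window:

```python
import numpy as np

def events_to_timestamp_frame(events, width, height, t_start, t_end):
    """Encode DVS events falling in [t_start, t_end) as one frame whose
    pixel values reflect event recency: more recent events get values
    closer to 1, and pixels with no event stay at 0.

    `events` is an iterable of (x, y, polarity, timestamp) tuples;
    polarity is ignored here because intensity comes from the timestamp.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    span = float(t_end - t_start)
    for x, y, _polarity, t in events:
        if t_start <= t < t_end:
            value = (t - t_start + 1) / span       # recency in (0, 1]
            frame[y, x] = max(frame[y, x], value)  # keep the latest event
    return frame
```

A frame produced this way can be fed to a CNN like any single-channel image; a moving object leaves a gradient of values along its trajectory, which is how the representation encodes direction and speed.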
Procedia PDF Downloads 97
1033 Didacticization of Code Switching as a Tool for Bilingual Education in Mali
Authors: Kadidiatou Toure
Abstract:
Mali started experimenting with teaching the national languages at school through the convergent pedagogy in 1987. The approach became widespread in 1994, with eleven of the thirteen national languages used at primary school. The aim was to improve the Malian educational system, because the use of French as the only medium of instruction was considered a contributing factor to the significant number of student dropouts and the high rate of repetition. The convergent pedagogy highlights the knowledge children acquire at home, their vision of the world, and especially the knowledge they have of their mother tongue. That pedagogy requires the use of a specific medium during classroom practices, and teachers have been trained accordingly. The specific medium depends on the learning content: sometimes it is French, other times the national language. Research has shown that bilingual learners do not only use the required medium in their learning activities; they code switch (CS), and this is part of their learning processes. Currently, many scholars agree on the importance of CS in bilingual classes, and teachers have been told about the necessity of integrating it into their classroom practices. One of the challenges of the Malian bilingual education curriculum is the question of effective language management. Theoretically, depending on the classroom, an average share has been established for each of the languages involved. In practice, teachers make use of CS differently: sometimes it favors the learners, other times it contributes to the development of linguistic weaknesses. The present research tries to fill that gap through a tentative model of the didactization of CS, which simply means the practical management of the languages involved in bilingual classrooms: knowing how to use CS for effective learning.
Moreover, the didactization of CS tends to sensitize teachers to the functional role of CS so that they may overcome their own weaknesses. The overall goal of this research is to make code switching a real tool for bilingual education. The specific objectives are: to identify the types of CS used during classroom activities; to present the functional role of CS for teachers as well as pupils; and to develop a tentative model of code switching that will help teachers in transitional classes of bilingual schools recognize the appropriate moment for using code switching in their classrooms. The methodology adopted is qualitative. The study is based on recorded videos of third-year primary school teachers during their classroom activities and on interviews with the teachers to confirm the functional role of CS in bilingual classes. The theoretical framework adopted is the typology of CS proposed by Poplack (1980), used to identify the types of CS employed. The study reveals that teachers need to be trained on the types of CS, the different functions they assume, and the consequences of inappropriate use of language alternation. Keywords: bilingual curriculum, code switching, didactization, national languages
Procedia PDF Downloads 68
1032 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
The recent evolution of innovation processes and of intrinsic tendencies in the product development process leads to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, as well as IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. Taken together, from the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of provisional sets of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new human-algorithm relationship. This contribution investigates the human-algorithm relationship starting from the state of the art of the Made in Italy model: the best-known fields of application are described, and the focus then shifts to specific cases in which the mutual relationship between humans and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values.
The trajectory described thus becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact bears a signature that defines all of its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with the design culture. Keywords: decision making, design heuristics, product design, product design process, design paradigms
Procedia PDF Downloads 118
1031 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete
Authors: D. Falliano, G. Ricciardi, E. Gugliandolo
Abstract:
Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand, and possibly fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized in the following aspects: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous in refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially at low densities; 3) good fire resistance compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness due to the use of rather simple constituent elements that are easily available locally. Classic foamed concrete cannot be extruded, as it lacks dimensional stability in the green state; this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, the viscosity-enhancing agents (VEAs) used to extrude traditional concrete cause the air bubbles in foamed concrete to collapse, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, allowing dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation for a patent.
In addition to the general benefits of the extrusion process, extrudable foamed concrete allows other limits to be overcome: the elimination of formworks and an expanded application spectrum, thanks to the possibility of extrusion at densities between 200 and 2000 kg/m³, which allows the prefabrication of both structural and non-structural constructive elements. This contribution also presents the significant differences between the fresh properties of extrudable and classic foamed concrete in terms of slump. Plastic air content, plastic density, hardened density, and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between the compressive strengths of extrudable and classic foamed concrete. Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump
Procedia PDF Downloads 172
1030 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of the flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry; opponents, on the other hand, argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data for 2013 to 2016 used in this study were collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatial-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model.
This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A body of literature indicates that the hedonic price of certain environmental assets varies spatially when GWR is applied. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatial-related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, since the effect of flood prevention might vary dramatically by location. Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
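As an illustration of the GWR building block used in such analyses, local coefficients at a target location can be obtained by weighted least squares with a distance-decay kernel. This is a generic sketch under a Gaussian-kernel assumption; the function name, kernel choice, and fixed bandwidth are illustrative, not the authors' specification:

```python
import numpy as np

def gwr_coefficients(X, y, coords, target, bandwidth):
    """Local hedonic coefficients at `target` via geographically
    weighted least squares with a Gaussian distance-decay kernel.

    X: (n, k) design matrix (first column of ones for the intercept),
    y: (n,) observed prices, coords: (n, 2) locations, target: (2,).
    """
    d = np.linalg.norm(coords - np.asarray(target, dtype=float), axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
    XtW = X.T * w                             # weight each observation
    return np.linalg.solve(XtW @ X, XtW @ y)  # solve (X'WX) b = X'Wy
```

Repeating this at every observation point yields the spatially varying coefficient surface that GWR reports; the spatial fixed effects discussed above would enter as additional dummy columns in X.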
Procedia PDF Downloads 289
1029 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations
Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle
Abstract:
This paper presents the Sea Striker, a compact hydrofoil designed to address some of the issues raised by the recent evolution of naval missions, threats, and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. Its stealthiness is enabled by the combination of a composite structure, exterior design, and advanced sensor integration. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of ten, the hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. Such a ship is not new: similar vessels have been used in the past by different navies, for example by the US Navy with the USS Pegasus. Nevertheless, recent evolutions in science and technology on the one hand, and the emergence of new missions, threats, and operation theatres on the other, put this concept forward as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be given regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in sensor and weapon types, ranges, and, more generally, capacities; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, drone swarms, or hypersonic missiles; and the growing number of operation theatres located in coastal and shallow waters. This research comprised a complete study of the ship, followed by several operational performance computations, in order to justify the relevance of using ships like the Sea Striker in naval surface operations.
For the selected scenarios, the design process enabled the measurement of performance, namely a “Measure of Efficiency” in the NATO framework, for two different kinds of models: a centralized, classic model using large and powerful ships, and a distributed model relying on several Sea Strikers. After this stage, a comparison of the two models was performed. Lethal, agile, stealthy, compact, and fitted with a complete set of sensors, the Sea Striker is a new major player in modern warfare and constitutes a very attractive option between the naval unit and the combat helicopter, enabling high operational performance at a reduced cost.
Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal
Procedia PDF Downloads 116
1028 The Influence of Nutritional and Immunological Status on the Prognosis of Head and Neck Cancer
Authors: Ching-Yi Yiu, Hui-Chen Hsu
Abstract:
Objectives: Head and neck cancer (HNC) is a major global health problem. Despite advances in diagnosis and treatment, the overall survival of HNC patients is still low. Growing recognition of the interaction between the host immune system and cancer cells has clarified the processes of tumor initiation, progression, and metastasis, and many systemic inflammatory responses have been shown to play a crucial role in cancer progression. The pre- and post-treatment nutritional and immunological status of HNC patients is a reliable prognostic indicator of tumor outcome and survival. Methods: Between July 2020 and June 2022, we enrolled 60 HNC patients (59 males and 1 female) at Chi Mei Medical Center, Liouying, Taiwan. The age distribution was 37 to 81 years, with a mean age of 57.6 years. We evaluated the pre- and post-treatment nutritional and immunological status of these patients using body weight, body weight loss, body mass index (BMI), complete blood count including hemoglobin (Hb), lymphocyte, neutrophil, and platelet counts, and biochemistry including prealbumin, albumin, and C-reactive protein (CRP), measured before treatment and at 3 and 6 months post-treatment. We calculated the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) to assess how these biomarkers influence the outcomes of HNC patients. Results: There were 21 cases (35%) of carcinoma of the hypopharynx, 9 of the larynx, 6 each of the tonsil and tongue, 5 each of the soft palate and tongue base, 2 each of the buccal mucosa, retromolar trigone, and mouth floor, and 1 each of the hard palate and lower lip. There were 15 stage I cases, 13 stage II, 6 stage III, 10 stage IVA, and 16 stage IVB. All patients received surgery, chemoradiation therapy, or combined therapy.
There were 6 cases of wound infection, 2 of pharyngocutaneous (PC) fistula, 2 of flap necrosis, and 6 deaths. In the wound infection group, the average BMI was 20.4 kg/m2, average Hb 12.9 g/dL, average albumin 3.5 g/dL, average NLR 6.78, and average PLR 243.5. In the PC fistula and flap necrosis group, the average BMI was 21.65 kg/m2, average Hb 11.7 g/dL, average albumin 3.15 g/dL, average NLR 13.28, and average PLR 418.84. In the mortality group, the average BMI was 22.3 kg/m2, average Hb 13.58 g/dL, average albumin 3.77 g/dL, average NLR 6.06, and average PLR 275.5. Conclusion: HNC is a challenging public health problem worldwide, especially in Taiwan, where betel nut consumption is highly prevalent. Besides the established risk factors of smoking, drinking, and betel nut use, other biomarkers may serve as significant prognosticators of HNC outcomes. We conclude that when the average BMI is less than 22 kg/m2, Hb lower than 12.0 g/dL, albumin lower than 3.3 g/dL, NLR higher than 3, and PLR higher than 170, surgical complications and mortality increase, and the prognosis of HNC patients is poor.
Keywords: nutritional, immunological, neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio
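The two ratios are straightforward to compute from a blood count; the sketch below also flags the cut-offs discussed in the conclusion. The threshold directions (high NLR/PLR and low BMI, Hb, and albumin as unfavorable) follow the group averages reported in the abstract; the function names and exact flag set are illustrative only, not a clinical tool.

```python
def nlr(neutrophils, lymphocytes):
    # Neutrophil-to-lymphocyte ratio
    return neutrophils / lymphocytes

def plr(platelets, lymphocytes):
    # Platelet-to-lymphocyte ratio
    return platelets / lymphocytes

def poor_prognosis_flags(bmi, hb, albumin, nlr_value, plr_value):
    """Flag the cut-offs discussed in the abstract (illustrative thresholds)."""
    return {
        "low_bmi": bmi < 22.0,         # kg/m2
        "low_hb": hb < 12.0,           # g/dL
        "low_albumin": albumin < 3.3,  # g/dL
        "high_nlr": nlr_value > 3.0,
        "high_plr": plr_value > 170.0,
    }
```

Applied to the PC fistula and flap necrosis group averages (BMI 21.65, Hb 11.7, albumin 3.15, NLR 13.28, PLR 418.84), every flag is raised, consistent with that group's poor outcomes.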
Procedia PDF Downloads 79
1027 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomic, and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in their biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the cultivars, also as a function of harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that predicted the relative amounts of the main fatty acids and carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation can have a strong influence on the composition of both major and minor compounds of EVOO. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of the environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed.
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs according to their regional geographical origin was obtained. Raman spectra were acquired with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes it possible to assess the content of specific samples and allows oils to be classified according to their geographical and varietal origin.
Keywords: authentication, chemometrics, olive oil, Raman spectroscopy
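As a toy stand-in for the chemometric pipeline (the study used PCA, LDA, and PLS regression, not this method), a nearest-centroid classifier illustrates the core idea of spectral fingerprinting: spectra of oils from the same origin cluster together, so an unknown sample is assigned to the nearest cluster. The "spectra" below are short invented intensity vectors (e.g., a few carotenoid and fatty-acid band intensities) and the region labels are hypothetical.

```python
def centroid(spectra):
    # Mean spectrum of a class (element-wise average)
    n = len(spectra)
    return [sum(s[i] for s in spectra) / n for i in range(len(spectra[0]))]

def classify(spectrum, centroids):
    """Assign a spectrum to the class with the nearest centroid (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(spectrum, centroids[label]))

# Toy training "spectra" per (hypothetical) region of origin
train = {
    "RegionA": [[1.0, 0.20, 0.50], [1.1, 0.25, 0.45]],
    "RegionB": [[0.4, 0.90, 0.30], [0.45, 0.85, 0.35]],
}
cents = {label: centroid(specs) for label, specs in train.items()}
```

A real pipeline would first reduce dimensionality (PCA) and fit a discriminant model (LDA) on hundreds of spectral channels; the distance-to-cluster logic, however, is the same.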
Procedia PDF Downloads 330
1026 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces, which gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system; its premise is that a learned model can help find new failure scenarios, making better use of simulations. Despite its strengths, AST fails to find particularly sparse failures and is inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single-KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
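The “knows what it knows” idea can be sketched with a minimal learner for a deterministic transition model: it answers only for (state, action) pairs it has already observed and otherwise reports that it does not know, the signal that triggers further sampling (possibly from a cheaper, lower-fidelity simulator). This is an illustrative toy under that assumption, not the framework's actual learner, which must also handle stochastic dynamics and fidelity transfer.

```python
BOTTOM = "unknown"  # KWIK's "I don't know" answer

class KwikLearner:
    """Minimal KWIK learner for a deterministic grid-world transition model.

    A KWIK learner is allowed to decline to predict: for unseen
    (state, action) pairs it returns BOTTOM instead of guessing,
    so every answer it does give is accurate.
    """
    def __init__(self):
        self.model = {}  # (state, action) -> next_state

    def predict(self, state, action):
        return self.model.get((state, action), BOTTOM)

    def observe(self, state, action, next_state):
        # Record an observed transition; future predictions are exact.
        self.model[(state, action)] = next_state
```

In a multi-fidelity setting, a BOTTOM answer from the high-fidelity learner is exactly the place where a lower-fidelity simulation can be queried first, which is what keeps expensive high-fidelity samples to a minimum.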
Procedia PDF Downloads 155
1025 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for convergence and a more effective solution. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial definition, which can be obtained through a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals who take decisions affecting one another, so effective coordination among these decision-makers is critical: finding a mutually agreeable solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the process of raising the maturity level of the mission concept. This speed-up is obtained through a guided exploration of the negotiation space, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders.
Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
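The search for equilibria among stakeholder needs can be illustrated with a small Pareto filter over candidate designs, each scored with one utility value per stakeholder, plus a product-of-utilities (Nash-style) welfare criterion to pick a balanced point from the front. The utility values below are invented for illustration; the actual methodology couples this kind of filtering with collaborative optimization and an evolutionary algorithm over a far larger space.

```python
def dominates(u, v):
    # u dominates v if it is at least as good for every stakeholder
    # and strictly better for at least one.
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(alternatives):
    """Keep the non-dominated utility vectors (one entry per stakeholder)."""
    return [u for u in alternatives
            if not any(dominates(v, u) for v in alternatives if v != u)]

def nash_welfare(u):
    # Product of stakeholder utilities: rewards balanced outcomes,
    # since any single near-zero utility collapses the product.
    p = 1.0
    for x in u:
        p *= x
    return p

# Hypothetical alternatives scored by two stakeholders (utilities in [0, 1])
alts = [(0.9, 0.2), (0.6, 0.6), (0.2, 0.9), (0.5, 0.5)]
front = pareto_front(alts)           # (0.5, 0.5) is dominated by (0.6, 0.6)
best = max(front, key=nash_welfare)  # the balanced point wins
```

Note how the welfare criterion selects (0.6, 0.6) over the lopsided (0.9, 0.2), even though both lie on the Pareto front: this is the "balanced preliminary design solution" idea in miniature.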
Procedia PDF Downloads 136
1024 Optimization of the Feedstock Supply of an Oilseeds Conversion Unit for Biofuel Production in West Africa: A Comparative Study of the Supply of Jatropha curcas and Balanites aegyptiaca Seeds
Authors: Linda D. F. Bambara, Marie Sawadogo
Abstract:
Jatropha curcas (jatropha) is the plant that has been studied the most for biofuel production in West Africa. However, other plants, such as Balanites aegyptiaca (balanites), have been targeted as potential feedstocks for biofuel production. This biomass could be an alternative feedstock for the production of straight vegetable oil (SVO) at costs lower than jatropha-based SVO production costs. This study aims, firstly, to determine, through an MILP model, the optimal organization that minimizes the oilseed supply costs of two biomass conversion units (BCUs) exploiting jatropha seeds and balanites seeds, respectively, and secondly, to compare the costs obtained for each BCU. The model was implemented on two theoretical case studies built on the basis of common practices in Burkina Faso, and two scenarios were examined for each case study. In Scenario 1, three pre-processing locations (“at the harvesting area”, “at the gathering points”, “at the BCU”) are possible; in Scenario 2, only one location (“at the BCU”) is possible. For each biomass, the system studied is the upstream supply chain (harvesting, transport, and pre-processing (drying, dehulling, depulping)), including cultivation (for jatropha). The model optimizes the area of land to be exploited based on the productivity of the studied plants and the material losses that may occur during harvesting and supply of the BCU. It then defines the configuration of the logistics network allowing an optimal supply of the BCU, taking into account the most common means of transport in West African rural areas. For the two scenarios, the results showed that the total area exploited for balanites (1807 ha) is 4.7 times greater than that for jatropha (381 ha). In both case studies, pre-processing “at the harvesting area” was always chosen in Scenario 1.
As the balanites trees were not planted, and because the first harvest of jatropha seeds took place 4 years after planting, the cost price of jatropha seeds at the BCU, excluding pre-processing costs, was about 430 XOF/kg. This cost is 3 times higher than that of balanites, which is 140 XOF/kg. After the first year of harvest, i.e., 5 years after planting, and assuming that the yield remains constant, the same cost price is about 200 XOF/kg for jatropha, still 1.4 times greater than that of balanites. The transport cost of balanites seeds is about 120 XOF/kg, similar to that of jatropha seeds. However, when pre-processing is located at the BCU, i.e., in Scenario 2, the transport cost of balanites seeds is 1200 XOF/kg, 6 times greater than the transport cost of jatropha, which is 200 XOF/kg. These results show that the cost price of balanites seeds at the BCU can be competitive with that of jatropha if pre-processing is located at the harvesting area.
Keywords: Balanites aegyptiaca, biomass conversion, Jatropha curcas, optimization, post-harvest operations
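The cost ratios quoted in the abstract can be checked directly from the reported figures. The sketch below only reproduces that arithmetic (it is not the MILP model itself); variable names are ours.

```python
# Seed cost prices at the BCU quoted in the abstract (XOF/kg),
# excluding pre-processing costs.
jatropha_seed_cost_y4 = 430   # first jatropha harvest, 4 years after planting
jatropha_seed_cost_y5 = 200   # after the first year of harvest (year 5 on)
balanites_seed_cost = 140     # balanites (trees not planted, so no setup lag)

ratio_first = jatropha_seed_cost_y4 / balanites_seed_cost  # ~3.1x ("3 times")
ratio_later = jatropha_seed_cost_y5 / balanites_seed_cost  # ~1.4x

# Scenario 2 (all pre-processing at the BCU): transport costs (XOF/kg)
balanites_transport_s2 = 1200
jatropha_transport_s2 = 200
transport_ratio = balanites_transport_s2 / jatropha_transport_s2  # 6.0x
```

The numbers confirm the abstract's comparison: balanites seeds are cheaper at the BCU gate, but only so long as pre-processing stays at the harvesting area, since moving it to the BCU multiplies balanites transport costs sixfold.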
Procedia PDF Downloads 336
1023 Trafficking of Women in Assam: The Untold Violation of Women's Human Rights
Authors: Mridula Devi
Abstract:
Trafficking of women is a slur on human dignity and a shameful act against human civilization and development. It is one of the most brazen abuses, violating women's human rights. In India, and more particularly in Assam, the victims of human trafficking and of the infringement of individual human rights are mainly the women and girl children of the state. Trafficking in the North East region of India, particularly in Assam, occurs in two ways: internal trafficking of women and girl children from conflict-affected rural areas of Assam for domestic work and prostitution, and trafficking of women to other countries, such as Bangladesh, Bhutan, Thailand, and Myanmar (Burma), for purposes including drug trafficking, labor, bar work, and prostitution. Historically, trafficking in human beings is associated with slavery and bonded or forced labor. Since Roman civilization, traffic in persons was practiced in the form of the slave trade among nations. With the rise of new imperialism, slavery became an integral part of the colonial system of European countries, and with time it almost became synonymous with prostitution or commercial sexual exploitation. Finally, the United Nations adopted the Convention for the Suppression of the Traffic in Persons and of the Exploitation of the Prostitution of Others, 1949, by General Assembly Resolution 317(IV). The Convention denounces traffic in persons for the purpose of prostitution. However, it is important to note that nowadays trafficking is not confined to the commercial sexual exploitation of women and children alone: it has myriad forms, and the number of victims has been steadily rising over the past few decades. In Assam, it takes place through and for marriage, sexual exploitation, begging, organ trading, militancy conflicts, drug peddling and smuggling, labor, adoption, entertainment, and sports. In this paper, an empirical methodology has been used.
The study is based on primary and secondary sources. Data are collected from books, publications, newspapers, journals, etc. For empirical analysis, random samples were collected and systematized for better results. India suffers the ignominy of being one of the biggest hubs of women trafficking in the world. Over the years, Assam, in the north east of India, has borne the brunt of the rapidly rising evil of trafficking of women, which threatens the life, dignity, and human rights of women. Though different laws have been adopted at the international and national levels to restrain trafficking, the menace of trafficking of women in Assam has not decreased; rather, it has increased. This causes a serious violation of women's human rights in Assam. Human trafficking, and women's trafficking in particular, is a serious crime against society. To curb it in Assam, effective and dedicated measures are required at the state level as well as at the national and international levels.
Keywords: Assam, human trafficking, sexual exploitation, India
Procedia PDF Downloads 513
1022 An Integrative Review on the Experiences of Integration of Quality Assurance Systems in Universities
Authors: Laura Mion
Abstract:
Concepts of quality assurance and management are now part of the organizational culture of universities. Quality assurance (QA) systems are, in large part, provided for by national regulatory dictates or supranational indications (such as, at the European level, the ESG, the European Standards and Guidelines), but their specific definition, in terms of guiding principles, requirements, and methodologies, is often delegated to national evaluation agencies or to the autonomy of individual universities. For this reason, the experiences of implementation of QA systems in different countries and in different universities are an interesting source of information for understanding how quality in universities is understood, pursued, and verified. The literature often deals with the experiences of implementation of QA systems in the individual areas in which a university's activity is carried out (teaching, research, third mission) but only rarely considers quality systems with a systemic and integrated approach, which allows subjects, actions, and performance to be correlated in a virtuous circuit of continuous improvement. In particular, it is interesting to understand how to relate the results and uses of QA across the triple distinction of university activities, identifying how one can drive the performance of another as a function of an integrated whole, and not as an exploit of specific activities or processes conceived in an abstractly atomistic way. The aim of the research is, therefore, to investigate which experiences of “integrated” QA systems are present on the international scene: starting from the experience of European countries that have long shared the Bologna Process for the creation of the European Higher Education Area (EHEA), but also considering experiences from emerging countries that use QA processes to develop their higher education systems and keep them up to date with international levels.
The concept of “integration” in this research is understood in a double meaning: i) integration between the different areas of activity, in particular between the teaching and research areas, and possibly with the so-called “third mission”; ii) functional integration between those involved in quality assessment and management and the governance of the university. The paper will present the results of a systematic review conducted according to an integrative review method, aimed at identifying best practices of quality assurance systems with a high level of integration in individual countries or individual universities. The analysis of the material thus obtained has made it possible to grasp common and transversal elements of QA system integration practices, as well as particularly interesting elements and strengths of these experiences that can therefore be considered winning aspects of QA practice. The paper will present the method of analysis carried out and the characteristics of the experiences identified, highlighting their structural elements (level of integration, areas considered, organizational levels included, etc.) and the elements for which these experiences can be considered best practices.
Keywords: quality assurance, university, integration, country
Procedia PDF Downloads 86
1021 Using Lysosomal Immunogenic Cell Death to Target Breast Cancer via Xanthine Oxidase/Micro-Antibody Fusion Protein
Authors: Iulianna Taritsa, Kuldeep Neote, Eric Fossel
Abstract:
Lysosome-induced immunogenic cell death (LIICD) is a powerful mechanism for targeting cancer cells that kills circulating malignant cells and primes the host's immune cells against future recurrence. Current immunotherapies for cancer are limited in preventing recurrence, a gap that can be bridged by training the immune system to recognize cancer neoantigens. Lysosomal leakage can be induced therapeutically to traffic antigens from dying cells to dendritic cells, which can later present those tumorigenic antigens to T cells. Previous research has shown that oxidative agents administered in the tumor microenvironment can initiate LIICD. We generated a fusion protein between an oxidative enzyme, xanthine oxidase (XO), and a mini-antibody specific for EGFR/HER2-sensitive breast tumor cells. The anti-EGFR single-domain antibody fragment is sourced from llama and is functional without a light chain. These llama micro-antibodies have been shown to penetrate tissues better and to have improved physicochemical stability compared with traditional monoclonal antibodies. We demonstrate that the fusion protein created is stable and can induce early markers of immunogenic cell death in an in vitro human breast cancer cell line (SkBr3). Specifically, we measured overall cell death as well as surface-expressed calreticulin, extracellular ATP release, and HMGB1 production, which are consensus indicators of ICD. Flow cytometry, luminescence assays, and ELISA were used, respectively, to quantify biomarker levels in treated versus untreated cells. We also included a positive control group of SkBr3 cells dosed with doxorubicin (a known inducer of LIICD) and a negative control dosed with cisplatin (a known inducer of cell death, but not of the immunogenic variety). We examined each marker at various time points after the cancer cells were treated with the XO/antibody fusion protein, doxorubicin, and cisplatin.
Upregulated biomarkers after treatment with the fusion protein indicate an immunogenic response. We thus show the potential of this fusion protein to induce an anticancer effect paired with an adaptive immune response against EGFR/HER2+ cells. Our research in human cell lines provides evidence supporting the same therapeutic approach in patients and serves as a gateway to developing a new treatment against breast cancer.
Keywords: apoptosis, breast cancer, immunogenic cell death, lysosome
Procedia PDF Downloads 198