Search results for: intelligence and security
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3937

247 New Territories: Materiality and Craft from Natural Systems to Digital Experiments

Authors: Carla Aramouny

Abstract:

Digital fabrication, between advancements in software and machinery, is pushing practice today towards more complexity in design, allowing for unparalleled explorations. It gives designers the immediate capacity to turn their imagined objects into physical results. Yet at no time have questions of material knowledge been more relevant and crucial, as technological advancements approach a radical re-invention of the design process. As more and more designers look towards tactile crafts for material know-how, an interest in natural behaviors has also emerged, seeking to embed intelligence from nature into designed objects. Concerned with enhancing their immediate environment, designers today are pushing the boundaries of design by bringing in natural systems, materiality, and advanced fabrication as essential processes to produce active designs. New Territories, a yearly architecture and design course on digital design and materiality, allows students to explore processes of digital fabrication at the intersection of natural systems and hands-on experiments. This paper will highlight the importance of learning from nature and from physical materiality in a digital design process, and how the simultaneous move between the digital and physical realms has become an essential design method. It will detail the work done over the course of three years, on themes of natural systems, crafts, concrete plasticity, and active composite materials. The aim throughout the course is to explore the design of products and active systems, be it modular facades, intelligent cladding, or adaptable seating, by embedding current digital technologies with an understanding of natural systems and a physical know-how of material behavior. From this aim, three main themes of inquiry have emerged through the varied explorations across the three years, each approaching materiality and digital technologies through a different lens.
The first theme crosses the study of natural systems as precedents for intelligent formal assemblies with traditional craft methods. The students worked on designing performative facade systems, starting from the study of relevant natural systems and a specific craft, and then using parametric modeling to develop their modular facades. The second theme looks at the intersection of craft and digital technologies through form-finding techniques and elastic material properties, bringing flexible formwork into the digital fabrication process. Students explored concrete plasticity and behaviors with natural references, as they worked on the design of an exterior seating installation using lightweight concrete composites and complex casting methods. The third theme brings bio-composite material properties together with additive fabrication and environmental concerns to create performative cladding systems. Students experimented with concrete composite materials, biomaterials, and clay 3D printing to produce different cladding and tiling prototypes that actively enhance their immediate environment. This paper will thus detail the work process carried out by the students under these three themes of inquiry, describing their material experimentation, digital and analog design methodologies, and their final results. It aims to shed light on the persisting importance of material knowledge as it intersects with advanced digital fabrication, and on the significance of learning from natural systems and biological properties to embed an active performance in today’s design process.

Keywords: digital fabrication, design and craft, materiality, natural systems

Procedia PDF Downloads 104
246 ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship

Authors: Edward Shizha

Abstract:

Citizenship is not only political; it is also a socio-cultural expectation that naturalized immigrants desire. However, the outcomes of citizenship desirability are determined by forces outside the individual’s control, based on legislation and laws designed at the macro- and exosystemic levels by politicians and policy makers. These laws are then applied to determine the status (permanency or temporariness) of citizenship for immigrants and refugees, but the same laws do not apply to non-immigrant citizens who attain it by birth. While, theoretically, citizenship has generally been considered an irrevocable legal status and the highest and most secure legal status one can hold in a state, it is not inviolate for immigrants. While Article 8 of the United Nations Convention on the Reduction of Statelessness provides grounds for revocation of citizenship obtained by immigrants and refugees in host countries, nation-states have their own laws, tied to the convention, that provide grounds for revocation. Ever since the 9/11 attacks in the USA, there has been a rise in conditional citizenship and in the state’s withdrawal of citizenship through revocation laws that denaturalize citizens, who end up losing not merely their citizenship but also the right to reside in the country of immigration. Because immigrants can be perceived as a security threat, citizenship has been securitized, and legislative changes have been adopted specifically to allow greater discretionary power in stripping people of their citizenship. The paper ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship examines literature on the temporality of naturalized citizenship and questions whether citizenship, for newcomers (immigrants and refugees), is a protected human right or a privilege. The paper argues that citizenship in a host country is a well sought-after status for newcomers.
The question is whether their citizenship, if granted, has a permanent or temporary status, and whether it is treated in the same way as that of non-immigrant citizens. The paper further argues that, despite citizenship generally being considered an irrevocable status in most Western countries, in practice, if not in law, citizenship for immigrants and refugees comes with strings attached because of policies and laws that control naturalized citizenship. These laws can be used to denationalize naturalized citizens through revocations aimed at those stigmatized as ‘undesirables’, who are threatened with deportation. Whereas non-immigrant citizens (those who attain citizenship by birth) have an absolute right to their citizenship, this is seldom the case for immigrants. This paper takes a multidisciplinary approach, using the macrosystem and exosystem of Urie Bronfenbrenner’s ecological systems theory, to examine and review literature on the temporality of naturalized citizenship and to question whether citizenship is a protected right or a privilege for immigrants. The paper challenges the human rights violation of citizenship revocation and argues for equality of treatment for all citizens, regardless of how they acquired their citizenship. The fragility of naturalized citizenship undermines the basic rights and securities that citizenship status can provide to the person as an inclusive practice in a diverse society.

Keywords: citizenship, citizenship revocation, dual citizenship, human rights, naturalization, naturalized citizenship

Procedia PDF Downloads 40
245 Protection of Victims’ Rights in International Criminal Proceedings

Authors: Irina Belozerova

Abstract:

In recent years, the number of crimes against peace and humanity has constantly been increasing. The development of the international community is inseparably connected to compliance with the law, which protects the rights and interests of citizens in all of their manifestations. The provisions of the law of criminal procedure are no exception. The rights of the victims of genocide, war crimes, and crimes against humanity require particular attention. These crimes fall within the jurisdiction of the International Criminal Court, governed by the Rome Statute of the International Criminal Court. These crimes have the following features. First, any such crime has a mass character and therefore requires specific regulation in international criminal law and procedure and in the national criminal law and procedure of different countries. Second, the victims of such crimes are usually children, women and old people; entire national, ethnic, racial or religious groups are destroyed. These features influence the classification of victims by the age criterion. Article 68 of the Rome Statute provides for protection of the safety, physical and psychological well-being, dignity and privacy of victims and witnesses, and thus determines the procedural status of these persons. However, not all the persons whose rights have been violated by the commission of these crimes acquire the status of victims. This is due to the fact that such crimes affect a huge number of persons, and it is impossible to mention them all by name. It is also difficult to assess the entire damage suffered by the victims. While assessing the amount of damages, it is essential to take into account physical and moral harm, as well as property damage. The procedural status of victims thus gains an exclusive character. In order to determine the full extent of the damage suffered by the victims, it is necessary to collect sufficient evidence.
However, it is extremely difficult to collect evidence that would ensure the full and objective protection of the victims’ rights. While making requests for the collection of evidence, the International Criminal Court faces the problem of protection of national security information. The religious beliefs and family life of victims are also of great importance: in some Islamic countries, it is impossible to question a woman without her husband’s consent, which affects the objectivity of her testimony. Finally, the number of victims runs into the hundreds and thousands. The assessment of these elements demands time and highly qualified work. These factors justify the creation of a mechanism that would help to collect the evidence and establish the truth in international criminal proceedings. This mechanism would help to impose a just and appropriate punishment on persons accused of having committed a crime, since, in committing the crime, criminals could not have misunderstood the consequences of their criminal intent.

Keywords: crimes against humanity, evidence in international criminal proceedings, international criminal proceedings, protection of victims

Procedia PDF Downloads 222
244 The Desirable Construction of Urbanity in Spaces for Public Use

Authors: Giselly Barros Rodrigues, Carlos Leite de Souza

Abstract:

In recent years, there has been great discussion about urbanism, the right to the city, the search for public space, and people’s occupation and appropriation of the spaces of the city. This movement is happening all over the world, including in the great Brazilian metropolises. The more human-friendly city - the desirable construction of urbanity - as well as the encouragement of walking or bicycling over cars, is one of the major issues addressed by urban planners and a challenge in the process of reviewing regulatory frameworks. The fact is that even where there are public spaces, or spaces for public use in private areas, it is essential to have not only a project focused on people and the use of the space, but also good management, so as not to generate excessive control and, consequently, segregation between different ethnicities, classes or creeds. With the introduction of the Strategic Master Plan of Sao Paulo (2014), there are strong incentives to implement, in private spaces, mixed uses and active facades (services and commerce at the base of buildings); these incentives will generate a city for people in the medium and long term. This research seeks to discuss the extent to which these spaces are democratic, what people’s perceptions are of spaces for public use in private areas, and whether this perception matches what was originally idealized. For this study, we carried out bibliographic reviews and applied research in three case studies in Sao Paulo. Questionnaires were also administered to actors who gave answers regarding their perceptions and how they were approached in the places analyzed. After analyzing the material, it was verified that in all three case studies, sitting on the floor is prohibited.
In the two places on Paulista Avenue (Cetenco Plaza and the square of Mall Cidade Sao Paulo), there was no problem whatsoever in relation to the clothes or attitudes of the actors on the streets of Paulista Avenue in the city of Sao Paulo. This differed from what happened in the Itaim neighborhood (Brascan Century Plaza), which has more conservative characteristics, where the actors were heavily watched by security and observed by others because of their clothes and attitudes in that area. The city of Sao Paulo is slowly changing; people are increasingly seeking out quality places for public use in their daily lives. The Strategic Master Plan of Sao Paulo (2014) and the legislation approved in 2016 envision a more humane, people-oriented city in the future. It is up to the private sector, the public sector, and society to work together so that this vision becomes an abundant reality in every city, generating quality of life and urbanity for all.

Keywords: urbanity, space for public use, appropriation of space, segregation

Procedia PDF Downloads 209
243 Applying GIS Geographic Weighted Regression Analysis to Assess Local Factors Impeding Smallholder Farmers from Participating in Agribusiness Markets: A Case Study of Vihiga County, Western Kenya

Authors: Mwehe Mathenge, Ben G. J. S. Sonneveld, Jacqueline E. W. Broerse

Abstract:

Smallholder farmers are important drivers of agricultural productivity, food security, and poverty reduction in Sub-Saharan Africa. However, they face myriad challenges in their efforts to participate in agribusiness markets. How the geographically explicit factors existing at the local level interact to impede smallholder farmers' decision to participate (or not) in agribusiness markets is not well understood. Deconstructing the spatial complexity of the local environment could provide deeper insight into how geographically explicit determinants promote or impede resource-poor smallholder farmers from participating in agribusiness. This paper’s objective was to identify, map, and analyze local spatial autocorrelation in factors that impede poor smallholders from participating in agribusiness markets. Data were collected using geocoded researcher-administered survey questionnaires from 392 households in Western Kenya. Three spatial statistics methods in a geographic information system (GIS) were used to analyze the data: Global Moran’s I, Cluster and Outlier Analysis (Anselin Local Moran’s I), and geographically weighted regression. The results of Global Moran’s I reveal the presence of spatial patterns in the dataset that were not caused by spatial randomness. Subsequently, the Anselin Local Moran’s I results identified spatially and statistically significant local spatial clustering (hot spots and cold spots) in the factors hindering smallholder participation. Finally, the geographically weighted regression results identified the specific geographically explicit factors impeding market participation in the study area. The results confirm that geographically explicit factors are indispensable in influencing smallholder farming decisions, and policymakers should take cognizance of them.
Additionally, this research demonstrated how geospatially explicit analysis conducted at the local level, using geographically disaggregated data, could help in identifying the households and localities where the most impoverished and resource-poor smallholder households reside. In designing spatially targeted interventions, policymakers could benefit from geospatial analysis methods in understanding the complex geographic factors and processes that interact to influence smallholder farmers' decision-making processes and choices.
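As a concrete illustration of the Global Moran’s I statistic used in the first analysis step above, the following is a minimal sketch in Python with NumPy. The weights matrix and values are a tiny hypothetical example, not the study’s 392-household survey data:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I: positive for clustering of similar values,
    near zero for spatial randomness, negative for dispersion."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                 # deviations from the global mean
    numerator = n * (z @ w @ z)      # n * sum_ij w_ij * z_i * z_j
    denominator = w.sum() * (z @ z)  # W * sum_i z_i^2
    return numerator / denominator

# Four locations on a line; a 1 marks a pair of adjacent neighbours.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
clustered = [1.0, 1.0, 5.0, 5.0]   # similar values sit next to each other
dispersed = [1.0, 5.0, 1.0, 5.0]   # similar values are kept apart
print(morans_i(clustered, w))      # positive: spatial clustering
print(morans_i(dispersed, w))      # negative: spatial dispersion
```

In the study itself this statistic was computed in GIS software over the geocoded household data; the sketch only shows what the test measures.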

Keywords: agribusiness markets, GIS, smallholder farmers, spatial statistics, disaggregated spatial data

Procedia PDF Downloads 112
242 Data Mining in Healthcare for Predictive Analytics

Authors: Ruzanna Muradyan

Abstract:

Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have evolved dynamically, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By applying data mining techniques to this vast library, a variety of prospects for precision medicine, predictive analytics, and insight generation emerge. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluation are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatments are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the causes of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can gain practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract also explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to use these approaches properly, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models.
Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.

Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health

Procedia PDF Downloads 32
241 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect and prevent CHD symptoms early in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models.
The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
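The accuracy, precision, and recall figures reported above follow directly from confusion-matrix counts. A brief sketch, with hypothetical counts chosen only so the ratios land near the reported 92%/89%/91% (they are not the study’s actual validation numbers):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard metrics from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,   # all correct / all cases
        "precision": tp / (tp + fp),     # flagged cases truly at risk
        "recall": tp / (tp + fn),        # at-risk cases actually caught
    }

# Hypothetical validation counts, not taken from the paper.
m = classification_metrics(tp=91, fp=11, fn=9, tn=139)
print(m)  # accuracy 0.92, precision ~0.892, recall 0.91
```

The trade-off the abstract reports (precision slightly below recall) means the model misses few at-risk individuals at the cost of some false alarms, which is usually the preferred balance for screening applications.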

Keywords: coronary heart disease, cloud-based AI, machine learning, novel simulation techniques, early detection, preventive healthcare

Procedia PDF Downloads 25
240 Use of Geoinformatics and Mathematical Equations to Assess Erosion and Soil Fertility in Cassava Growing Areas in Maha Sarakham Province, Thailand

Authors: Sasirin Srisomkiew, Sireewan Ratsadornasai, Tanomkwan Tipvong, Isariya Meesing

Abstract:

Cassava is an important food source in the tropics and has recently gained attention as a potential source of biofuel that can replace limited fossil fuel sources. As a result, the demand for cassava production to support industries both within the country and abroad has increased. In Thailand, most farmers prefer to grow cassava in sandy and sandy loam areas where the soil has low natural fertility. Cassava is a tuber plant with large roots that store food, resulting in the absorption of large amounts of nutrients from the soil, such as nitrogen, phosphorus, and potassium. Therefore, planting cassava in the same area for a long period causes soil erosion and decreases soil fertility. The loss of soil fertility affects the economy, society, and the food and energy security of the country. It is therefore necessary to know the level of soil fertility and the amount of nutrients in the soil. To address this problem, this study applies geoinformatics technology and mathematical equations to assess erosion and soil fertility and to analyze factors affecting the amount of cassava production in Maha Sarakham Province. The results show that the area under cassava cultivation increased in every district of Maha Sarakham Province between 2015 and 2022, with the total area growing to 180,922 rai, or 5.47% of the province’s total area, during this period. Furthermore, it was found to be possible to identify areas with soil erosion problems: the most affected areas showed a moderate level of erosion, with rates ranging from 5-15 T/rai/year. Soil fertility assessment and information obtained from the soil nutrient map for 2015-2023 reveal that farmers in the area have improved the soil by adding chemical fertilizers along with organic fertilizers, such as manure and green manure, to increase the amount of nutrients in the soil.
This is because the soil resources of Maha Sarakham Province mostly have relatively low agricultural potential due to the soil texture being sand and sandy loam. In this scenario, the ability to absorb nutrients is low, and the soil holds little water, so it is naturally low in fertility. Moreover, agricultural soil problems were found, including the presence of saline soil, sandy soil, and acidic soil, which is a serious restriction on land use because it affects the release of nutrients into the soil. The results of this study may be used as a guideline for managing soil resources and improving soil quality to prevent soil degradation problems that may occur in the future.
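The abstract does not name the erosion equation used; assessments of this kind commonly rely on the Universal Soil Loss Equation (USLE), A = R x K x LS x C x P. A minimal sketch under that assumption, with purely illustrative factor values rather than the study’s calibrated inputs:

```python
def usle_soil_loss(r, k, ls, c, p):
    """Universal Soil Loss Equation: average annual soil loss A.
    r  = rainfall erosivity, k = soil erodibility,
    ls = slope length-steepness factor,
    c  = cover-management factor, p = support-practice factor."""
    return r * k * ls * c * p

# Illustrative values only: a sandy loam on a gentle slope under cassava.
a = usle_soil_loss(r=400.0, k=0.05, ls=0.8, c=0.25, p=1.0)
print(a)  # soil loss in the units implied by R and K (e.g. per rai per year)
```

Because the factors multiply, halving the cover-management factor C (for example, by mulching or green manure) halves the predicted soil loss, which is why soil-improvement practices figure prominently in the study’s findings.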

Keywords: cassava, geoinformatics, soil erosion, soil fertility, land use change

Procedia PDF Downloads 20
239 Enhancing Seismic Resilience in Urban Environments

Authors: Beatriz González-Rodrigo, Diego Hidalgo-Leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-Martínez, Laura Navas-Sánchez, Belén Orta, Nicola Tarque, Orlando Hernández-Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-Oterino

Abstract:

Cities facing seismic hazard necessitate detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at the neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events, and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the countries involved. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions.
Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.
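The fragility-curve adjustments mentioned above build on the standard lognormal fragility model used in seismic risk assessment. A minimal sketch of that model follows; the parameter values are generic illustrations, not the project’s calibrated medians or dispersions:

```python
from math import erf, log, sqrt

def fragility(im, theta, beta):
    """Lognormal fragility curve: probability that a building reaches a
    given damage state at ground-motion intensity `im`, with median
    capacity `theta` and logarithmic dispersion `beta`."""
    z = log(im / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF of z

# At the median capacity the damage probability is exactly 0.5.
# Behaviour modifiers (soft storeys, added floors, poor maintenance)
# would be expressed as shifts in theta or changes in beta.
print(fragility(im=0.30, theta=0.30, beta=0.6))  # 0.5
print(fragility(im=0.60, theta=0.30, beta=0.6))  # well above 0.5
```

Adjusting theta downward for a behaviour modifier steepens the risk at every intensity level, which is how detailed building attributes feed into the risk scenarios the framework proposes.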

Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability

Procedia PDF Downloads 41
238 Deployment of Armed Soldiers in European Cities as a Source of Insecurity among Czech Population

Authors: Blanka Havlickova

Abstract:

In the last ten years, growing numbers of troops with machine guns have been serving on the streets of European cities. We can see them around government buildings, major transport hubs, synagogues, galleries and main tourist landmarks. Authorities declare that the main purposes of the armed soldiers’ presence in European cities are the prevention of terrorist attacks and psychological reassurance for tourists and the domestic population. The main objective of the following study is to find out whether the deployment of armed soldiers in European cities has a calming and reassuring effect on Czech citizens (whether the presence of armed soldiers makes the Czech population feel more secure) or rather becomes a stress factor (the presence of soldiers standing guard in full military fatigues recalls serious criminality and terrorist attacks, which is reflected in the fears and insecurity of the Czech population). The initial hypothesis of this study is connected with priming theory: the idea that when we are exposed to an image (an armed soldier), it makes us unconsciously focus on a topic connected with this image (terrorism). This paper is based on a quantitative public survey, which was carried out in the form of electronic questioning among the citizens of the Czech Republic. Respondents answered 14 questions about two European cities - London and Paris. Besides general questions investigating the respondents' awareness of these cities, some of the questions focused on the fear that the respondents felt when picturing themselves leaving next Monday for the given city (London or Paris). The questions asking about respondents’ travel fears and concerns were accompanied by different photos.
When answering the question about fear, some respondents were presented with a photo of the Westminster Palace or the Eiffel Tower with ordinary citizens, while other respondents were presented with a picture of the Westminster Palace or the Eiffel Tower not only with ordinary citizens but also with one soldier holding a machine gun. The main goal of this paper is to analyse and compare data about the concerns of these two groups of respondents (presented with different pictures) and to find out if and how an armed soldier with a machine gun in front of the Westminster Palace or the Eiffel Tower affects the public's concerns about visiting the site. In other words, the aim of this paper is to confirm or rebut the hypothesis that the sight of a soldier with a machine gun in front of the Eiffel Tower or the Westminster Palace automatically triggers an association with a terrorist attack, leading to an increase in fear and insecurity among the Czech population.

Keywords: terrorism, security measures, priming, risk perception

Procedia PDF Downloads 225
237 Artificial Intelligence Based Method in Identifying Tumour Infiltrating Lymphocytes of Triple Negative Breast Cancer

Authors: Nurkhairul Bariyah Baharun, Afzan Adam, Reena Rahayu Md Zin

Abstract:

The tumor microenvironment (TME) in breast cancer is mainly composed of cancer cells, immune cells, and stromal cells. The interaction between cancer cells and their microenvironment plays an important role in tumor development, progression, and treatment response. The TME in breast cancer includes tumor-infiltrating lymphocytes (TILs), which are implicated in killing tumor cells. TILs can be found in the tumor stroma (sTILs) and within the tumor (iTILs). TILs in triple negative breast cancer (TNBC) have been demonstrated to have prognostic and potentially predictive value. The International Immuno-Oncology Biomarker Working Group (TIL-WG) has developed a guideline focused on the assessment of sTILs using hematoxylin and eosin (H&E)-stained slides. According to the guideline, pathologists use an “eyeballing” method on the H&E-stained slide for sTILs assessment. This method has low precision and poor interobserver reproducibility, is time-consuming for a comprehensive evaluation, and only counts sTILs. The TIL-WG has therefore recommended that any algorithm for computational assessment of TILs utilize the guidelines provided, to overcome the limitations of manual assessment and thus provide highly accurate and reliable TILs detection and classification for reproducible and quantitative measurement. This study was carried out to develop a TNBC digital whole slide image (WSI) dataset from H&E-stained slides and IHC (CD4+ and CD8+) stained slides. TNBC cases were retrieved from the database of the Department of Pathology, Hospital Canselor Tuanku Muhriz (HCTM). TNBC cases diagnosed between 2010 and 2021, with no history of other cancer and with tissue blocks available, were included in the study (n=58). Tissue blocks were sectioned at approximately 4 µm for H&E and IHC staining. The H&E staining was performed according to a well-established protocol.
Indirect IHC staining was also performed on the tissue sections using the protocol from the Diagnostic BioSystems PolyVue™ Plus Kit, USA. The slides were stained with a rabbit monoclonal CD8 antibody (SP16) and a rabbit monoclonal CD4 antibody (EP204). The selected and quality-checked slides were then scanned using a high-resolution whole slide scanner (Pannoramic DESK II DW slide scanner) to digitalize the tissue image at 20x magnification. A manual TILs (sTILs and iTILs) assessment was then carried out by two appointed pathologists, who scored TILs from the digital WSIs following the guideline developed by the TIL-WG in 2014; the results are expressed as the percentage of sTILs and iTILs per mm² of stromal and tumour area on the tissue. Following this, we aimed to develop an automated digital image scoring framework that incorporates key elements of the manual guidelines (including both sTILs and iTILs), using manually annotated data for robust and objective quantification of TILs in TNBC. From the study, we have developed a digital dataset of TNBC H&E and IHC (CD4+ and CD8+) stained slides. We hope that an automated scoring method can provide quantitative and interpretable TILs scoring that correlates with the manual pathologist-derived sTILs and iTILs scoring and thus has potential prognostic implications.
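Conceptually, the TIL-WG sTILs score is the percentage of the stromal compartment occupied by lymphocytes. A minimal sketch of how such a score could be computed from annotated segmentation masks (the function name, toy masks, and values are hypothetical, for illustration only, not the framework under development):

```python
import numpy as np

def stils_score(stroma_mask: np.ndarray, lymphocyte_mask: np.ndarray) -> float:
    """Percentage of the stromal area occupied by lymphocytes (sTILs %)."""
    stromal_area = stroma_mask.sum()
    if stromal_area == 0:
        return 0.0
    tils_in_stroma = np.logical_and(stroma_mask, lymphocyte_mask).sum()
    return 100.0 * tils_in_stroma / stromal_area

# toy 4x4 masks: half the image is stroma; a quarter of that stroma holds TILs
stroma = np.zeros((4, 4), dtype=bool); stroma[:, :2] = True  # 8 px of stroma
tils = np.zeros((4, 4), dtype=bool); tils[:2, 0] = True      # 2 px of lymphocytes
print(stils_score(stroma, tils))  # 25.0
```

In a real WSI pipeline the masks would come from tissue segmentation and cell detection at the scanned resolution; the same ratio logic would then be applied per mm² of annotated stromal and tumour area.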

Keywords: automated quantification, digital pathology, triple negative breast cancer, tumour infiltrating lymphocytes

Procedia PDF Downloads 81
236 Reclaiming The Sahara as a Bridge to Afro-Arab solidarity

Authors: Radwa Saad

Abstract:

The Sahara is normatively treated as a barrier separating "two Africas": one to the North with closer affinity to the Arab world, and one to the South that encompasses a diverse range of racial, ethnic, and religious groups, commonly referred to as "Sub-Saharan Africa". This dichotomy, however, was challenged by many anticolonial leaders and intellectuals seeking to advance counter-hegemonic narratives that treat the Sahara as a bridge facilitating a long history of exchange, collaboration, and fusion between different civilizations on the continent. This paper reexamines the discourses governing the geographic distinction between North Africa and Sub-Saharan Africa. It argues that demarcating the African continent along the lines of the Sahara is part and parcel of a Euro-centric spatial imaginary that has served to enshrine a racialized global hierarchy of power. By drawing on Edward Said’s concept of ‘imagined geography’ and Charles Mills’ notion of ‘the racial contract’, it demonstrates how spatial boundaries often coincide with racial epistemologies to reinforce certain geopolitical imaginaries whilst silencing others. It further draws on the works of two notable post-colonial figures, Gamal Abdel Nasser and Leopold Senghor, to explore alternative spatial imaginaries while highlighting some of the tensions embedded in advancing a trans-Saharan political project. First, it deconstructs some of the normative claims used to justify the distinction between North and “sub-Saharan” Africa across political, literary, and disciplinary boundaries. Second, it draws parallels between Said’s and Mills’ work to demonstrate how geographical boundaries and demarcations have been constructed to create racialized subjects and reinforce a hierarchy of color that favors European standpoints and epistemologies.
Third, it draws on Leopold Senghor’s The Foundations of Africanité and Gamal Abdel Nasser’s The Philosophy of the Egyptian Revolution to examine some of the competing strands of unity that emerged out of the Saharan discourse. In these texts, one can identify a number of convergences and divergences in how post-colonial African elites attempted to reclaim and rearticulate the function of the Sahara along different epistemic, political, and cultural premises. The paper concludes with reflections on some of the policy challenges that emerge from reinforcing the Saharan divide, particularly in the realm of peace and security.

Keywords: regional integration, politics of knowledge production, arab-african relations, african solutions to african problems

Procedia PDF Downloads 55
235 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address these problems, a new SAC FO-CDMA network using partial M-sequence (PMS) codes is presented in this study. Because the proposed PMS code is derived from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network and enhances the overall network performance. With system flexibility in mind, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler that broadcasts the transmitted optical signals to the input port of each optical receiver; the parameter N for the PMS code is the code length. In addition, the proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same.
Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' is equally probable. Subsequently, by taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network performs about 25% better than networks using other codes at BER = 10⁻⁹. This is because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a promising candidate for next-generation optical networks.
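The MUI suppression described above hinges on codewords with equal weight and a fixed, small in-phase cross-correlation. A minimal sketch of this M-sequence property (the length N=7 and the x³+x+1 recurrence are illustrative choices for the parent M-sequence, not the paper's actual PMS design): cyclic shifts of one period all have weight (N+1)/2 and a constant pairwise IPCC of (N+1)/4.

```python
import numpy as np

def m_sequence(length=7, seed=(1, 0, 0)):
    """One period of the m-sequence for x^3 + x + 1: a[n] = a[n-2] XOR a[n-3]."""
    seq = list(seed)
    while len(seq) < length:
        seq.append(seq[-2] ^ seq[-3])
    return np.array(seq)

base = m_sequence()                           # 1 0 0 1 0 1 1, weight (N+1)/2 = 4
codes = [np.roll(base, k) for k in range(7)]  # cyclic shifts as SAC codewords

weights = {int(c.sum()) for c in codes}
ipcc = {int(codes[0] @ codes[k]) for k in range(1, 7)}  # in-phase cross-correlation
print(weights, sorted(ipcc))  # {4} [2] -> fixed IPCC of (N+1)/4
```

It is this fixed, shift-independent overlap that a balanced-detection receiver can subtract out, which is why code families inheriting the M-sequence correlation structure suppress MUI.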

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 352
234 Towards Development of Superior Brassica juncea by Pyramiding of Genes of Diverse Pathways for Value Addition, Stress Alleviation and Human Health

Authors: Deepak Kumar, Ravi Rajwanshi, Mohd. Aslam Yusuf, Nisha Kant Pandey, Preeti Singh, Mukesh Saxena, Neera Bhalla Sarin

Abstract:

Global issues are leading to concerns over food security. These include climate change, urbanization, and population growth, which in turn lead to greater energy and water demand. A futuristic approach to crop improvement involves gene pyramiding for agronomic traits that empower plants to withstand multiple stresses. In an earlier study from the laboratory, overexpressing the γ-tocopherol methyl transferase (γ-TMT) gene from the vitamin E biosynthetic pathway was shown to result in a six-fold increase of the most biologically active form, α-tocopherol, in Brassica juncea, which alleviated salt, heavy metal, and osmoticum-induced stress in the transgenic plants. The glyoxalase I (gly I) gene from the glyoxalase pathway has also been shown by us to impart tolerance against multiple abiotic stresses by detoxifying the cytotoxic compound methylglyoxal in Brassica juncea. Recently, both transgenes were pyramided in Brassica juncea lines through sexual crosses involving two stable Brassica juncea lines overexpressing the γ-TMT and gly I genes, respectively. Transgene integration was confirmed by PCR analysis, and mRNA expression was evident by RT-PCR analysis. Preliminary physiological investigations showed ~55% higher seed germination under 200 mM NaCl stress in the pyramided line and 81% higher seed germination under 200 mM mannitol stress, compared with the WT control plants. The pyramided lines also retained more chlorophyll when leaf discs were floated on NaCl (200, 400 and 600 mM) or mannitol (200, 400 and 600 mM), compared with the WT control plants. These plants had higher relative water content and greater solute accumulation under stress than the parental plants carrying the γ-TMT or gly I gene alone. The studies revealed the synergy of two components from different metabolic pathways in enhancing the stress hardiness of the transgenic B. juncea plants.
It was concluded that pyramiding of genes (γ-TMT and glyI) from diverse pathways can lead to enhanced tolerance to salt and mannitol stress (simulating drought conditions). This strategy can prove useful in enhancing the crop yields under various abiotic stresses.

Keywords: abiotic stress, brassica juncea, glyoxalase I, α-tocopherol

Procedia PDF Downloads 518
233 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy’s attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or crews. The penetration equations are derived from penetration experiments, which require a long time and great effort. Moreover, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, modeling the targets and selecting the input parameters properly is crucial to obtaining an accurate penetration depth. This paper performs a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be balanced when adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties, and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data taken from papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis yielded the following findings: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase.
2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than the one with both the side and rear surfaces fixed. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiment. With ANSYS and delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for a limited set of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating between known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling and simulation stage of the AGCV design process.
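The accuracy criterion above reduces to a root-mean-square error between simulated and measured penetration depths, evaluated per candidate parameter setting. A minimal sketch of that comparison loop (all depths, mesh sizes, and the three-point experiment set are hypothetical, not the paper's data):

```python
import numpy as np

def rms_error(simulated, experimental):
    """Root-mean-square error between simulated and measured penetration depths."""
    s, e = np.asarray(simulated, float), np.asarray(experimental, float)
    return float(np.sqrt(np.mean((s - e) ** 2)))

# hypothetical depths (mm) for three candidate mesh sizes vs. one experiment set
experiment = [42.0, 55.0, 63.0]
for mesh_mm, sim in [(0.9, [39.0, 51.0, 60.0]),
                     (0.7, [41.0, 53.5, 62.0]),
                     (0.5, [41.8, 54.6, 62.9])]:
    print(mesh_mm, round(rms_error(sim, experiment), 3))
```

The mesh size (or any other input parameter) minimizing this error would then be weighed against its calculation time, reflecting the accuracy-versus-time trade-off the paper describes.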

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 362
232 Working Conditions and Occupational Health: Analyzing the Stressing Factors in Outsourced Employees

Authors: Cledinaldo A. Dias, Isabela C. Santos, Marcus V. S. Siqueira

Abstract:

In contemporary globalization, the competitiveness generated by the search for new markets, aiming at growth in productivity and, consequently, in profits, implies the redefinition of productive processes and new forms of work organization. This restructuring results in unemployment, labor force turnover, and an increase in outsourcing and informal work. Considering the distinct relationships and working conditions of outsourced employees, this study aims to identify the stressors most present among outsourced service providers at a Federal Institution of Higher Education in Brazil. To reach this objective, a descriptive exploratory study with a quantitative approach was carried out, complemented by a qualitative perspective intended to provide an in-depth analysis of the occupational conditions of outsourced workers, since that method focuses on the social as a world of investigated meanings and on the language or speech of each subject. The survey was conducted in the city of Montes Claros, Minas Gerais (Brazil) and involved eighty workers from companies hired by the institution, including armed security guards, porters, cleaners, drivers, gardeners, and administrative assistants. The professionals were chosen by non-probabilistic criteria of convenience or accessibility. Data were collected by means of a structured questionnaire composed of sixty questions in a Likert-type frequency scale format, in order to identify potential organizational stressors. The results show that the stress factors pointed out by the workers are, in most cases, determining factors in low productive performance at work. Among the factors associated with stress, the ones that stood out most were those related to organizational communication failures, the incentive to competition, lack of expectations of professional growth, insecurity, and job instability.
Based on the results, there is a need for greater concern and organizational responsibility for the well-being and mental health of outsourced workers, for recognition of their physical and psychological limitations, and for care that goes beyond their functional capacity for work. Specifically for the preservation of mental health, physical health, and quality of life, it is concluded that professionals need to be inserted in a work environment that favors their internal balance, so that they can remain in equilibrium and obtain satisfaction in their work.

Keywords: occupational health, outsourced, organizational studies, stressors

Procedia PDF Downloads 73
231 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience

Authors: Joel Thomas, Alexa Squirini

Abstract:

The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience, and arguably should become a focal point for standardized measurement of communities’ ability to work together. Few resilience research efforts have focused on the development and application of solutions that measurably improve communities’ ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those that have had a regional impact. The key findings of the research include a unique theoretical, mathematical, and operational approach to tangibly and defensibly measure and assess systems interoperability required to support crisis information management activities performed by governments, the private sector, and humanitarian organizations. A most effective way for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community’s readiness and ability to work together in times of crisis. Thus, improving communities’ human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. This model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. 
The unique measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness and regional disaster resilience, and offers a means by which to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'
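Characterizing interoperability as five forms of capital naturally suggests a composite index that can be re-scored over the course of a multi-year program. As an illustrative sketch only (the paper's actual scoring model is not reproduced here; the equal weights, 0-100 scale, and all ratings below are hypothetical):

```python
# the five forms of capital named in the abstract
CAPITALS = ("governance", "sops", "technology", "training", "usage")

def interoperability_score(ratings, weights=None):
    """Weighted mean of 0-100 ratings across the five capitals (illustrative)."""
    if weights is None:
        weights = {c: 1.0 for c in CAPITALS}   # equal weighting by default
    total_w = sum(weights[c] for c in CAPITALS)
    return sum(ratings[c] * weights[c] for c in CAPITALS) / total_w

# hypothetical baseline vs. year-two assessments for one community
baseline = dict(governance=40, sops=55, technology=60, training=30, usage=45)
year_two = dict(governance=55, sops=60, technology=70, training=50, usage=60)
print(interoperability_score(baseline), interoperability_score(year_two))  # 46.0 59.0
```

Tracking such an index per community would give a concrete form to the question 'Have we measurably improved crisis information management capabilities as a result of this effort?'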

Keywords: disaster, interoperability, measurement, resilience

Procedia PDF Downloads 116
230 Effectiveness of an Intervention to Increase Physics Students' STEM Self-Efficacy: Results of a Quasi-Experimental Study

Authors: Stephanie J. Sedberry, William J. Gerace, Ian D. Beatty, Michael J. Kane

Abstract:

Increasing the number of US university students who attain degrees in STEM and enter the STEM workforce is a national priority. Demographic groups vary in their rates of participation in STEM, and the US produces just 10% of the world’s science and engineering degrees (2014 figures). To address these gaps, we have developed and tested a practical, 30-minute, single-session classroom-based intervention to improve students’ self-efficacy and academic performance in university STEM courses. Self-efficacy is an internal psychosocial construct, relating to the social, emotional, and psychological aspects of student motivation and performance, that strongly correlates with academic success. A compelling body of research demonstrates that university students’ self-efficacy beliefs are strongly related to their selection of STEM as a major, aspirations for STEM-related careers, and persistence in science. The development of an intervention to increase students’ self-efficacy is motivated by research showing that short social-psychological interventions in education can lead to large gains in student achievement. Our intervention addresses STEM self-efficacy via two strong, but previously separate, lines of research into attitudinal/affect variables that influence student success. The first is ‘attributional retraining,’ in which students learn to attribute their successes and failures to internal rather than external factors. The second is ‘mindset’ about fixed vs. growable intelligence, in which students learn that the brain remains plastic throughout life and that they can, with conscious effort and attention to thinking skills and strategies, become smarter. Existing interventions for both of these constructs have significantly increased academic performance in the classroom.
We developed a 34-item questionnaire (Likert scale) to measure STEM Self-Efficacy, Perceived Academic Control, and Growth Mindset in a university STEM context, and validated it with exploratory factor analysis, Rasch analysis, and multi-trait multi-method comparison to coded interviews. Four iterations of our 42-week research protocol were conducted across two academic years (2017-2018) at three different universities in North Carolina, USA (UNC-G, NC A&T SU, and NCSU) with varied student demographics. We utilized a quasi-experimental prospective multiple-group time series research design with both experimental and control groups, and we are employing linear modeling to estimate the impact of the intervention on Self-Efficacy, Growth Mindset, Perceived Academic Control, and final course grades (performance measure). Preliminary results indicate statistically significant effects of treatment vs. control on Self-Efficacy, Growth Mindset, and Perceived Academic Control. Analyses are ongoing and final results are pending. This intervention may have the potential to increase student success in the STEM classroom, and ownership of that success, to continue in a STEM career. Additionally, we have learned a great deal about the complex components and dynamics of self-efficacy, their link to performance, and the ways they can be influenced to improve students’ academic performance.
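In its simplest form, the treatment-vs-control contrast in the linear modeling step amounts to regressing the outcome on a binary treatment indicator, where the fitted slope equals the difference in group means. A sketch under that simplification (the scores below are hypothetical; the study's actual models include the multiple-group time-series structure not shown here):

```python
import numpy as np

def treatment_effect(scores_treat, scores_ctrl):
    """OLS estimate of the treatment effect: regress outcome on a 0/1 indicator.
    With a single binary predictor, the slope equals the difference in means."""
    y = np.concatenate([scores_treat, scores_ctrl])
    x = np.concatenate([np.ones(len(scores_treat)), np.zeros(len(scores_ctrl))])
    X = np.column_stack([np.ones_like(x), x])   # intercept + treatment dummy
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                              # estimated treatment effect

# hypothetical post-intervention self-efficacy scores (1-5 Likert-scale means)
treat = np.array([4.1, 3.8, 4.4, 4.0])
ctrl = np.array([3.5, 3.6, 3.9, 3.4])
print(round(treatment_effect(treat, ctrl), 3))  # 0.475
```

Extending this design matrix with time and covariate columns is what turns the two-group comparison into the full linear model the abstract refers to.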

Keywords: academic performance, affect variables, growth mindset, intervention, perceived academic control, psycho-social variables, self-efficacy, STEM, university classrooms

Procedia PDF Downloads 109
229 Examining the Mediating and Moderating Role of Relationships in the Association between Poverty and Children’s Subjective Well-Being

Authors: Esther Yin-Nei Cho

Abstract:

There is inconsistency among studies about whether there is an association between poverty and the subjective well-being of children: some have found a positive association, though its magnitude could be limited; others have shown no association. One possible explanation for this inconsistency is that household income, an often-adopted measure of child poverty, may not accurately and stably reflect the actual life experience of children. Some studies have suggested, however, that material deprivation covering various dimensions of children’s lives could be a better measure of child poverty. Another possible explanation for the inconsistency is that the link between poverty and the subjective well-being of children may not be that straightforward, as there could be underlying mechanisms, such as mediation and moderation, influencing its direction or strength. While a mediator refers to the mechanism through which an independent variable affects a dependent variable, a moderator changes the direction or strength of the relationship between an independent variable and a dependent variable. As suggested by empirical evidence, family relationships and friendships could be potential mediators or moderators of the link between poverty and subjective well-being: poverty affects relationships; relationships are an important element in children’s subjective well-being; and economic status affects child outcomes, though not necessarily subjective well-being, through relationships. Since these potential links have not been adequately understood, this study fills the gap by examining the possible role of family relationships and friendships as mediators or moderators between poverty (using child-derived material deprivation as the measure) and the subjective well-being of children. Improving subjective well-being is increasingly considered a policy goal.
The finding of no or only a limited association between poverty and the subjective well-being of children could be a justification for less effort to alleviate poverty in this regard. But if the observed magnitude of that association is due to underlying mechanisms at work, the effect of poverty may be underestimated, and potentially useful strategies that take into account both poverty and other mediators or moderators for improving children’s subjective well-being may be overlooked. Multiple mediation and multiple moderation models, based on regression analyses, were applied to a sample of approximately 1,600 children aged 10 to 15 from the well-being survey conducted by The Children’s Society in England from 2010 to 2011. Results show that the effect of children’s material deprivation on their subjective well-being is mediated by their family relationships and friendships. Moreover, family relationships are a significant moderator: the negative impact of child deprivation on subjective well-being could be exacerbated if family relationships are not going well, while good family relationships may prevent further decline in subjective well-being. Policy implications of the findings are discussed. In particular, policy measures that focus on strengthening family relationships or nurturing the home environment, through supporting households’ economic security and parental time with children, could promote the subjective well-being of children.
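Regression-based mediation of the kind described can be sketched as two OLS fits: deprivation predicting relationships (path a), and relationships predicting well-being while controlling for deprivation (path b), with the indirect effect estimated as a·b. All data below are synthetic, generated so the true mechanism is known; the variables and effect sizes are illustrative, not the study's:

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 1600  # matches the abstract's approximate sample size
deprivation = rng.normal(size=n)
# synthetic mechanism: deprivation harms relationships; relationships lift well-being
relationships = -0.5 * deprivation + rng.normal(size=n)
wellbeing = -0.2 * deprivation + 0.6 * relationships + rng.normal(size=n)

a = ols(deprivation[:, None], relationships)[1]                       # X -> mediator
b = ols(np.column_stack([deprivation, relationships]), wellbeing)[2]  # M -> Y given X
total = ols(deprivation[:, None], wellbeing)[1]
print(round(a * b, 2), round(total, 2))  # indirect ≈ -0.30, total ≈ -0.50
```

A moderation model would instead add a deprivation × relationships interaction column to the design matrix and test its coefficient, which is how the exacerbating or protective role of family relationships is assessed.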

Keywords: child poverty, mediation, moderation, subjective well-being of children

Procedia PDF Downloads 296
228 Nurturing Resilient Families: Strategies for Positive Parenting and Emotional Well-Being

Authors: Xu Qian

Abstract:

This abstract explores the importance of building resilience within families and offers evidence-based strategies for promoting positive parenting and enhancing emotional well-being. It emphasizes the role of effective communication, conflict resolution, and fostering a supportive environment to strengthen family bonds and promote healthy child development. Introduction: The well-being and resilience of families play a crucial role in fostering healthy child development and promoting overall emotional well-being. This abstract highlights the significance of nurturing resilient families and provides evidence-based strategies for positive parenting. By focusing on effective communication, conflict resolution, and creating a supportive environment, families can strengthen their bonds and enhance emotional well-being for both parents and children. Methods: This abstract draws upon a comprehensive review of existing research and literature on resilient families, positive parenting, and emotional well-being. The selected studies employ various methodologies, including surveys, interviews, and longitudinal observations, to investigate the factors contributing to family resilience and the strategies that promote positive parenting practices. The findings from these studies serve as the foundation for the strategies discussed in this abstract. Results: The results of the reviewed studies demonstrate that effective communication within families is a key factor in building resilience and promoting emotional well-being. Open and honest communication allows family members to express their thoughts, feelings, and concerns, fostering trust and understanding. Conflict resolution skills, such as active listening, compromise, and problem-solving, are vital in managing conflicts constructively and preventing negative consequences on family dynamics and children's well-being. 
Creating a supportive environment that nurtures emotional well-being is another critical aspect of promoting resilient families. This includes providing emotional support, setting clear boundaries, and promoting positive discipline strategies. Research indicates that consistent and responsive parenting approaches contribute to improved self-regulation skills, emotional intelligence, and overall mental health in children. Discussion: The discussion centers on the implications of these findings for promoting positive parenting and emotional well-being. It emphasizes the need for parents to prioritize self-care and seek support when facing challenges. Parental well-being directly influences the quality of parenting and the overall family environment. By attending to their own emotional needs, parents can better meet the needs of their children and create a nurturing atmosphere. Furthermore, the importance of fostering resilience in children is highlighted. Resilient children are better equipped to cope with adversity, adapt to change, and thrive in challenging circumstances. By cultivating resilience through supportive relationships, encouragement of independence, and providing opportunities for growth, parents can foster their children's ability to bounce back from setbacks and develop essential life skills. Conclusion: In conclusion, nurturing resilient families is crucial for positive parenting and enhancing emotional well-being. This abstract presents evidence-based strategies that emphasize effective communication, conflict resolution, and creating a supportive environment. By implementing these strategies, parents can strengthen family bonds, promote healthy child development, and enhance overall family resilience. Investing in resilient families not only benefits individual family members but also contributes to the well-being of the broader community.

Keywords: childrearing families, family education, children's mental health, positive parenting, emotional health

Procedia PDF Downloads 60
227 The Learning Loops in the Public Realm Project in South Verona: Air Quality and Noise Pollution Participatory Data Collection towards Co-Design, Planning and Construction of Mitigation Measures in Urban Areas

Authors: Massimiliano Condotta, Giovanni Borga, Chiara Scanagatta

Abstract:

Urban systems are places where the various actors involved interact and come into conflict, in particular around topics such as traffic congestion and security. Air and noise pollution are also recurring topics of discussion, and often of clashes, because of their strong complexity. For air pollution, the complexity stems from the fact that atmospheric pollution is due to many factors, but above all from the fact that observing and measuring the amount of pollution in a transparent, mobile, and ethereal element like air is very difficult. The condition perceived by inhabitants often does not coincide with the real conditions, because it is conditioned, sometimes positively and sometimes negatively, by many other factors such as the presence or absence of natural elements like trees or rivers. The same problems are seen with noise pollution, which receives less attention as an issue even though it is just as problematic as air quality. Starting from these opposing positions, it is difficult to identify and implement valid, and at the same time shared, mitigation solutions for the problem of urban pollution (air and noise pollution). The LOOPER (Learning Loops in the Public Realm) project described in this paper aims to build and test a methodology and a platform for a participatory co-design, planning, and construction process inside a learning loop. The novelties of this approach are various; the most relevant are three. The first is that citizen participation starts from the identification of problems and from air quality analysis through participatory data collection, and continues through all process steps (design and construction). The second is that the methodology is characterized by a learning loop process.
This means that after the first cycle of (1) problem identification, (2) planning and definition of design solutions, and (3) construction and implementation of mitigation measures, the effectiveness of the implemented solutions is measured and verified through a new participatory data collection campaign. In this way, it is possible to understand whether the policies and design solutions had a positive impact on the territory. As a result of the learning produced by the first loop, it will be possible to improve the design of the mitigation measures and start a second loop with new and more effective measures. The third relevant aspect is that citizens' participation is carried out via Urban Living Labs that involve all stakeholders of the city (citizens, public administrators, associations of all urban stakeholders, etc.) and that last for the entire cycle of the design, planning and construction process. The paper will describe in detail the LOOPER methodology and the technical solutions adopted for the participatory data collection, design and construction phases.
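The learning loop described above can be sketched in a few lines. This is an illustrative sketch only, not project code: the function names, the reduction of pollution to a single numeric reading, and the 10% improvement threshold are assumptions made here for clarity, not part of the LOOPER specification.

```python
# Hypothetical sketch of a LOOPER-style learning loop: measure, co-design,
# build, re-measure, and keep iterating while the measures are ineffective.

def run_learning_loop(collect_data, design_measures, implement, max_loops=3,
                      improvement_threshold=0.10):
    """Run up to max_loops cycles; stop early once a cycle reduces the
    measured pollution level by at least improvement_threshold (relative)."""
    baseline = collect_data()                    # participatory data collection
    history = []
    for loop in range(max_loops):
        measures = design_measures(baseline)     # co-design with citizens
        implement(measures)                      # construction phase
        follow_up = collect_data()               # verification campaign
        improvement = (baseline - follow_up) / baseline
        history.append((loop, measures, improvement))
        if improvement < improvement_threshold:
            baseline = follow_up                 # learn, redesign, loop again
        else:
            break
    return history
```

The callbacks stand in for the Urban Living Lab activities, so the same skeleton covers both the first loop and any subsequent, redesigned ones.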

Keywords: air quality, co-design, learning loops, noise pollution, urban living labs

Procedia PDF Downloads 337
226 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by machine learning, precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefit for cohorts of patients. As the majority of machine learning algorithms come from heuristics, their outputs have only contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in molecular biology, bioinformatics, computational biology, and precise medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much of the available information as possible. The purpose of this paper is to present a deeper vision for the future of artificial intelligence in precise medicine. Current machine learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from these classical methods prevents obtaining 100% certainty in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in precise medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine, whether direct or indirect, as well as technical databases, natural language processing algorithms, and strong class-optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually used when machine learning algorithms must process differential genomic or molecular data to find biomarkers.
Also, even though the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool would. Instead, it deciphers the biological meaning of the input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until bio-logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is its high diagnostic accuracy. The diagnosis is delivered as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, a format highly suitable for therapy proposals and disease prevention. The MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is that it will set a strategic trend in precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better-designed clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 47
225 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies. It makes the court system accountable by ensuring that justice is administered in accordance with the laws. Equally important is privacy, a fundamental human right (Article 12 of the Universal Declaration of Human Rights). It is therefore important that the parties involved in these court decisions (especially minors, victims, or witnesses) be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization of unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test of the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to one estimate). Consequently, many Swiss courts publish only a fraction of their decisions. An automated anonymization system reduces these costs substantially, further creating capacity for publishing court decisions much more comprehensively. For the re-identification system, topic modeling with Latent Dirichlet Allocation is used to cluster over 500K Swiss court decisions into meaningful related categories.
A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. Each recognized named entity is replaced by the category determined by the NER model plus an identifier, to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design, ESRA, will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the cost of manually anonymizing court decisions and thus enables a more comprehensive publication practice.
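The replacement step of the anonymization system, substituting each recognized entity with its category plus an identifier so that repeated mentions of the same party keep the same placeholder, can be sketched as follows. This is a minimal illustration, not the ESRA implementation: the (start, end, label) span format for the NER output, the assumption that spans do not overlap, and the label names are all choices made here for the sketch.

```python
# Hypothetical sketch of entity replacement after NER: each (label, surface
# form) pair gets a stable placeholder such as PERSON_1, so context (who did
# what) is preserved while identities are removed.

def anonymize(text, entities):
    """entities: list of non-overlapping (start, end, label) character spans
    as produced by some NER model. Returns the pseudonymized text."""
    ids = {}        # (label, surface form) -> assigned placeholder
    counters = {}   # label -> running identifier count
    out, prev = [], 0
    for start, end, label in sorted(entities):
        surface = text[start:end]
        key = (label, surface)
        if key not in ids:
            counters[label] = counters.get(label, 0) + 1
            ids[key] = f"{label}_{counters[label]}"
        out.append(text[prev:start])    # untouched text before the entity
        out.append(ids[key])            # placeholder instead of the entity
        prev = end
    out.append(text[prev:])
    return "".join(out)
```

Because the mapping is keyed on the surface form, a repeated mention of the same person resolves to the same placeholder, which is what keeps the decision readable after anonymization.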

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 109
224 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions

Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju

Abstract:

Terrorist attacks in liberal democracies bring about several negative results, for example, sabotaged public support for the governments they target, disturbance of the peace of a protected environment underwritten by the state, and limits on individuals' ability to contribute to the advancement of the country, among others. Hence, seeking techniques to understand the different factors involved in terrorism, and how to deal with those factors in order to stop or reduce terrorist activities, is a top priority for the government of every country. This research aims to develop an efficient deep learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy of existing solution methods. The proposed AI-based predictive model, as a counterterrorism tool, will be useful to governments and law enforcement agencies for protecting lives and improving quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with Gaussian normally distributed errors. Three primary transfer functions (HOTTFs) and two derived transfer functions (HETTFs) arising from the convolution of the HOTTFs were used, namely: the Symmetric Saturated Linear transfer function (SATLINS), the Hyperbolic Tangent transfer function (TANH), the Hyperbolic Tangent Sigmoid transfer function (TANSIG), the Symmetric Saturated Linear and Hyperbolic Tangent transfer function (SATLINS-TANH), and the Symmetric Saturated Linear and Hyperbolic Tangent Sigmoid transfer function (SATLINS-TANSIG). Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean Square Error (MSE), Mean Absolute Error (MAE) and Test Error were the forecast evaluation criteria. The results showed that the HETTFs performed better in terms of prediction, and the factors associated with terrorist activities in Nigeria were determined.
The proposed deep learning-based predictive model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism for understanding the parameters of terrorism and for designing strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. It will reduce the chances of terrorist activities and is particularly helpful for security agencies in predicting future terrorist activities.

Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism

Procedia PDF Downloads 137
223 Effects of Land Certification in Securing Women’s Land Rights: The Case of Oromia Regional State, Central Ethiopia

Authors: Mesfin Nigussie Ibido

Abstract:

The study is designed to explore the effects of land certification in securing women's land rights in two rural villages of Robe district, Arsi Zone, Oromia regional state. Land is a critical asset for human survival and the backbone of rural women's livelihoods. Equal access to and control over land gives rural women the chance to participate in different economic activities and improves their bargaining power in decision making over their rights. Unfortunately, women were for centuries discriminated against and excluded from access to and control of land through customary practices. However, in many countries legal reform is used as a powerful tool for eliminating discriminatory provisions in property rights. Among other equity and efficiency concerns, the land certification program in Ethiopia attempts to address the gender bias of the current land-tenure system. The existing rural land policy recognizes women's land rights and has benefited women by strengthening wives' awareness of their land rights and contributing to their stronger involvement in decision making. However, harmful practices and policy implementation problems persist, and women in different areas of the country still do not fully exercise the land rights the policy provides. Thus, this study examines the effect of land certification in securing women's land rights by countering the discriminatory cultural abuses of the study areas. Probability and non-probability sampling were used, and the sample size was determined using the sampling distribution of the proportion method. Systematic random sampling was applied by taking every nth element of the sample frame. Both quantitative and qualitative research methods were applied; in the quantitative strand, a survey of 192 household respondents was conducted by administering questionnaires.
The qualitative method comprised interviews and focus group discussions with rural women, case stories, and consultations with village and relevant district offices. Triangulation was applied in data collection, data presentation and the analysis of findings. The study findings revealed that land certification has affected rural women positively by advancing their land rights, but some women in the study areas are still challenged by unsolved problems. The study puts forward recommendations on the existing problems and gaps to ensure women's equal access to and control over land in the study areas.
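The systematic random sampling step described above (a random start, then every nth element of the frame) is mechanical enough to sketch. This is an illustrative sketch, not the study's instrument: the function name and the choice of the interval as the integer part of frame size over sample size are assumptions.

```python
import random

def systematic_sample(frame, n, seed=None):
    """Systematic random sampling: choose a random start within the
    sampling interval k = len(frame) // n, then take every k-th element."""
    k = len(frame) // n                 # sampling interval (assumed floor)
    rng = random.Random(seed)
    start = rng.randrange(k)            # random start in [0, k)
    return [frame[start + i * k] for i in range(n)]
```

With a frame of 1,000 households and n = 192, the interval is 5, so the procedure visits every fifth household from a random starting point.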

Keywords: decision making, effects, land certification, land right, tenure security

Procedia PDF Downloads 180
222 Application of the Pattern Method to Form the Stable Neural Structures in the Learning Process as a Way of Solving Modern Problems in Education

Authors: Liudmyla Vesper

Abstract:

The problems of modern education are large-scale and diverse. The aspirations of parents, teachers, and experts converge: everyone is interested in raising a generation of whole, well-educated persons. Both the family and society expect the future generation to be self-sufficient, desirable in the labor market, and capable of lifelong learning. Today's children have a powerful potential that is difficult to realize under traditional school approaches. Focusing on STEM education in practice often ends with the simple use of computers and gadgets during class. "Science", "technology", "engineering" and "mathematics" are difficult to combine within school and university curricula, which have not changed much over the last 10 years. Solving the problems of modern education largely depends on innovative and practising teachers who develop and implement effective educational methods and programs, and who propose pedagogical practices that allow students to master large-scale knowledge and apply it in practice. Effective education involves the creation of stable neural structures during the learning process, which allow knowledge to be preserved and expanded throughout life. The author proposes a method of integrated lesson-cases based on mathematical patterns for forming a holistic perception of the world. This method and program are scientifically substantiated and have more than 15 years of practical application experience in school and university classrooms. The first results of the practical application of the author's methodology and curriculum were announced at the International Conference "Teaching and Learning Strategies to Promote Elementary School Success" (April 22-23, 2006, Yerevan, Armenia), part of the IREX-administered 2004-2006 Multiple Component Education Project. The program is based on the concept of interdisciplinary connections and its implementation in the process of continuous learning.
This allows students to retain and expand knowledge throughout life according to a single pattern: the pattern principle stores information on different subjects according to one scheme (pattern), using long-term memory. This is how stable neural structures are created. The author also suggests that a similar method could be successfully applied to the training of artificial neural networks, though this assumption requires further research and verification. The educational method and program proposed by the author meet modern requirements for education, which involve mastering various areas of knowledge starting from an early age. This approach makes it possible to engage the child's cognitive potential as fully as possible and direct it towards the preservation and development of individual talents. According to the methodology, at the early stages of learning students understand the connections between school subjects (the so-called "sciences" and "humanities") and with real life, applying the knowledge gained in practice. This approach allows students to realize their natural creative abilities and talents, which makes it easier to navigate professional choices and find their place in life.

Keywords: science education, maths education, AI, neuroplasticity, innovative education problem, creativity development, modern education problem

Procedia PDF Downloads 22
221 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition

Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can

Abstract:

To effectively combat climate change, many countries around the world have committed to decarbonising their electricity sectors, along with promoting the large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of multi-year TNEP is to determine the network infrastructure necessary to supply the projected demand in a cost-efficient way, considering the evolution of the generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must also be considered within the TNEP, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially when applied to realistic-sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems such as the TNEP. In particular, the use of AI together with decomposition-based mathematical optimization strategies has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method.
One of the challenges of using Column Generation for the TNEP problem is that the subproblems are of mixed-integer nature, so solving them requires significant time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results of the linearized version. A key feature of the proposal is that the binary classifier is integrated into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% in estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, integrating the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required to solve the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classification technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as on other power system models.
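The core idea, predicting each subproblem binary variable from its value in the LP relaxation, can be sketched with a deliberately simple stand-in classifier. The abstract does not specify the classifier, so the threshold rule below is an assumption for illustration; the "defer when uncertain" return value mirrors the optimality safeguard, since undecided variables would be handed back to the exact MIP solver.

```python
# Hypothetical sketch: learn a cut on relaxed LP values that separates
# variables fixed to 0 from those fixed to 1 in known integer solutions,
# then predict new variables, deferring near-threshold cases to the solver.

def fit_threshold(relaxed_values, integer_values):
    """Pick the cut on relaxed values that best reproduces the 0/1 labels."""
    best_t, best_acc = 0.5, -1.0
    for t in sorted(set(relaxed_values)):
        acc = sum((v >= t) == bool(y)
                  for v, y in zip(relaxed_values, integer_values))
        acc /= len(integer_values)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def classify(relaxed_value, threshold, margin=0.1):
    """Return a 0/1 prediction, or None when the relaxed value is too close
    to the threshold (those variables go back to the exact MIP solver)."""
    if abs(relaxed_value - threshold) < margin:
        return None
    return 1 if relaxed_value >= threshold else 0
```

A learned model (e.g. logistic regression on several LP features) would replace `fit_threshold` in practice; the deferral logic is what lets the overall Column Generation scheme retain its optimality guarantee.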

Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning

Procedia PDF Downloads 49
220 Islamic Extremist Groups' Usage of Populism in Social Media to Radicalize Muslim Migrants in Europe

Authors: Muhammad Irfan

Abstract:

The rise of radicalization within Islam has spawned a new era of global terror. The battlefield successes of ISIS and the Taliban are fuelled by an ideological war waged, largely and successfully, in the media arena. This research will examine how Islamic extremist groups are using media modalities and populist narratives to influence migrant Muslim populations in Europe towards extremism. In 2014, ISIS shocked the world by exporting horrifically graphic forms of violence on social media. Their Muslim support base was largely disgusted and repelled. In response, they reconfigured their narrative by introducing populist 'hooks', astutely portraying the Muslim populace as oppressed and exploited by unjust, corrupt autocratic regimes and Western power structures. Within this crucible of real and perceived oppression, hundreds of thousands of the most desperate, vulnerable and abused migrants left their homelands, risking their lives in the hope of finding peace, justice, and prosperity in Europe. Instead, many encountered social stigmatization, detention and/or discrimination for being illegal migrants, for lacking resources, and simply for being Muslim. This research will examine how Islamic extremist groups exploit the disenfranchisement of these migrant populations and use populist messaging on social media to steer them towards violent extremism. ISIS, in particular, formulates specifically encoded messages for newly arriving Muslims in Europe, preying upon their vulnerability. Violence is posited as a populist response to the tyranny of European oppression. This research will analyze the factors and indicators which propel Muslim migrants along the spectrum from resilience to violent extremism. Expected outcomes are the identification of factors which influence vulnerability towards violent extremism; an early-warning detection framework; predictive analysis models; and de-radicalization frameworks.
This research will provide valuable tools (practical and policy level) for European governments, security stakeholders, communities, policy-makers, and educators; it is anticipated to contribute to a de-escalation of Islamic extremism globally.

Keywords: populism, radicalization, de-radicalization, social media, ISIS, Taliban, shariah, jihad, Islam, Europe, political communication, terrorism, migrants, refugees, extremism, global terror, predictive analysis, early warning detection, models, strategic communication, populist narratives, Islamic extremism

Procedia PDF Downloads 96
219 Innovation Eco-Systems and Cities: Sustainable Innovation and Urban Form

Authors: Claudia Trillo

Abstract:

Regional innovation eco-systems are composed of a variety of interconnected urban innovation eco-systems, mutually reinforcing each other and making the whole territorial system successful. Combining principles drawn from new economic growth theory and from the socio-constructivist approach to economic growth with the new geography of innovation emerging from the networked nature of innovation districts, this paper explores the spatial configuration of urban innovation districts, with the aim of unveiling replicable spatial patterns and transferable portfolios of urban policies. While some authors suggest that cities should be considered ideal natural clusters, supporting cross-fertilization and innovation thanks to the physical setting they provide for the construction of collective knowledge, a considerable distance still persists between regional development strategies and urban policies. Moreover, while public and private policies supporting entrepreneurship normally consider innovation the cornerstone of any action aimed at lifting the competitiveness and economic success of an area, a growing body of literature suggests that innovation is non-neutral and should therefore be constantly assessed against equity and social inclusion. This paper draws on a robust qualitative empirical dataset, gathered through four years of research conducted in Boston, to provide readers with an evidence-based set of recommendations drawn from the lessons learned through the investigation of the chosen innovation districts in the Boston area. The evaluative framework used for assessing the overall performance of the chosen case studies stems from the rationale of the Habitat III Sustainable Development Goals. The concept of inclusive growth has been considered essential for assessing the social innovation domain in each of the chosen cases.
The key success factors for the development of the Boston innovation ecosystem can be generalized as follows: 1) a quadruple helix model embedded in the physical structure of the two cities (Boston and Cambridge), in which anchor Higher Education (HE) institutions continuously nurture the entrepreneurial environment; 2) an entrepreneurial approach on the part of local governments, eliciting risk-taking and bottom-up civic participation in tackling key issues in the city; 3) a networking structure of intermediary actors supporting entrepreneurial collaboration, cross-fertilization and co-creation, which collaborate at multiple scales, thus enabling positive spillovers from the stronger to the weaker contexts; 4) awareness of the socio-economic value of the built environment as an enabler of cognitive networks allowing activation of collective intelligence; 5) creation of civic-led spaces enabling grassroots collaboration and cooperation. Evidence shows that there is no single magic recipe for the successful implementation of place-based, social-innovation-driven strategies. On the contrary, it is the variety of place-grounded combinations of micro and macro initiatives, embedded in the social and spatial fine grain of places and encompassing a diversity of actors, that can create the conditions enabling places to thrive and local economic activities to grow in a sustainable way.

Keywords: innovation-driven sustainable eco-systems, place-based sustainable urban development, sustainable innovation districts, social innovation, urban policies

Procedia PDF Downloads 81
218 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank

Authors: Rachana Tank, Donald. M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh

Abstract:

Alzheimer’s Research UK estimates that by 2050, 2 million individuals will be living with Late-Onset Alzheimer’s Disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over decades before reaching clinically diagnosable LOAD, and studies have utilised approaches such as genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI measures and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with MRI or cognitive measures have focused on specific aspects of hippocampal structure, in relatively small samples and with poor control for confounders such as smoking. Both the sample size of this study and the discovery GWAS sample are, to our knowledge, larger than in previous studies. Genetic interactions between the loci showing the largest effects in GWAS have not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason we also analyse genetic interactions of PGR with the APOE e4 genotype. The hypothesis is that high genetic loading, based on a polygenic risk score of 21 SNPs for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analysis (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 UK Biobank participants. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter (WM) tracts and structural volumes.
Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution and prospective memory. The interaction of APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1 or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression and social deprivation. Preliminary results suggest the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not the hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P<0.01) and reaction time (standardised beta = 2.04, P<0.01). The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke, or to be socioeconomically deprived, and report fewer health conditions, compared to the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. Further discussion and results are pending.
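The construction of a polygenic risk score itself is a simple weighted sum, which can be sketched as follows. The SNP identifiers and weights below are made up for illustration; the study's actual score uses 21 SNPs with effect sizes (log odds ratios) taken from the Kunkle et al. summary statistics.

```python
# Minimal sketch of a polygenic risk score: for each SNP in the score, the
# participant's risk-allele dosage (0, 1 or 2 copies) is multiplied by the
# SNP's effect size from the discovery GWAS, and the products are summed.

def polygenic_risk_score(dosages, weights):
    """dosages: SNP id -> risk-allele count for one participant;
    weights: SNP id -> GWAS effect size (beta). Missing genotypes
    are treated as dosage 0 in this simplified sketch."""
    return sum(beta * dosages.get(snp, 0) for snp, beta in weights.items())
```

The resulting score would then enter the regression models as a continuous predictor, alongside the APOE e4 interaction term and the covariates listed above.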

Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI

Procedia PDF Downloads 92