Search results for: deficit lens bias

67 Overlaps and Intersections: An Alternative Look at Choreography

Authors: Ashlie Latiolais

Abstract:

Architecture, as a discipline, is on a trajectory of extension beyond the boundaries of buildings and, increasingly, is coupled with research that connects to alternative and typically disjointed disciplines. A “both/and” approach and (expanded) definition of architecture, as depicted here, expands the margins that contain the profession. Figuratively, architecture is a series of edges, events, and occurrences that establishes a choreography or stage by which humanity exists. The way in which architecture controls and suggests movement through these spaces, whether within a landscape, city, or building, can be viewed as a datum by which the “dance” of everyday life occurs. This submission views the realm of architecture through the lens of movement and dance as a cross-fertilizer of collaboration, tectonic, and spatial geometry investigations. “Designing on digital programs puts architects at a distance from the spaces they imagine. While this has obvious advantages, it also means that they lose the lived, embodied experience of feeling what is needed in space—meaning that some design ideas that work in theory ultimately fail in practice.” By studying the body in motion through real-time performance, a more holistic understanding of architectural space surfaces and new prospects for theoretical teaching pedagogies emerge. The atypical intersection rethinks how architecture is considered, created, and tested, similar to how “dance artists often do this by thinking through the body, opening pathways and possibilities that might not otherwise be accessible” – this is the essence of this poster submission as explained through unFOLDED, a creative performance work. A new language is materialized through unFOLDED, a dynamic occupiable installation by which architecture is investigated through dance, movement, and body analysis. The entry unfolds a collaboration of an architect, dance choreographer, musicians, video artist, and lighting designers to re-create one of the first documented avant-garde performing arts collaborations (Matisse, Satie, Massine, Picasso) from the Ballets Russes in 1917, entitled Parade. Architecturally, this interdisciplinary project orients and suggests motion through structure, tectonic, lightness, darkness, and shadow as it questions the navigation of the dark space (stage) surrounding the installation. Artificial light via theatrical lighting and video graphics brought the blank canvas to life – where the sensitive mix of musicality coordinated with the structure’s movement sequencing was certainly a challenge. The upstage light from the video projections created both flickered contextual imagery and shadowed figures. When the dancers were either upstage or downstage of the structure, both silhouetted figures and revealed bodies were experienced as dancer-controlled manipulations of the installation occurred throughout the performance. The experimental performance, through structure, prompted moving (dancing) bodies in space, where the architecture served as a key component of the choreography itself. The tectonic of the delicate steel structure allowed the dancers to interact with the installation, creating a variety of spatial conditions – from the contained box of three-dimensional space, to a wall, to various abstracted geometries in between. The development of this research unveils the new role of the Architect as a Choreographer of the built environment.

Keywords: dance, architecture, choreography, installation, architect, choreographer, space

Procedia PDF Downloads 91
66 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions. Evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating energy and environmental pressures. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. The classic DEA takes the decision-making units (DMUs) as independent, which neglects the interaction between DMUs. Ignoring these inter-regional links may result in systematic bias in the efficiency analysis; for instance, the renewable power generated in a certain region may benefit adjacent regions, while the SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs to measure efficiency. This approach is used to study the EE of China’s power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015, then a sharp downtrend from 2019, mirroring the trend of the power transmission department. This phenomenon benefits from the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows a declining trend overall, which is reasonable once RE power is taken into consideration. The installed capacity of RE power in 2020 is 4.40 times that in 2014, while the power generation is 3.97 times; in other words, the power generation per installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the increase of RE power generation. These two aspects make the EE of the power generation department show a declining trend. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework, which sheds light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields and scales, such as industries and countries.
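
The abstract does not give enough detail to reproduce the SNDEA formulation itself, but the envelopment program it extends is standard. Below is a minimal sketch of a classic input-oriented DEA model solved as a linear program with SciPy; the data, variable names, and DMU setup are illustrative assumptions, and the spatial spillover and slack terms of the SNDEA are not included.

```python
# Minimal sketch of a classic input-oriented DEA envelopment program,
# the baseline that a spatial network DEA with slacks would extend.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0; X is (m, n) inputs, Y is (s, n) outputs."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                          # sum_j lambda_j * x_ij <= theta * x_i,j0
        A_ub.append(np.r_[-X[i, j0], X[i, :]])
        b_ub.append(0.0)
    for r in range(s):                          # sum_j lambda_j * y_rj >= y_r,j0
        A_ub.append(np.r_[0.0, -Y[r, :]])
        b_ub.append(-Y[r, j0])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]                             # theta in (0, 1]; 1 means efficient

# Toy example: 2 inputs, 1 output, 4 DMUs (e.g., regional power systems)
X = np.array([[4.0, 7.0, 8.0, 4.0],
              [3.0, 3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(dea_efficiency(X, Y, j), 3) for j in range(4)])
```

In the SNDEA described above, additional constraints and slack variables would link each DMU's inputs and outputs to those of neighboring regions to capture the spillover effects.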

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 109
65 The Bidirectional Effect between Parental Burnout and the Child’s Internalized and/or Externalized Behaviors

Authors: Aline Woine, Moïra Mikolajczak, Virginie Dardier, Isabelle Roskam

Abstract:

Background information: Becoming a parent is said to be the happiest event one can ever experience in one’s life. This popular (and almost absolute) truth–which no reasonable and decent human being would ever dare question on pain of being singled out as a bad parent–contrasts with the nuances that reality offers. Indeed, while many parents do thrive in their parenting role, some others falter and become progressively overwhelmed by their parenting role, ineluctably caught in a spiral of exhaustion. Parental burnout (henceforth PB) sets in when parental demands (stressors) exceed parental resources. While it is now generally acknowledged that PB affects the parent’s behavior in terms of neglect and violence toward their offspring, little is known about the impact that the syndrome might have on the children’s internalized (anxious and depressive symptoms, somatic complaints, etc.) and/or externalized (irritability, violence, aggressiveness, conduct disorder, oppositional disorder, etc.) behaviors. Furthermore, at the time of writing, to the best of our knowledge, no research has yet tested the reverse effect, namely, that of the child's internalized and/or externalized behaviors on the onset and/or maintenance of parental burnout symptoms. Goals and hypotheses: The present pioneering research proposes to fill an important gap in the existing literature related to PB by investigating the bidirectional effect between PB and the child’s internalized and/or externalized behaviors. Relying on a cross-lagged longitudinal study with three waves of data collection (4 months apart), our study tests a transactional model with bidirectional and recursive relations between observed variables at the three waves, as well as autoregressive paths and cross-sectional correlations. Methods: As we write this, wave-two data are being collected via Qualtrics, and we expect a final sample of about 600 participants composed of French-speaking (snowball sample) and English-speaking (Prolific sample) parents. Structural equation modeling is employed using Stata version 17. In order to retain as much statistical power as possible, we use all available data and therefore apply maximum likelihood with missing values (mlmv) as the method of estimation to compute the parameter estimates. To limit (insofar as possible) the shared method variance bias in the evaluation of the child’s behavior, the study relies on a multi-informant evaluation approach. Expected results: We expect our three-wave longitudinal study to show that PB symptoms (measured at T1) increase the occurrence/intensity of the child’s externalized and/or internalized behaviors (measured at T2 and T3). We further expect the child’s occurrence/intensity of externalized and/or internalized behaviors (measured at T1) to increase the risk of PB (measured at T2 and T3). Conclusion: Should our hypotheses be confirmed, our results will make an important contribution to the understanding of both PB and children’s behavioral issues, thereby opening interesting theoretical and clinical avenues.
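
For readers less familiar with cross-lagged specifications, the transactional model described above can be sketched schematically as follows, where PB denotes parental burnout, CB the child's internalized/externalized behaviors, the β terms the autoregressive paths, and the γ terms the cross-lagged paths; the notation is illustrative and the study's actual parameterization (including wave-specific residual correlations) may differ.

```latex
\[
\begin{aligned}
PB_{t} &= \alpha_{1} + \beta_{1}\, PB_{t-1} + \gamma_{1}\, CB_{t-1} + \varepsilon_{1t} \\
CB_{t} &= \alpha_{2} + \beta_{2}\, CB_{t-1} + \gamma_{2}\, PB_{t-1} + \varepsilon_{2t},
\qquad t = 2, 3, \qquad \operatorname{Cov}(\varepsilon_{1t}, \varepsilon_{2t}) \neq 0
\end{aligned}
\]
```

The hypothesized effects correspond to γ₂ (PB at the previous wave increasing the child's behaviors) and γ₁ (the child's behaviors at the previous wave increasing PB).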

Keywords: exhaustion, structural equation modeling, cross-lagged longitudinal study, violence and neglect, child-parent relationship

Procedia PDF Downloads 73
64 Production Factor Coefficients Transition through the Lens of State Space Model

Authors: Kanokwan Chancharoenchai

Abstract:

Economic growth can be considered an important element of a country’s development process. For developing countries like Thailand, to ensure the continuous growth of the economy, the government usually implements various policies to stimulate economic growth. These may take the form of fiscal, monetary, trade, and other policies. Because of these different aspects, understanding the factors relating to economic growth could allow the government to introduce a proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest of not only policymakers but also academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters. The findings would contribute to the field of economic growth and provide helpful information to policymakers. The investigation is estimated through a production function with a non-linear Cobb-Douglas specification. The rate of growth is indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects, such as human capital, international activity, and technological transfer from developed countries. Besides, this investigation takes internal and external instabilities into account, proxied by unobserved inflation estimates and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, the REER uncertainty of the Thai baht, one-period lagged GDP, and a dummy for the 2009 world financial crisis, presents the most suitable model. The autoregressive model assumes constant coefficients, which could introduce bias. This is not the case for the recursive coefficient model within the state space framework, which allows coefficients to transition over time. The powerful state space model provides the productivity or effect of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) specification, with the exception of the one-period lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that those factors appear stable through time since the occurrence of the world financial crisis together with the political situation in Thailand. These two events could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the rather low technology of capital goods imported from abroad. The Thai government should apply proactive policies via taxation and specific credit policy to improve technological advancement, for instance. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect over the sample period. This could be explained by the loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should carefully handle regulations and investment incentive policy by focusing on strengthening small and medium enterprises.
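
The recursive-coefficient idea described above can be written compactly in state space form. A common minimal specification, assumed here for illustration (the study's exact transition dynamics are not given in the abstract), treats each production-factor coefficient as a random walk:

```latex
\[
\begin{aligned}
\Delta \ln \mathrm{GDP}_{t} &= \mathbf{x}_{t}^{\top} \boldsymbol{\beta}_{t} + \varepsilon_{t},
  \qquad \varepsilon_{t} \sim N(0, \sigma_{\varepsilon}^{2}) && \text{(measurement equation)} \\
\boldsymbol{\beta}_{t} &= \boldsymbol{\beta}_{t-1} + \boldsymbol{\eta}_{t},
  \qquad \boldsymbol{\eta}_{t} \sim N(0, \mathbf{Q}) && \text{(transition equation)}
\end{aligned}
\]
```

Here x_t collects the significant regressors listed above, and the Kalman filter yields the time path of each coefficient β_t, which is what allows the productivity of each factor to vary across the sample period.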

Keywords: autoregressive model, economic growth, state space model, Thailand

Procedia PDF Downloads 151
63 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and the limitation of using feature design and manual extraction methods is the loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in imaging learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary evaluation metric. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. Based on the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results showed that the ResNeXt DL model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
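
As a rough illustration of the deep learning setup described above, the sketch below builds a ResNeXt-based regressor with a single continuous output (age) and an MAE objective. The specific ResNeXt variant, optimizer, and learning rate are assumptions, since the abstract does not report them.

```python
# Minimal sketch of a ResNeXt-based age regressor trained with MAE (L1) loss.
# Architecture variant and training hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnext50_32x4d(weights=None)      # ResNeXt backbone
model.fc = nn.Linear(model.fc.in_features, 1)     # single continuous output: age

criterion = nn.L1Loss()                           # MAE, the study's primary metric
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, ages):
    """images: (B, 3, 224, 224) float tensor; ages: (B,) float tensor."""
    optimizer.zero_grad()
    preds = model(images).squeeze(1)
    loss = criterion(preds, ages)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, the augmentation pipeline mentioned in the abstract (random rotation, vertical flip, normalization, resizing to 224×224) would be applied before each `train_step` call.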

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 73
62 The Role and Challenges of Media in the Transformation of Contemporary Nigerian Democracies

Authors: Henry Okechukwu Onyeiwu

Abstract:

The role of media in the transformation of contemporary Nigeria's democracies is multifaceted and profoundly impactful. As Nigeria navigates its complex socio-political landscape, media serves as both a catalyst for democratic engagement and a platform for public discourse. This paper explores the various dimensions through which media influences democracy in Nigeria, including its role in informing citizens, shaping public opinion, and providing a forum for diverse voices. The increasing penetration of social media has revolutionized the political sphere, empowering citizens to participate in governance and hold leaders accountable. However, challenges such as misinformation, censorship, and media bias continue to pose significant threats to democratic integrity. This study critically analyzes the interplay between traditional and new media, highlighting their contributions to electoral processes, civic education, and advocacy for human rights. Ultimately, the findings illustrate that while media is a crucial agent for democratic transformation, its potential can only be realized through a commitment to journalistic integrity and the promotion of media literacy among the Nigerian populace. The media plays a critical role in shaping public democracies in Nigeria, yet it faces a myriad of challenges that hinder its effectiveness. This paper examines the various obstacles confronting media broadcasting in Nigeria, which range from political interference and censorship to issues of professionalism and the proliferation of fake news. Political interference is particularly pronounced, as government entities and political actors often attempt to control narratives, compromising the independence of media outlets. This control often manifests in the form of censorship, where journalists face threats and harassment for reporting on sensitive topics related to governance, corruption, and human rights abuses. Moreover, the rapid rise of social media has introduced a dual challenge; while it offers a platform for citizen engagement and diverse viewpoints, it also facilitates the spread of misinformation and propaganda. The lack of media literacy among the populace exacerbates this issue, as citizens often struggle to discern credible information from false narratives. Additionally, economic constraints deeply affect the sustainability and independence of many broadcasting organizations. Advertisers may unduly influence content, leading to sensationalism over substantive reporting. This paper argues that for media to effectively contribute to Nigerian public democracies, there needs to be a concerted effort to address these challenges. Strengthening journalistic ethics, enhancing regulatory frameworks, and promoting media literacy among citizens are essential steps in fostering a more vibrant and accountable media landscape. Ultimately, this research underscores the necessity of a resilient media ecosystem that can truly support democratic processes, empower citizens, and hold power to account in contemporary Nigeria.

Keywords: media, democracy, socio-political, governance

Procedia PDF Downloads 21
61 Assessing the Threat of Dual Citizenship to State Interests: A Case Study of Sri Lanka

Authors: Kasuri Kaushalya Pathirana Pahamunu Pathirannehelage

Abstract:

Recent changes in the international system challenged the traditional idea of citizenship, prompting a need for a clearer definition. With the rapid globalization and shifting geopolitical dynamics, the concept of dual citizenship has emerged as a focal point of debate regarding its implications for state interests. As borders become less rigid and people identify with multiple nationalities, the traditional idea of citizenship is changing. This change is especially important given the increased connections between countries and the challenges that sovereign states face. While many countries accept dual citizenship, others are hesitant, seeing it as a potential threat to their national goals. This difference underscores the complicated relationship between national interests and the evolving concept of citizenship in the modern world. This study seeks to critically assess whether dual citizenship represents a significant threat to sovereign states by examining its effects across economic, social, and political sectors. Employing qualitative methodologies, including the analysis of published articles, reports, government acts, and a mix of primary and secondary sources, this research delves into the complexities surrounding dual citizenship. The findings reveal a nuanced landscape, showcasing both positive and negative impacts on state sovereignty and international cooperation. By exploring the tension between multinationalism and state interests, particularly through the lens of Sri Lanka’s evolving policies, this study aims to contribute valuable insights to the fields of political science and international relations, ultimately addressing the question of dual citizenship's implications for state interests. The evolving framework of dual citizenship in Sri Lanka provides a unique opportunity to examine its implications for various aspects of the nation. Specifically, this study will analyse the impact of dual citizenship on the country's economy, international cooperation, and social development. By exploring these dimensions, the research aims to provide a comprehensive understanding of how dual citizenship influences not only individual rights but also broader state interests and development goals within the context of globalization. It’s crucial to assess the potential threats posed by dual citizenship, as it can impact national security, economic stability, social unity, and political issues within countries. Understanding these effects is important for policymakers and researchers as they work to balance globalization with the need to protect state sovereignty. Dual citizenship presents a complex interplay of challenges and benefits to state interests, influencing critical areas such as international cooperation and state sovereignty. On the one hand, it can foster stronger ties between nations, enhance economic collaboration, and encourage cultural exchange, ultimately contributing to more robust international relationships. On the other hand, it may create tensions related to national identity, complicate governance, and raise concerns about loyalty and allegiance, which can challenge the notion of state sovereignty. As countries navigate these dual realities, it becomes essential to carefully assess and manage the implications of dual citizenship. By doing so, states can harness the potential advantages while addressing the associated risks, ultimately striving for a balance that promotes both national interests and international relations.

Keywords: dual citizenship, globalization, sustainable development, nationalism

Procedia PDF Downloads 19
60 Effectiveness of Dry Needling with and without Ultrasound Guidance in Patients with Knee Osteoarthritis and Patellofemoral Pain Syndrome: A Systematic Review and Meta-Analysis

Authors: Johnson C. Y. Pang, Amy S. N. Fu, Ryan K. L. Lee, Allan C. L. Fu

Abstract:

Dry needling (DN) is one of the puncturing methods that involves the insertion of needles into tender spots of the human body without the injection of any substance. DN has long been used to treat patients with knee pain caused by knee osteoarthritis (KOA) and patellofemoral pain syndrome (PFPS), but its reported effectiveness is still inconsistent. This study aimed to conduct a systematic review and meta-analysis to assess the intervention methods and effects of DN with and without ultrasound guidance for treating pain and dysfunctions in people with KOA and PFPS. Design: This systematic review adhered to the PRISMA reporting guidelines. The registration number of the study protocol published in the PROSPERO database was CRD42021221419. Six electronic databases were searched manually through CINAHL Complete (1976-2020), Cochrane Library (1996-2020), EMBASE (1947-2020), Medline (1946-2020), PubMed (1966-2020), and PsycINFO (1806-2020) in November 2020. Randomized controlled trials (RCTs) and controlled clinical trials were included to examine the effects of DN on knee pain, including KOA and PFPS. The key concepts included were: DN, acupuncture, ultrasound guidance, KOA, and PFPS. Risk of bias assessment and qualitative analysis were conducted by two independent reviewers using the PEDro score. Results: Fourteen articles met the inclusion criteria, and eight of them were high-quality papers in accordance with the PEDro score. There were variations in the techniques of DN. These included the direction, depth of insertion, number of needles, duration of stay, needle manipulation, and the number of treatment sessions. Meta-analysis was conducted on eight articles. The DN group showed positive short-term effects (from immediately after DN to less than 3 months) on pain reduction for both KOA and PFPS, with an overall standardized mean difference (SMD) of -1.549 (95% CI=-0.588 to -2.511) and substantial heterogeneity (P=0.002, I²=96.3%). In subgroup analysis, DN demonstrated significant effects on pain reduction in PFPS (p < 0.001) that could not be found in subjects with KOA (P=0.302). At 3 months post-intervention, DN also induced significant pain reduction in both subjects with KOA and PFPS, with an overall SMD of -0.916 (95% CI=-0.133 to -1.699) and substantial heterogeneity (P=0.022, I²=95.63%). Besides, DN induced significant short-term improvement in function, with an overall SMD=6.069 (95% CI=8.595 to 3.544) and substantial heterogeneity (P<0.001, I²=98.56%), when the analysis was conducted on both KOA and PFPS groups. In subgroup analysis, only PFPS showed a positive result (SMD=6.089, P<0.001), while KOA showed a statistically insignificant short-term effect (P=0.198). Similarly, at 3 months post-intervention, significant improvement in function after DN was found when the analysis was conducted in both groups, with an overall SMD=5.840 (95% CI=9.252 to 2.428) and substantial heterogeneity (P<0.001, I²=99.1%), but only PFPS showed significant improvement in subgroup analysis (P=0.002, I²=99.1%). Conclusions: The application of DN in KOA and PFPS patients varies among practitioners. DN is effective in reducing pain and dysfunction in the short term and at 3 months post-intervention in individuals with PFPS. To the best of our knowledge, no study has reported the effects of DN with ultrasound guidance on KOA and PFPS. The longer-term effects of DN on KOA and PFPS await further study.
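
For reference, the two quantities reported throughout the results above follow their standard definitions; the exact pooling weights (fixed- vs. random-effects) used by the authors are not stated in the abstract.

```latex
\[
\mathrm{SMD} = \frac{\bar{X}_{\mathrm{DN}} - \bar{X}_{\mathrm{control}}}{SD_{\mathrm{pooled}}},
\qquad
I^{2} = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%
\]
```

Here Q is Cochran's heterogeneity statistic and k the number of pooled studies; I² values above roughly 75%, as reported here, are conventionally read as considerable heterogeneity.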

Keywords: dry needling, knee osteoarthritis, patellofemoral pain syndrome, ultrasound guidance

Procedia PDF Downloads 134
59 Memories of Lost Fathers: The Unfinished Transmission of Generational Values in Hungarian Cinema

Authors: Peter Falanga

Abstract:

During the process of de-Stalinization that began in 1956 with the Twentieth Congress of the Soviet Communist Party, many filmmakers in Hungary chose to explore their country’s political discomforts by using Socialist Realism as a negative model against which they could react to the dominating ideology. A renewed national film industry and a more permissive political regime would allow filmmakers to take to task the plight of the preceding generation who had experienced the fatal political turmoil of both World Wars and the purges of Stalin. What follows is no longer the multigenerational unity found in Socialist Realism wherein both the old and the young embrace Stalin’s revolutionary optimism; instead, the protagonists are parentless, and thus their connection to the previous generation is partially severed. In these films, violent historical forces leave one generation to search for both a connection with their family’s past, and for moral guidance to direct their future. István Szabó’s Father (1966), Márta Mészáros’ Diary for My Children (1984), and Pál Gábor’s Angi Vera (1978) each consider the fraught relationship between successive generations through the lens of postwar youth. A characteristic each of their protagonists shares is that they are all missing one or both parents, and cope with familial loss either through recalling memories of their parents in dream-like sequences, or, in the case of Angi Vera, through embracing the surrogate paternalism that the Communist Party promises to provide. This paper considers the argument these films present about the progress of Hungarian history, and how this topic is explored in more recent films that similarly focus on the transmission of generational values. Scholars such as László Strausz and John Cunningham have written on the continuous concern with the transmission of generational values in more recent films such as István Szabó’s Sunshine (1999), Béla Tarr’s Werckmeister Harmonies (2000), György Pálfi’s Taxidermia (2006), Ágnes Kocsis’ Pál Adrienn (2010), and Kornél Mundruczó’s Evolution (2021). These films, they argue, make intimate portrayals of the various sweeping political changes in Hungary’s history and question how these epochs or events have impacted Hungarian identities. If these films attempt to personalize historical shifts of Hungary, then what is the significance of featuring characters who have lost one or both parents? An attempt to understand this coherent trend in Hungarian cinema will profit from examining the earlier, celebrated films of Szabó, Mészáros, and Gábor, who inaugurated this preoccupation with generational values. The pervasive interplay of dreams and memory in their films invites an additional element to their argument concerning historical progression. This paper incorporates Richard Teniman’s notion of the “dialectics of memory,” in which memory is in a constant process of negation and reinvention, to explain why these directors prefer to explore Hungarian identity through the disarranged form of psychological realism over the linear causality structure of historical realism.

Keywords: film theory, Eastern European Studies, film history, Eastern European History

Procedia PDF Downloads 122
58 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses rarely fit a particular parametric form. On the other hand, robust estimates for the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed which are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived based on constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient. To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, response history, previous patients’ covariates and also the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of a patient allocation to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The link function based design achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs have been highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to the traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to get treated with the better treatment during the trial, thus making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
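
To make the allocation mechanics concrete, the sketch below implements the commonly used two-arm DBCD allocation function (the Hu and Zhang form). It is illustrative only: in the designs described above, the target proportion would itself be a function of the sequentially estimated Cox regression coefficients and the incoming patient's covariates, which is not reproduced here.

```python
# Minimal two-arm sketch of the doubly-adaptive biased coin design (DBCD)
# allocation function; rho is taken as given rather than estimated from data.
def dbcd_probability(n1, n, rho, gamma=2.0):
    """Probability of assigning the next patient to arm 1.

    n1    -- patients allocated to arm 1 so far
    n     -- total patients allocated so far
    rho   -- current estimate of the target allocation proportion for arm 1
    gamma -- tuning parameter; larger values push allocation more firmly toward rho
    """
    if n == 0 or n1 == 0 or n1 == n:           # guard against division by zero
        return rho
    x = n1 / n                                  # observed allocation proportion
    w1 = rho * (rho / x) ** gamma
    w2 = (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return w1 / (w1 + w2)

# Example: target 0.6 to arm 1, arm 1 currently under-allocated at 20/50
print(round(dbcd_probability(n1=20, n=50, rho=0.6), 3))   # noticeably above 0.6
```

ERADE replaces this smooth function with a discrete rule, which is why it converges more slowly to the target proportion, as noted above.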

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 165
57 Exploring the Influence of Maternal Self-Discrepancy on Psychological Well-Being: A Study of Middle-Aged Japanese Mothers

Authors: Chooi Fong Lee

Abstract:

Maternal psychological well-being has been investigated from various aspects, such as social support and employment status. However, a perspective from self-discrepancy theory has not been employed. Moreover, most studies have focused on young mothers; less is understood about middle-aged mothers’ psychological well-being. This research examined the influence of maternal self-discrepancy between actual and ideal self on maternal role achievement, state anxiety, trait anxiety, and subjective well-being among Japanese middle-aged mothers across their employment status. A pilot study with 20 Japanese mother participants (aged 40-55; 9 regular-employed, 8 non-regular-employed, and 3 homemakers) was conducted to assess the viability of the survey questionnaires (Maternal Role Achievement Scale, State-Trait Anxiety Inventory, Subjective Well-being Scale, and a self-report questionnaire). The self-report questionnaire prompted participants to list up to 3 ideal selves they aspired to be and rate the extent to which their actual selves deviated from their ideal selves on a 7-point scale (1 = not at all; 4 = medium; 7 = extremely). Self-discrepancy scores were calculated by subtracting participants’ degree ratings from the 7-point scale, summing them up, and then dividing the total by 3. The final sample consisted of 241 participants: 97 regular-employed, 87 non-regular-employed, and 57 homemaker mothers. We ensured participants were randomly selected to mitigate bias. The results show that regular-employed mothers tend to exhibit lower self-discrepancy scores compared to non-regular-employed and homemaker mothers. Moreover, the discrepancy between actual and ideal self negatively correlated with maternal role achievement, state anxiety, and subjective well-being, while positively correlating with trait anxiety. Trait anxiety arises when one feels they did not meet their ideal self, as evidenced by higher levels in homemaker mothers, who experience lower state anxiety. Conversely, regular-employed mothers exhibit higher state anxiety but lower trait anxiety, suggesting satisfaction in their professional pursuits despite balancing work and family responsibilities. Full-time maternal roles contribute to lower state anxiety but higher trait anxiety among homemaker mothers due to a lack of personal identity achievement. Non-regular-employed mothers show similarities to homemaker mothers. In self-reports, regular-employed mothers highlight support and devotion to their children’s development, while non-regular-employed mothers seek life fulfillment through part-time work alongside child-rearing duties. Homemaker mothers emphasize qualities like sociability and communication skills, potentially influencing their self-discrepancy scores. Furthermore, the hierarchical multiple regression analysis revealed that the discrepancy between actual and ideal self significantly predicts subjective well-being. In conclusion, the findings offer valuable insights into the impact of maternal self-discrepancy on psychological well-being among middle-aged Japanese mothers across different employment statuses. Understanding these dynamics becomes crucial as contemporary women increasingly pursue higher education and depart from traditional motherhood norms. Working toward one’s ideal self might contribute to a mother’s psychological well-being. Acknowledgment: This project was made possible with funding support from the Japan ICU Foundation.

Keywords: maternal employment, maternal role, self-discrepancy, state-trait anxiety, subjective well-being

Procedia PDF Downloads 64
56 Polarization as a Proxy of Misinformation Spreading

Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo

Abstract:

Information, rumors, and debates may shape and impact public opinion heavily. In the latest years, several concerns have been expressed about social influence on the Internet and the outcome that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people –i.e., echo chambers– where they reinforce and polarize their opinions. In this way, the potential benefits coming from the exposure to different points of view may be reduced dramatically, and individuals' views may become more and more extreme. Such a context fosters misinformation spreading, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors –e.g., the hypothetical and hazardous link between vaccines and autism– suggests that social media do have the power to misinform, manipulate, or control public opinion. As an example, current approaches such as debunking efforts or algorithm-driven solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when containing deliberately false claims, while dissenting information is mainly ignored, influences users’ emotions negatively, and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, posing serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and to better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the behavior of more than 300M users with respect to news consumption on Facebook over a time span of six years (2010-2015). Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit –the British referendum to leave the European Union– debate, where we observe the spontaneous emergence of two well segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives on online debating. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users’ traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions.

Keywords: information spreading, misinformation, narratives, online social networks, polarization

Procedia PDF Downloads 291
55 Three-Stage Least Squares Models of Station-Level Subway Ridership: Incorporating an Analysis of Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of a solution to the economic, energy, and environmental challenges. Furthermore, it ultimately contributes to the improvement of people’s quality of life. To take advantage of these benefits, the city of Seoul has tried to construct an integrated transit system including both subways and buses. As a result, approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task to provide a more convenient and pleasant transit environment. Therefore, the critical objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding a statistical approach to estimate subway ridership at a station level, many previous studies relied on Ordinary Least Squares regression, but there was a lack of studies considering the endogeneity issues which might arise in the subway ridership prediction model. This study focused on both discovering the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership. It could ultimately contribute to developing more accurate subway ridership estimation that accounts for statistical bias. The spatial scope of the study covers Seoul city in South Korea, and it includes 243 subway stations and 10,120 bus stops, with the temporal scope set over twenty-four hours in one-hour interval time panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data in 2015 and 2016. First, integrated subway-bus network topology measures, which capture connectivity, centrality, transitivity, and reciprocity, were estimated based on complex network theory. The results of the integrated transit network topology analysis were compared to those of the subway-only network topology. Also, a non-recursive approach, Three-Stage Least Squares, was applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demands. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, safety index, transit facility attributes, and dummies for seasons and time zone. Consequently, it was found that network topology measures had significant effects. In particular, centrality measures showed an elasticity of 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, it was shown that bus demand and subway ridership were endogenous in a non-recursive manner, as predicted bus ridership and predicted subway ridership are statistically significant in OLS regression models. Therefore, the three-stage least squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
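
The network topology step described above can be illustrated with a toy integrated graph; the sketch below computes station-level closeness and betweenness centrality with networkx. The node names, transfer links, and use of an undirected, unweighted graph are illustrative assumptions, and the 3SLS estimation stage is not shown.

```python
# Sketch of integrated-network topology measures on a toy bus-subway graph.
import networkx as nx

G = nx.Graph()
# Subway stations and bus stops as nodes, tagged by mode
G.add_nodes_from(["subway_A", "subway_B", "subway_C"], mode="subway")
G.add_nodes_from(["bus_1", "bus_2"], mode="bus")
# Edges: subway links, bus links, and transfer (intermodal) links
G.add_edges_from([("subway_A", "subway_B"), ("subway_B", "subway_C"),
                  ("bus_1", "bus_2"),
                  ("bus_1", "subway_B"), ("bus_2", "subway_C")])

closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)          # a transitivity-type measure, kept for reference

# Station-level measures that would enter the ridership model as regressors
for station in ("subway_A", "subway_B", "subway_C"):
    print(station, round(closeness[station], 3), round(betweenness[station], 3))
```

In the full analysis, these station-level measures would be joined to the hourly Smart Card ridership panels and passed, together with the other covariates, into the 3SLS system.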

Keywords: integrated transit system, network topology measures, three-stage least squares, endogeneity, subway ridership

Procedia PDF Downloads 177
54 Using Structural Equation Modeling to Measure the Impact of Young Adult-Dog Personality Characteristics on Dog Walking Behaviours during the COVID-19 Pandemic

Authors: Renata Roma, Christine Tardif-Williams

Abstract:

Engaging in daily walks with a dog (Canis lupus familiaris) during the COVID-19 pandemic may be linked to feelings of greater social connectedness and global self-worth, and lower stress, after controlling for mental health issues, lack of physical contact with others, and other stressors associated with the current pandemic. Therefore, maintaining a routine of dog walking might mitigate the effects of stressors experienced during the pandemic and promote well-being. However, many dog owners do not walk their dogs for many reasons, which are related to the owner’s and the dog’s personalities. The consistency of certain personality characteristics among dogs demonstrates that it is possible to accurately measure different dimensions of personality in both dogs and their human counterparts. In addition, behavioural ratings (e.g., the dog personality questionnaire - DPQ) are reliable tools to assess the dog’s personality. Clarifying the relevance of personality factors in the context of young adult-dog relationships can shed light on interactional aspects that can potentially foster protective behaviours and promote well-being among young adults during the pandemic. This study examines if and how nine combinations of dog- and young adult-related personality characteristics (e.g., neuroticism-fearfulness) can amplify the influence of personality factors in the context of dog walking during the COVID-19 pandemic. Responses to an online large-scale survey among 440 (389 females; 47 males; 4 nonbinary; Mage=20.7, SD=2.13, range=17-25) young adults living with a dog in Canada were analyzed using structural equation modeling (SEM). As extraversion, conscientiousness, and neuroticism, measured through the five-factor model (FFM) inventory, are related to maintaining a routine of physical activities, these dimensions were selected for this analysis. Following an approach successfully adopted in the field of dog-human interactions, the FFM was used as the organizing framework to measure and compare the human’s and the dog’s personality in the context of dog walking. The dog-related personality dimensions activity/excitability, responsiveness to training, and fearfulness were correlated dimensions captured through the DPQ and were added to the analysis. Two questions were used to assess dog walking. The actor-partner interdependence model (APIM) was used to check if the young adult’s responses about the dog were biased; no significant bias was observed. Activity/excitability and responsiveness to training in dogs were strongly associated with dog walking. For young adults, high scores in conscientiousness and extraversion predicted more walks with the dog. Conversely, higher scores in neuroticism predicted less engagement in dog walking. For participants high in conscientiousness, the dog’s responsiveness to training (standardized=0.14, p=0.02) and the dog’s activity/excitability (standardized=0.15, p=0.00) levels moderated dog walking behaviours by promoting more daily walks. These results suggest that some combinations of young adult and dog personality characteristics are associated with greater synergy in the young adult-dog dyad that might amplify the impact of personality factors on young adults’ dog-walking routines. These results can inform programs designed to promote the mental and physical health of young adults during the Covid-19 pandemic by highlighting the impact of synergy and reciprocity in personality characteristics between young adults and dogs.
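
The moderation effects reported above are estimated within the authors' SEM, which the abstract does not fully specify; as a rough illustration of the underlying idea, the sketch below fits an ordinary regression with an interaction (product) term between owner conscientiousness and the dog's activity/excitability. The variable names and toy data are placeholders, not the study's measures.

```python
# Illustrative interaction (moderation) model: does dog activity/excitability
# strengthen the effect of owner conscientiousness on walking frequency?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "walks_per_week":    [7, 3, 10, 2, 5, 8, 1, 6],
    "conscientiousness": [4.1, 2.8, 4.5, 2.2, 3.3, 4.0, 2.0, 3.7],
    "dog_activity":      [3.9, 2.5, 4.6, 2.1, 3.0, 4.2, 1.8, 3.5],
})

# The product term carries the moderation effect of interest.
model = smf.ols("walks_per_week ~ conscientiousness * dog_activity", data=df).fit()
print(model.params)
```

In the SEM reported above, the analogous product term is estimated alongside the measurement model and the APIM correction for informant bias.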

Keywords: Covid-19 pandemic, dog walking, personality, structural equation modeling, well-being

Procedia PDF Downloads 115
53 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference

Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev

Abstract:

Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical or subtropical countries, including Sri Lanka. To reveal insights into the complexity of the dynamics of this disease and to study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate and carry insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and lack integration of all the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions in Sri Lanka at the weekly time scale. Ablation studies were conducted to determine the lag effects of time-varying climate factors, allowing delays of up to 12 weeks. The model demonstrates superior predictive performance over a pure machine learning approach when considering lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings about drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain. The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural vs. urban population. The study reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicated sensitivity to the vegetation index, both maximum and average. Predictions on testing data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of incorporating hybrid modelling techniques to use biologically informed model structures with flexible data-driven estimates of model parameters. The findings show the potential both for inference of drivers in situations of complex disease dynamics and for robust forecasting models.
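
The mechanistic core of such a hybrid model, a vector SEI compartment model coupled to a human SEIR model, can be sketched as a system of ODEs. The parameter values and the single-strain, single-region setup below are illustrative assumptions; in the study the transmission parameters are driven by climate, mobility, and socioeconomic covariates and estimated per region.

```python
# Minimal single-region vector (SEI) to human (SEIR) model with fixed parameters.
import numpy as np
from scipy.integrate import solve_ivp

def sei_seir(t, y, beta_hv, beta_vh, sigma_h, gamma_h, sigma_v, mu_v, Nh, Nv):
    Sh, Eh, Ih, Rh, Sv, Ev, Iv = y
    new_h = beta_vh * Sh * Iv / Nh          # human infections from infectious mosquitoes
    new_v = beta_hv * Sv * Ih / Nh          # mosquito infections from infectious humans
    dSh = -new_h
    dEh = new_h - sigma_h * Eh
    dIh = sigma_h * Eh - gamma_h * Ih
    dRh = gamma_h * Ih
    dSv = mu_v * Nv - new_v - mu_v * Sv     # mosquito birth/death keep Nv roughly constant
    dEv = new_v - (sigma_v + mu_v) * Ev
    dIv = sigma_v * Ev - mu_v * Iv
    return [dSh, dEh, dIh, dRh, dSv, dEv, dIv]

Nh, Nv = 1e6, 2e6
y0 = [Nh - 10, 0, 10, 0, Nv, 0, 0]
sol = solve_ivp(sei_seir, (0, 52 * 7), y0, t_eval=np.arange(0, 52 * 7, 7),
                args=(0.3, 0.3, 1 / 6, 1 / 7, 1 / 10, 1 / 14, Nh, Nv))
weekly_infectious = sol.y[2]                # infectious humans sampled weekly
print(weekly_infectious[:5])
```

In the hybrid approach described above, the fixed transmission rates here would instead be flexible, data-driven functions of the lagged climate and socioeconomic covariates.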

Keywords: compartmental model, climate, dengue, machine learning, social-economic

Procedia PDF Downloads 84
52 Consumer Preferences for Low-Carbon Futures: A Structural Equation Model Based on the Domestic Hydrogen Acceptance Framework

Authors: Joel A. Gordon, Nazmiye Balta-Ozkan, Seyed Ali Nabavi

Abstract:

Hydrogen-fueled technologies are rapidly advancing as a critical component of the low-carbon energy transition. In countries historically reliant on natural gas for home heating, such as the UK, hydrogen may prove fundamental for decarbonizing the residential sector, alongside other technologies such as heat pumps and district heat networks. While the UK government is set to take a long-term policy decision on the role of domestic hydrogen by 2026, there are considerable uncertainties regarding consumer preferences for ‘hydrogen homes’ (i.e., hydrogen-fueled appliances for space heating, hot water, and cooking). In comparison to other hydrogen energy technologies, such as road transport applications, to date, few studies have engaged with the social acceptance aspects of the domestic hydrogen transition, resulting in a stark knowledge deficit and pronounced risk to policymaking efforts. In response, this study aims to safeguard against undesirable policy measures by revealing the underlying relationships between the factors of domestic hydrogen acceptance and their respective dimensions: attitudinal, socio-political, community, market, and behavioral acceptance. The study employs an online survey (n=~2100) to gauge how different UK householders perceive the proposition of switching from natural gas to hydrogen-fueled appliances. In addition to accounting for housing characteristics (i.e., housing tenure, property type and number of occupants per dwelling) and several other socio-structural variables (e.g., age, gender, and location), the study explores the impacts of consumer heterogeneity on hydrogen acceptance by recruiting respondents from across five distinct groups: (1) fuel poor householders, (2) technology engaged householders, (3) environmentally engaged householders, (4) technology and environmentally engaged householders, and (5) a baseline group (n=~700) which filters out each of the smaller targeted groups (n=~350). This research design reflects the notion that supporting a socially fair and efficient transition to hydrogen will require parallel engagement with potential early adopters and demographic groups impacted by fuel poverty, while also accounting strongly for public attitudes towards net zero. Employing a second-order multigroup confirmatory factor analysis (CFA) in Mplus, the proposed hydrogen acceptance model is tested for fit to the data through a partial least squares (PLS) approach. In addition to testing differences between and within groups, the findings provide policymakers with critical insights regarding the significance of knowledge and awareness, safety perceptions, perceived community impacts, cost factors, and trust in key actors and stakeholders as potential explanatory factors of hydrogen acceptance. Preliminary results suggest that knowledge and awareness of hydrogen are positively associated with support for domestic hydrogen at the household, community, and national levels. However, with the exception of technology and/or environmentally engaged citizens, much of the population remains unfamiliar with hydrogen and somewhat skeptical of its application in homes. Knowledge and awareness appear critical to facilitating positive safety perceptions, alongside higher levels of trust and more favorable expectations for community benefits, appliance performance, and potential cost savings. Based on these preliminary findings, policymakers should treat raising public awareness of hydrogen as an urgent priority, in alignment with energy security, fuel poverty, and net-zero agendas.
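
The second-order factor structure described above (five first-order acceptance dimensions loading on an overall domestic hydrogen acceptance factor) can be written in lavaan-style syntax. The sketch below uses the Python semopy package purely for illustration, since the study itself uses Mplus; the indicator names are placeholders rather than the survey's actual items, and it is assumed that semopy's syntax handles the higher-order factor as written.

```python
# Illustrative second-order CFA specification for domestic hydrogen acceptance.
import semopy

desc = """
# first-order acceptance dimensions measured by placeholder survey items
Attitudinal    =~ att1 + att2 + att3
SocioPolitical =~ sp1 + sp2 + sp3
Community      =~ com1 + com2 + com3
Market         =~ mkt1 + mkt2 + mkt3
Behavioral     =~ beh1 + beh2 + beh3
# second-order overall acceptance factor
Acceptance =~ Attitudinal + SocioPolitical + Community + Market + Behavioral
"""

model = semopy.Model(desc)
# survey_df would hold one column per item and one row per respondent:
# model.fit(survey_df)
# print(model.inspect())   # loadings, variances, and fit statistics
```

A multigroup version of this model, estimated separately for the five respondent groups, would then allow the between-group comparisons described above.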

Keywords: hydrogen homes, social acceptance, consumer heterogeneity, heat decarbonization

Procedia PDF Downloads 114
51 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, which magnifies its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. These encompass discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, understood as the obligation to account for the effects AI generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to explain the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this accountability, given the complexity of AI systems and their effects, and then to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to identify the challenges posed by the complexity of AI, in which accountability is fragmented across a myriad of human and non-human actors, such as software and equipment, that ultimately contribute to the decisions taken; this fragmentation is multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The first is the non-neutrality of algorithmic systems: a revealing-ethics approach presents them as ethically non-neutral actors and calls for assigning responsibilities to these systems. The second is the dilution of responsibility induced by the multiplicity of, and distance between, the actors: decision-making is split between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. The third is the transparency of complex and scalable algorithmic systems, non-human actors that self-learn via big data. A second contribution involves leveraging Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging that algorithmic systems are not ethically neutral but are inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizational recursiveness, akin to the "transparency" of the system, promotes a systemic analysis that accounts for induced effects and guides the incorporation of modifications into the system to correct its deviations and drifts. In conclusion, this contribution serves as a starting point for contemplating the accountability of artificial intelligence systems despite the evident ethical implications and potential deviations.
Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 63
50 A Foucauldian Analysis of Postcolonial Hybridity in a Kuwaiti Novel

Authors: Annette Louise Dupont

Abstract:

Background and Introduction: Broadly defined, hybridity is a condition of racial and cultural ‘cross-pollination’ which arises as a result of contact between colonized and colonizer. It remains a highly contested concept in postcolonial studies as it is implicitly underpinned by colonial notions of ‘racial purity.’ While some postcolonial scholars argue that individuals exercise significant agency in the construction of their hybrid subjectivities, others underscore associated experiences of exclusion, marginalization, and alienation. Kuwait and the Philippines are among the most disparate of contemporary postcolonial states. While oil resources transformed the former British Mandate of Kuwait into one of the world’s richest countries, enduring poverty in the former US colony of the Philippines drives a global diaspora which produces multiple Filipino hybridities. Although more Filipinos work in the Arabian Gulf than in any other region of the world, scholarly and literary accounts of their experiences of hybridization in this region are relatively scarce when compared to those set in North America, Australia, Asia, and Europe. Study Aims and Significance: This paper aims to address this existing lacuna by investigating hybridity and other postcolonial themes in a novel by a Kuwaiti author which vividly portrays the lives of immigrants and citizens in Kuwait and which gives a rare voice and insight into the struggles of an Arab-Filipino and European-Filipina. Specifically, this paper explores the relationships between colonial discourses of ‘black’ and ‘white’ and postcolonial discourses pertaining to ‘brown’ Filipinos and ‘brown’ Arabs, in order to assess their impacts on the protagonists’ hybrid subjectivities. Methodology: Foucault’s notions of discourse not only provide a conceptual basis for analyzing the colonial ideology of Orientalism, but his theories related to the social exclusion of the ‘mad’ also elucidate the mechanisms by which power can operate to marginalize, alienate, and subjectify the Other; a Foucauldian lens is therefore applied to the analysis of postcolonial themes and hybrid subjectivities portrayed in the novel. Findings: The study finds that Kuwaiti and Filipino discursive practices mirror those of former white colonialists and colonized black laborers and that these discursive practices combine with a former British colonial system of foreign labor sponsorship to create a form of governmentality in Kuwait which is based on exclusion and control. The novel’s rich social description and the reflections of the key protagonist and narrator suggest that such fiction has a significant role to play in highlighting the historical and cultural specificities of experiences of postcolonial hybridity in under-researched geographic, economic, social, and political settings. Whereas hybridity can appear abstract in scholarly accounts, the significance of literary accounts in which the lived experiences of hybrid protagonists are anchored to specific historical periods, places, and discourses is that contextual particularities are neither obscured nor dehistoricized. Conclusions: The application of Foucauldian theorizations of discourse, disciplinary, and biopower to the analysis of this Kuwaiti literary text serves to extend an understanding of the effects of contextually-specific discourses on hybrid Filipino subjectivities, as well as a knowledge of prevailing social dynamics in a little-researched postcolonial Arabian Gulf state.

Keywords: Filipino, Foucault, hybridity, Kuwait

Procedia PDF Downloads 128
49 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a key medium for digital business, according to a report by Gartner. The last 10 years represent a period of rapid advance in AI’s development, spurred by a confluence of factors, including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and a vibrant open-source ecosystem. AI is extending to a broader set of use cases and users and is gaining popularity because this improves its versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making appropriate business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, operational, concerns its relationship with the law. Both set up models of social behavior, but they differ in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a first step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 94
48 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model is compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice (once for model estimation and once for testing), a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive re-computation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV predictive density of the observation as a weighted average of the pointwise predictive densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their truncated or smoothed counterparts in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student’s t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for models with equal posterior variances in their lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
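To make the quantities concrete, the sketch below computes WAIC and a plain importance-sampling LOO approximation from a matrix of pointwise log-likelihoods; it illustrates the formulas described above, it is not the study's code, and the Pareto-smoothing step of PSIS is omitted for brevity.

```python
# Minimal numpy sketch of WAIC and (untruncated) importance-sampling LOO-CV from a
# matrix of pointwise log-likelihoods: loglik[s, i] = log p(y_i | theta_s),
# with S posterior draws and n observations.
import numpy as np
from scipy.special import logsumexp

def waic(loglik: np.ndarray):
    """Return (WAIC, lppd, p_waic)."""
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))   # log pointwise predictive density
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))        # effective number of parameters
    return -2 * (lppd - p_waic), lppd, p_waic

def is_loo(loglik: np.ndarray):
    """IS-LOO: the raw importance weights are the reciprocals of the pointwise
    predictive densities, so each LOO density is the weighted average of those
    densities, i.e. their harmonic mean over posterior draws."""
    S = loglik.shape[0]
    elpd_i = np.log(S) - logsumexp(-loglik, axis=0)         # log of the harmonic mean
    return np.sum(elpd_i), elpd_i

# usage with fake draws (shapes only, not real posterior output):
# loglik = np.random.randn(4000, 50) * 0.1 - 1.0
# print(waic(loglik)[0], is_loo(loglik)[0])
```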

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 392
47 Modeling Discrimination against Gay People: Predictors of Homophobic Behavior against Gay Men among High School Students in Switzerland

Authors: Patrick Weber, Daniel Gredig

Abstract:

Background and Purpose: Research has well documented the impact of discrimination and micro-aggressions on the wellbeing of gay men and, especially, adolescents. For the prevention of homophobic behavior against gay adolescents, however, the focus has to shift to those who discriminate: for the design and tailoring of prevention and intervention, it is important to understand the factors responsible for homophobic behavior such as, for example, verbal abuse. Against this background, the present study aimed to assess homophobic – in terms of verbally abusive – behavior against gay people among high school students. Furthermore, it aimed to establish the predictors of the reported behavior by testing an explanatory model. This model posits that homophobic behavior is determined by negative attitudes and knowledge. These variables are supposed to be predicted by the acceptance of traditional gender roles, religiosity, orientation toward social dominance, contact with gay men, and by the perceived expectations of parents, friends and teachers. These social-cognitive variables in turn are assumed to be determined by students’ gender, age, immigration background, formal school level, and the discussion of gay issues in class. Method: From August to October 2016, we visited 58 high school classes in 22 public schools in a county in Switzerland, and asked the 8th and 9th year students on three formal school levels to participate in a survey about gender and gay issues. For data collection, we used an anonymous self-administered questionnaire filled in during class. Data were analyzed using descriptive statistics and structural equation modelling (Generalized Least Squares estimation). The sample included 897 students, 334 in the 8th and 563 in the 9th year, aged 12–17, 51.2% female, 48.8% male, and 50.3% with an immigration background. Results: A proportion of 85.4% of participants reported having made homophobic statements in the 12 months before the survey, 4.7% often or very often. Analysis showed that respondents’ homophobic behavior was predicted directly by negative attitudes (β=0.20), as well as by the acceptance of traditional gender roles (β=0.06), religiosity (β=–0.07), contact with gay people (β=0.10), expectations of parents (β=–0.14) and friends (β=–0.19), gender (β=–0.22), and having a South-East-European or Western- and Middle-Asian immigration background (β=0.09). These variables were predicted, in turn, by gender, age, immigration background, formal school level, and discussion of gay issues in class (GFI=0.995, AGFI=0.979, SRMR=0.0169, CMIN/df=1.199, p>0.213, adj. R2=0.384). Conclusion: Findings evidence a high prevalence of homophobic behavior in the responding high school students. The tested explanatory model explained 38.4% of the variance in the assessed homophobic behavior. However, the data did not fully support the model. Knowledge did not turn out to be a predictor of behavior. Except for the perceived expectations of teachers and orientation toward social dominance, the social-cognitive variables were not fully mediated by attitudes. Equally, gender and immigration background predicted homophobic behavior directly. These findings demonstrate the importance of prevention and also provide leverage points for interventions against anti-gay bias in adolescents – also in social work settings such as, for example, school social work, open youth work, or foster care.
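As an illustration of the kind of path model described above, the sketch below specifies the mediation structure in lavaan-style syntax for the Python library semopy; the variable names are hypothetical, and the authors' own analysis used a different SEM package with Generalized Least Squares estimation.

```python
# Hedged sketch of the hypothesized mediation structure (not the authors' model file).
# All column names are hypothetical stand-ins for the survey measures.
from semopy import Model
import pandas as pd

PATH_MODEL = """
# attitudes and knowledge as mediators of the social-cognitive predictors
neg_attitudes ~ gender_roles + religiosity + sdo + contact + exp_parents + exp_friends + exp_teachers
knowledge ~ gender + age + immigration + school_level + class_discussion

# homophobic (verbally abusive) behaviour: mediated and direct paths
behaviour ~ neg_attitudes + knowledge + gender_roles + religiosity + contact + exp_parents + exp_friends + gender + immigration
"""

model = Model(PATH_MODEL)
# model.fit(pd.read_csv("survey.csv"), obj="GLS")  # GLS objective, mirroring the study's estimator
# print(model.inspect())                           # path coefficients for each regression
```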

Keywords: discrimination, high school students, gay men, predictors, Switzerland

Procedia PDF Downloads 329
46 Bridging Minds, Building Success Beyond Metrics: Uncovering Human Influence on Project Performance: Case Study of University of Salford

Authors: David Oyewumi Oyekunle, David Preston, Florence Ibeh

Abstract:

The paper provides an overview of the impact of the human dimension in project management and team management on project outcomes, which increasingly affects the performance of organizations. Recognizing its crucial significance, the research focuses on analyzing the psychological and interpersonal dynamics within project teams. This research is highly significant in the dynamic field of project management, as it addresses important gaps and offers vital insights that align with the constantly changing demands of the profession. A case study was conducted at the University of Salford to examine how human activity affects project management and performance. The study employed a mixed methodology to gain a deeper understanding of the real-world experiences of the subjects and project teams. Data analysis procedures to address the research objectives included a deductive approach, which involves testing a clear hypothesis or theory, as well as descriptive analysis and visualization. The survey comprised a sample of 40 participants out of 110 project management professionals, including staff and final-year students in the Salford Business School, recruited using a purposeful sampling method. To mitigate bias, the study ensured diversity in the sample by including both staff and final-year students. A smaller sample size allowed for more in-depth analysis and a focused exploration of the research objective. Conflicts, for example, are intricate occurrences shaped by a multitude of psychological stimuli and social interactions and may have either a detrimental or a positive effect on project performance and project management productivity. The study identified conflict elements, including culture, environment, personality, attitude, individual project knowledge, team relationships, leadership, and team dynamics among team members, as crucial human factors to address in order to minimize conflict. The findings offer project professionals valuable insights that can help them create a collaborative and high-performing project environment. In uncovering human influence on project performance, the study shows that effective communication, optimal team synergy, and a keen understanding of project scope are necessary for projects to attain exceptional performance and efficiency. To achieve the aims of this study, it was acknowledged that productive team dynamics and strong group cohesiveness are crucial for managing conflicts in a beneficial and forward-thinking manner. Addressing the identified human influences will contribute to a more sustainable project management approach and offer opportunities for further exploration and potential contributions to both academia and practical project management.

Keywords: human dimension, project management, team dynamics, conflict resolution

Procedia PDF Downloads 105
45 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) microscopy are that large areas of a few square centimeters can be measured in minutes and that the high light intensity of the QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the laser direct infrared imaging (LDIR) 8700 system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and to provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable because, by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct “on-filter” analysis. This study explores the potential of combining the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations in these techniques that increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation, and excellent S/N, LDIR could also be implemented for various other sample types, such as food adulteration, coatings, laminates, fabrics, textiles, and tissues. This presentation will highlight a few of these and focus on the benefits of LDIR versus classical techniques.

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 66
44 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports

Authors: Carolyn Hatch, Laurent Simon

Abstract:

Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs – such as airports – are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and are increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to force traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and forced their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts.
This research has implications for: 1 - innovation theory, 2 - urban and transport policy, and 3 - organizational practice - within the mobility industry and across the economy.

Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs

Procedia PDF Downloads 181
43 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on board game rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing these extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as these datasets reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games, like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time, challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations. The designers and creators, in turn, produce rulebooks in which they express the joy of discovering the hidden potential of language, igniting the imagination and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to make use of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.
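As a simple illustration of how such a corpus can surface candidate neologisms, the sketch below flags frequent rulebook tokens absent from a reference wordlist; the file paths and wordlist source are assumptions, and the actual BGRC pipeline may differ.

```python
# Illustrative sketch (not the BGRC pipeline): extract candidate neologisms from a
# folder of rulebook text files by keeping tokens that do not occur in a reference lexicon.
import re
from pathlib import Path
from collections import Counter

def load_lexicon(path: str = "english_wordlist.txt") -> set:
    # hypothetical reference wordlist, one word per line
    return {w.strip().lower() for w in open(path, encoding="utf-8")}

def candidate_neologisms(rulebook_dir: str, lexicon: set, min_freq: int = 3) -> Counter:
    counts = Counter()
    for f in Path(rulebook_dir).glob("*.txt"):
        tokens = re.findall(r"[a-zA-Z][a-zA-Z'-]+", f.read_text(encoding="utf-8").lower())
        counts.update(t for t in tokens if t not in lexicon)
    # frequent out-of-lexicon forms become candidates for manual morphological analysis
    return Counter({w: c for w, c in counts.items() if c >= min_freq})

# usage (hypothetical folder name):
# lex = load_lexicon()
# print(candidate_neologisms("bgrc_rulebooks/", lex).most_common(20))
```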

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 99
42 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools

Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang

Abstract:

For the past few years, the Philippines has depended for most of its energy on oil, coal, and other fossil fuels. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs of the country have led to increasing efforts to promote and develop renewable energy. This research is part of a government initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to present the results in usable maps. This study focuses on the site suitability analysis of four renewable energy sources – biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. Site assessment is a key component in determining the most suitable locations for the construction of renewable energy power plants. The method combines technical resource assessment with environmental, social, and accessibility considerations in identifying potential sites by integrating two approaches: Multi-Criteria Decision Analysis (MCDA) and Geographic Information System (GIS) tools. For the MCDA, the Analytic Hierarchy Process (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, various experts from different fields were consulted – scientists, policy makers, environmentalists, and industrialists. Consulting a well-represented group of experts is important to avoid bias in the resulting hierarchy levels and weight matrices. AHP pairwise matrix computation is used to derive weights for each level from the experts’ feedback, while threshold values derived from related literature, international studies, and government regulations were validated with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision-support outputs into visual maps. In particular, this study uses Euclidean distance to compute distance values for each parameter, the Fuzzy Membership algorithm to normalize the Euclidean distance outputs, and the Weighted Overlay tool to aggregate the layers. Using the Natural Breaks algorithm, the suitability ratings of each map are classified into 5 discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are formed by grouping similar values together, with class boundaries placed where the differences between values are largest. Results show that over the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice as the most suitable at 75.76%, whereas wind has the lowest suitability percentage at 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.
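The AHP weighting step described above can be illustrated with a short sketch that derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and checks the consistency ratio; the example matrix and criteria are invented for illustration and are not the study's expert judgments.

```python
# AHP weight derivation: weights are the normalized principal eigenvector of the
# expert pairwise-comparison matrix; CR below ~0.10 is conventionally acceptable.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random index

def ahp_weights(pairwise: np.ndarray):
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                              # normalized principal eigenvector = weights
    n = pairwise.shape[0]
    ci = (vals.real[k] - n) / (n - 1)         # consistency index
    cr = ci / RI[n]                           # consistency ratio
    return w, cr

# hypothetical judgments for three criteria (e.g. slope, distance to grid, resource density)
A = np.array([[1.0, 3.0, 0.5],
              [1/3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])
weights, cr = ahp_weights(A)
print(weights, cr)
```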

Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS

Procedia PDF Downloads 149
41 Electrical Transport through a Large-Area Self-Assembled Monolayer of Molecules Coupled with Graphene for Scalable Electronic Applications

Authors: Chunyang Miao, Bingxin Li, Shanglong Ning, Christopher J. B. Ford

Abstract:

While it is challenging to fabricate electronic devices close to atomic dimensions in conventional top-down lithography, molecular electronics promises to help maintain the exponential increase in component densities by using molecular building blocks to fabricate electronic components from the bottom up. It offers smaller, faster, and more energy-efficient electronic and photonic systems. A self-assembled monolayer (SAM) of molecules is a layer of molecules that self-assembles on a substrate. They are mechanically flexible, optically transparent, low-cost, and easy to fabricate. A large-area multi-layer structure has been designed and investigated by the team, where a SAM of designed molecules is sandwiched between graphene and gold electrodes. Each molecule can act as a quantum dot, with all molecules conducting in parallel. When a source-drain bias is applied, significant current flows only if a molecular orbital (HOMO or LUMO) lies within the source-drain energy window. If electrons tunnel sequentially on and off the molecule, the charge on the molecule is well-defined and the finite charging energy causes Coulomb blockade of transport until the molecular orbital comes within the energy window. This produces ‘Coulomb diamonds’ in the conductance vs source-drain and gate voltages. For different tunnel barriers at either end of the molecule, it is harder for electrons to tunnel out of the dot than in (or vice versa), resulting in the accumulation of two or more charges and a ‘Coulomb staircase’ in the current vs voltage. This nanostructure exhibits highly reproducible Coulomb-staircase patterns, together with additional oscillations, which are believed to be attributed to molecular vibrations. Molecules are more isolated than semiconductor dots, and so have a discrete phonon spectrum. When tunnelling into or out of a molecule, one or more vibronic states can be excited in the molecule, providing additional transport channels and resulting in additional peaks in the conductance. For useful molecular electronic devices, achieving the optimum orbital alignment of molecules to the Fermi energy in the leads is essential. To explore this, a drop of ionic liquid is employed on top of the graphene to establish an electric field at the graphene, which screens poorly, gating the molecules underneath. Results for various molecules with different alignments of Fermi energy to HOMO have shown highly reproducible Coulomb-diamond patterns, which agree reasonably with DFT calculations. In summary, this large-area SAM molecular junction is a promising candidate for future electronic circuits. (1) The small size (1–10 nm) of the molecules and good flexibility of the SAM lead to the scalable assembly of ultra-high densities of functional molecules, with advantages in cost, efficiency, and power dissipation. (2) The contacting technique using graphene enables mass fabrication. (3) Its well-observed Coulomb blockade behaviour, narrow molecular resonances, and well-resolved vibronic states offer good tuneability for various functionalities, such as switches, thermoelectric generators, and memristors, etc.
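The Coulomb-blockade picture described above can be illustrated with a toy constant-interaction calculation: sequential transport is allowed only when an addition level falls inside the source-drain bias window, which traces out diamond-shaped blockaded regions in the gate-bias plane. The parameter values below are arbitrary illustrations, not fitted to the reported devices.

```python
# Toy constant-interaction sketch of Coulomb diamonds (illustrative parameters only).
import numpy as np

E_C = 0.10      # charging energy in eV (assumed)
ALPHA = 0.05    # gate lever arm (assumed)
LEVELS = [n * E_C for n in range(1, 6)]           # addition energies for successive charges

def conducting(vg: float, vsd: float) -> bool:
    """True when some addition level sits inside the symmetric bias window."""
    lo, hi = sorted((-vsd / 2, vsd / 2))
    return any(lo <= mu - ALPHA * vg <= hi for mu in LEVELS)

vg = np.linspace(0, 12, 400)
vsd = np.linspace(-0.15, 0.15, 200)
diamonds = np.array([[conducting(g, v) for g in vg] for v in vsd])
# Plotting `diamonds` (e.g. with matplotlib imshow) shows blockaded diamond-shaped
# regions separated by conducting stripes, as in measured stability diagrams.
```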

Keywords: molecular electronics, Coulomb blockade, electron-phonon coupling, self-assembled monolayer

Procedia PDF Downloads 63
40 Assessing the Experiences of South African and Indian Legal Profession from the Perspective of Women Representation in Higher Judiciary: The Square Peg in a Round Hole Story

Authors: Sricheta Chowdhury

Abstract:

To require a woman to choose between her work and her personal life is the most acute form of discrimination that can be meted out against her. No woman should be made to choose between motherhood and her career at the Bar, yet that is the most detrimental discrimination that has been happening at the Indian Bar, which no one has questioned so far. The falling number of women in practice is a reality that garners little attention, despite the sharp rise in women studying law who are then unable to continue in the profession. Moving from a colonial misogynist whim to a post-colonial “new-age construct of the Indian woman” façade, the policymakers of the Indian Judiciary have done nothing so far to decolonize the institution from its rudimentary understanding of ‘equality of gender’ when it comes to the legal profession. Therefore, while Indian jurisprudence was (and is) swayed by the sweeping effect of transformative constitutionalism in its understanding of equality as enshrined in the Indian Constitution, one cannot help but question why the legal profession remained outside the reach of substantive equality. The airline industry’s discriminatory policies were not spared from criticism, nor were policies restricting women’s employment in establishments serving liquor (the Anuj Garg case), but judicial practice did not question stereotypical gender bias and unequal structural practices until recently. This necessitates examining the existing Bar policies and the steps taken by the regulatory bodies in assessing the situations that favor or work against the purpose of furthering women’s issues in present-day India. From a comparative feminist point of view, South Africa’s pro-women Bar policies are worth assessing for their applicability and extent in promoting inclusivity at the Bar. This article intends to tap into these two countries’ potential in carving a niche in giving women an equal platform to play a substantive role in designing governance policies through the Judiciary. The article analyses the current gender composition of the legal profession while endorsing the concept of substantive equality as a requisite in designing an appropriate process for the appointment of judges. It studies the theoretical framework on gender equality, examines the international and regional instruments, and analyses the scope of welfare policies that Indian legal and regulatory bodies can undertake towards a transformative initiative in re-modeling the Judiciary into a more diverse and inclusive institution. The methodology employs a comparative and analytical understanding of doctrinal resources. It makes quantitative use of secondary data and qualitative use of primary data collected to determine the present status of Indian women legal practitioners and judges. With respect to quantitative data, statistics on the representation of women as judges, chief justices, and senior advocates, drawn from official websites from 2018 to the present, are utilized. With respect to qualitative data, the results of structured interviews conducted through open- and close-ended questions with retired lady judges of the higher judiciary and senior advocates of the Supreme Court of India, contacted through snowball sampling, are utilized.

Keywords: gender, higher judiciary, legal profession, representation, substantive equality

Procedia PDF Downloads 83
39 Neoliberal Settler City: Socio-Spatial Segregation, Livelihood of Artists/Craftsmen in Delhi

Authors: Sophy Joseph

Abstract:

The study uses the concept of the ‘settler city’ to understand the nature of peripheralization that a neoliberal city initiates. The settler city designs powerless communities without inherent rights, title, and sovereignty. Kathputli Colony, home to generations of artists/craftsmen who have kept the heritage of arts and crafts alive, has undergone the eviction of its population from urban space. The proposed study, ‘Neoliberal Settler City: Socio-spatial segregation and livelihood of artists/craftsmen in Delhi’, would problematize the settler city as a colonial technology. The colonial regime has ‘erased’ the ‘unwanted’ as primitive and swept them to the peripheries of the city. This study would also highlight how structural change in the political economy has undermined their crafts/arts by depriving them of the opportunity to practice and perform them with dignity in urban space. The interconnections between citizenship and the In-Situ Private-Public Partnership in the Kathputli rehabilitation have become part of academic discussion. However, a comprehensive study connecting the inherent characteristics of the neoliberal settler city, the trajectory of the political economy of unorganized workers (artists/craftsmen), and the legal containment and exclusion leading to the dispossession and marginalization of communities from the city site is needed to contextualize the trauma of spatial segregation. This study would deal with the dominant political, cultural, social, and economic behavior of the structure in state formation, the accumulation of property, and the design of urban space, fueled by the segregation of marginalized/unorganized communities and the disowning of the ‘footloose proletariat’, the migrant workforce. The methodology involves qualitative research among the communities, and the fieldwork (oral testimonies and personal accounts) becomes the primary material for theorizing these realities. Secondary materials, in the form of archival records on the historical evolution of Delhi as a planned city, would be used. As the study also adopts a ‘narrative approach’ to qualitative inquiry, the life experiences of craftsmen/artists as performers and the emotional trauma of losing their livelihood and space form an important record for understanding the instability and insecurity that marginalization and development inflict on the urban poor. The study attempts to prove that though there was a change in political tradition from colonialism to constitutional democracy, the new state still follows a policy of segregation and dispossession of these communities. It is this dispossession from space, deprivation of livelihood, and non-consultative process of rehabilitation that reflect the neoliberal approach of the state and also constitute the critical findings of the study. The study would apply a critical spatial lens, analyzing ethnographic and sociological data, representational practices, and development debates to understand the ‘urban otherization’ of craftsmen/artists. It seeks to develop a conceptual framework for understanding the resistance of communities against the primitivity attached to them and for decolonizing the city. This would help contextualize the demand for declaring Kathputli Colony a ‘heritage artists’ village’. The conceptualization and contextualization would help to argue for the communities’ right to the city, collective rights to property and services, and self-determination. The aspirations of the communities also help to draw a normative orientation towards decolonization.
It is important to study this site as part of the ‘inclusive cities’ framework because cities are rarely noted as important sites of ‘community struggles’.

Keywords: neoliberal settler city, socio-spatial segregation, the livelihood of artists/craftsmen, dispossession of indigenous communities, urban planning and cultural uprooting

Procedia PDF Downloads 130
38 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity, and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of stock status used by fisheries managers to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity, and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver, and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in sex ratio was found in favour of females with fork length (LF) <100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar, from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF at the vitellogenic-3 oocyte stage maturity threshold. Albacore spawn on average every 2.2 days within the spawning region during the spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight. Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww), with different proportions of phospholipids (PL), wax esters (WE), triacylglycerol (TAG), and sterols (ST). The highest TL was observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE, and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww) and acted as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. The TL results suggest that albacore could be viewed predominantly as a capital breeder, relying mostly on lipids stored before the onset of reproduction and deriving little additional energy from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. These reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote the sustainability of the fishery.
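As an illustration of how a length-at-maturity such as L50 is commonly estimated, the sketch below fits a binomial GLM of maturity against fork length and solves for the length at 50% maturity; the data file and column names are hypothetical, and this is not necessarily the authors' exact procedure.

```python
# Hedged sketch: logistic maturity ogive and L50 from individual maturity data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("albacore_females.csv")   # hypothetical columns: fork_length_cm, mature (0/1)
X = sm.add_constant(df["fork_length_cm"])
fit = sm.GLM(df["mature"], X, family=sm.families.Binomial()).fit()

b0, b1 = fit.params["const"], fit.params["fork_length_cm"]
L50 = -b0 / b1                              # fork length at which P(mature) = 0.5
print(f"L50 = {L50:.1f} cm fork length")
```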

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 260